Thursday, July 21, 2005
I read a great paper from Gary Klein in Cognition, Technology and Work on problem detection. The great thing about the paper is that even though I had not thought about much of what he discusses, everything he says can be explained by my cognition model. I love it when that happens.
Here is the essence. Traditional problem detection is modeled as an accumulation of evidence that things are not as they should be; when the evidence reaches a threshold, a problem is detected. Gary says that this is just one case. Instead, problem detection should be modeled as a general sense-making process. We are always trying to maintain situation awareness, and problem detection is basically a shift from one schema to another to explain the current situation. There are many ways that this shift can develop.
The Recognition Primed Decision making model contradicts the traditional evaluation-of-alternatives model of decision making because we don't activate multiple schemas and compare them. Instead, the evidence activates cell assemblies until a schema that matches the pattern reaches threshold.
From a problem detection point of view, we start out with this situation schema activated. Contradictory evidence can be experienced and handled in several ways. In the traditional special case, small pieces of contradictory evidence can be modeled as cell assemblies with inhibitory connections to the active schema. If this inhibition accumulates, it can activate the mismatch schema and cause the person to reconsider the situation. The problem with this is that we have a strong tendency to explain away, or even ignore, contradictory evidence. So unless a huge amount of contradictory evidence is experienced, the reconsideration may never happen.
Another way that problems can be identified is the detection of one large contradictory symptom. This activates a strong inhibitory link to the schema and activates the mismatch all at once. This is more likely because it is harder to explain away the strong contradiction.
A third way that problems can be identified is the detection of a small contradictory symptom that the person chooses to believe and investigate. This can happen because of what Gary calls stance. He defines stance as the emotional state that the person starts out with. If someone is generally suspicious or has external incentives to identify problems, the mismatch schema may start out primed. This facilitates the activation of the mismatch schema from less contradictory evidence.
One final point that I want to discuss is his contention that when a person repeatedly explains away anomalies, he/she is less likely to recognize future contradictory evidence. What is happening here is that the mismatch schema is itself being inhibited. This can also occur based on stance (a generally accepting personality, external incentives to maintain the status quo, fear of one's own inexperience, etc.). If the mismatch schema is inhibited, it cannot reach threshold unless something very strongly contradictory is detected.
For any of these cases, even when there is a gradual accumulation of small inhibitory links, the shift is not a conscious process. At one point in time, the pattern recognition process has one schema activated; at some later point, another schema becomes activated and inhibits the original. It is a binary shift from one to the other.
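The dynamics described above can be sketched as a toy accumulator model. To be clear, this is my own illustration, not Klein's formalism: the threshold, the discount for explaining away small symptoms, and the stance bias are all numbers I invented for the example.

```python
# Toy accumulator model of problem detection as a schema shift.
# All parameters are invented for illustration; this is not Klein's
# formalism, just a sketch of the dynamics described above.

def detect_problem(evidence, threshold=0.8, explain_away=0.5, mismatch_bias=0.0):
    """Each piece of evidence has a contradiction strength in [0, 1].
    Small symptoms are partially 'explained away' (discounted); large
    symptoms get full weight. A positive mismatch_bias models a
    suspicious stance (mismatch schema starts out primed); a negative
    one models a stance that inhibits the mismatch schema.
    Returns the index where the schema shift occurs, or None."""
    mismatch_activation = mismatch_bias
    for i, strength in enumerate(evidence):
        if strength < 0.8:                      # small symptom: discounted
            mismatch_activation += strength * (1 - explain_away)
        else:                                   # large symptom: full weight
            mismatch_activation += strength
        if mismatch_activation >= threshold:    # binary shift to mismatch schema
            return i
    return None

# One large symptom triggers the shift immediately:
print(detect_problem([0.9]))                          # → 0
# Small symptoms must accumulate, because each one is partly explained away:
print(detect_problem([0.3] * 6))                      # → 5
# A primed (suspicious) stance shifts on less evidence:
print(detect_problem([0.3] * 3, mismatch_bias=0.4))   # → 2
# An inhibited mismatch schema may never shift at all:
print(detect_problem([0.9], mismatch_bias=-0.5))      # → None
```

Note that the shift itself is binary, as in the last paragraph: nothing is reported until the mismatch activation crosses threshold, however gradually the inhibition accumulated.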
There is a lot more of interest in the paper. I recommend reading it.
Klein, G., Pliske, R., & Crandall, B. (2005). Problem detection. Cognition, Technology & Work, 7, 14-28.
Wednesday, July 13, 2005
A company that sells outdoor equipment found that when people are shopping for kayaks, they are much more interested in performance data than when they shop for clothing, where they want information on popularity data. So on their web site, they have two different layouts. For equipment, they highlight the performance data. For clothing they highlight the popularity data. Simple human factors design - they draw attention to the information that has stronger connections in the customers' product-need schema. If target customers buy clothing to look cool, then their need is for cool clothes. So the site design needs to quickly steer them towards the relevant coolness factors.
Cookie cutter design does not work.
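The idea above can be sketched in a few lines: key the highlighted information on the product category. The category names and data fields here are hypothetical examples, not the retailer's actual schema.

```python
# Choose which product data to highlight, keyed on category.
# Categories and field names are invented for illustration.

HIGHLIGHT_BY_CATEGORY = {
    "kayaks":   ["weight", "length", "max_load"],        # performance data
    "clothing": ["rating", "review_count", "bestseller_rank"],  # popularity data
}

def highlighted_fields(category):
    """Return the fields to surface first for a product category,
    falling back to a generic layout for unknown categories."""
    return HIGHLIGHT_BY_CATEGORY.get(category, ["price", "description"])

print(highlighted_fields("kayaks"))    # → ['weight', 'length', 'max_load']
```

The point is that the layout decision lives in data, not in a single hard-coded template, which is exactly what cookie cutter design gets wrong.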
Thursday, July 07, 2005
We are all becoming bloggers (heck, if a luddite like me will do it, anyone will). I decided to report on interesting information about human factors when I see it. Other people blog about politics, or just keep daily journals online. The net (pun intended) effect is that anyone can find information about anything. But there are very few indications of the quality of these posts. The major check on blog quality is that anyone can post a response right there on the blog. So if there are lots of posted objections, the information may not be true.
But from a human factors perspective, we can see a potential limitation of this quality check. How many of us read the responses? I read lots of blogs, but I don't even glance at the responses. They could be posting complete garbage and I would never know. I judge based only on the credentials of the author. And in some cases, I judge based on whether I like what I am reading. If I agree with it, it must be right. So we all gravitate towards the blogs we agree with. There are fewer and fewer disputations because no one of a competing opinion is reading the blog.
This is one of the reasons why the intensity of political debate has gone up exponentially. Learning takes place when we read something that challenges our existing schemas, intelligently analyze the content, and then either modify our schemas or reject the new information. What we are doing now is simply reading content that supports our existing schemas, thereby strengthening them. This is a weak form of learning, and if we never look for competing perspectives, it is very ineffective. Plus, our schemas get so strong they become impossible to overcome. We become so set in our opinions that everyone with a competing opinion seems like a complete fool.
Wednesday, July 06, 2005
There is a post at Good Experience (http://www.goodexperience.com/blog/archives/000223.php) about some results of an evaluation of some common ecommerce web sites. Here are the general problems they found:
1. Content groupings that reflect the company's view of the business, not the customer's view.
I don't know why this is still a problem. We have known for years that it is bad design. My undergrad students figure it out by week 3. For some great proof, check out Julian Sanchez's master's thesis at Florida International University.
2. Navigation that hides important categories
Here is another no-brainer. If there is content that people want to see, why hide it? Make it salient!!
3. Confusing product images
I suspect that this one is as much about poor marketing as it is poor usability. If companies either rush to get their photos out there, or don't invest in good technology, the images won't do what they are supposed to do, which is to support product evaluation. If customers can't tell what the product looks like (stylistically or functionally), then how do they know if they should buy it? The specs for the images should be pretty easy to figure out and test.
4. Missing product information
This is the same issue as #3. Analyze the customers' decision-making processes (there will be many) and make available any information that they need. I have debated with Eric Goldman (a marketing professor at Marquette University) about whether we give customers the information they think they want, or just the information that would really help them make a better decision. He is very sure it should be the latter. I agree to some extent, but customer satisfaction increases with the former.
5. Important information not being presented at contextually relevant points in the process
Strangely enough, this one I understand. It is difficult to know the decision-making processes of the customer, primarily because every customer is different. Some will want to filter on price first; others will filter on price last. So when do you present it? The trick is to be flexible: let the customer decide in real time (using dynamic page design and product databases) what data to access.
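One way to sketch that flexibility (with invented product records and filter predicates) is to apply whatever filters the customer chooses, in whatever order they choose, against the product database:

```python
# Let the customer pick filters and their order at run time.
# The products and predicates are invented for illustration.

products = [
    {"name": "Kayak A", "price": 450, "weight": 22},
    {"name": "Kayak B", "price": 900, "weight": 18},
    {"name": "Kayak C", "price": 600, "weight": 25},
]

def apply_filters(items, filters):
    """filters is an ordered list of predicates chosen by the customer;
    the final result set is the same regardless of order, but applying
    them one at a time lets the page show intermediate counts."""
    for keep in filters:
        items = [p for p in items if keep(p)]
    return items

# One customer filters on price first; another would start with weight.
cheap_and_light = apply_filters(products, [lambda p: p["price"] < 700,
                                           lambda p: p["weight"] < 24])
print([p["name"] for p in cheap_and_light])    # → ['Kayak A']
```

The design choice is that no filter order is privileged by the page; the customer's own process drives the sequence.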
6. Difficult product-comparison functions
This one is partly a lack of human factors expertise, and partly a love of complex technology that looks cool. Many companies don't realize how important human factors is in product comparison. They either present huge tables of data that are hard to parse, or they show each product separately and require customers to pogo stick (see Jared Spool's work with UIE) up and down the hierarchy to read it all.
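A sketch of the alternative: build one side-by-side view from the individual product records, so the customer sees all candidates at once instead of pogo sticking between pages. The attribute names here are hypothetical.

```python
# Build a side-by-side comparison from individual product records.
# Attribute names are invented for illustration.

def comparison_table(products, attributes):
    """Rows are attributes, columns are products, so the customer can
    scan one attribute across all candidates at a glance."""
    header = ["attribute"] + [p["name"] for p in products]
    rows = [header]
    for attr in attributes:
        rows.append([attr] + [str(p.get(attr, "-")) for p in products])
    return rows

kayaks = [
    {"name": "Kayak A", "price": 450, "weight": 22},
    {"name": "Kayak B", "price": 900, "weight": 18},
]
for row in comparison_table(kayaks, ["price", "weight"]):
    print("\t".join(row))
```

Keeping the comparison small (a few attributes, a few products) is what makes it parseable; dumping every attribute back in just recreates the huge-table problem.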
So in the end, it just comes down to using a good design process, understanding your customers' cognitive processes, and testing, testing, testing.