Friday, November 04, 2005
On my way to Stanford Business School for a meeting last week, I got off the Caltrain at the Palo Alto station. It was a typical commuter rail station layout: tracks on two sides, running lengthwise past the station in both directions. At one end was a bus stop; at the other, the parking lot. So to get to a street, you had to walk about 500 feet either across the parking lot or past the bus stop. With my Google Map in hand, I looked around for a sign to tell me which street was in which direction, or at least which way was north or south. No luck.
One of the basic principles I use in my HF design work is to minimize risk to the user. In this case, there is significant risk, which increases the need for good signage.
1. There is time risk: it takes a long time to walk across the parking lot, only to discover it is the wrong way and you have to walk all the way back and then past the bus stop.
2. There is effort risk: all this walking is tough with two suitcases.
3. There is safety risk: my train arrived at midnight and there were some shady characters around.
4. There is independence risk (see previous post): In order to find out where to go, I had to approach a stranger and ask.
5. There is dignity risk (see previous post): I looked very foolish walking around the station for ten minutes trying to find a sign or at least a street name, all the while studying my Google Map.
Context is also important in this case. When designing a train station, I think it is obvious that many users will be tired from traveling (the Caltrain line connects to three airports as well). The trains run late, so many passengers will be doing this at night, and probably hungry (I know I was). And train stations at places like Stanford are even more sensitive because there are lots of people arriving for the first time (conferences, new students, visitors, etc.) who will be unfamiliar with the area.
Tuesday, October 25, 2005
I was reading a great ethnographic paper on how people use products and what happens when products don’t meet our needs. In one case, an elderly user had several clocks in one place. A digital clock was big enough to read, but she didn’t know how to set the alarm. An analog clock was not readable, but she could set the alarm. The things we will do to accomplish what we need!! Interestingly, she did not blame the product designers, but instead felt incompetent.
Another example was a short person who could not reach the cabinet over the stove. So she grabbed a broom and knocked the item she needed out of the cabinet. In the process, she knocked a few others out too, creating a fire hazard when the stove was on (or if she forgot to remove it).
The purpose of the paper was to highlight two very important user requirements that we often forget about:
Independence – the product should let you complete your tasks without having to rely on someone else for help.
Dignity – the product should not make you look foolish when completing whatever tasks you need to complete. Even when someone can accomplish a task quickly and accurately, the product fails when the user looks foolish.
How many of us even measure these when we test our products? How many of us have our own stories like the two I described above? I know I do!!
Saturday, October 01, 2005
I heard this quote last week at the HFES Conference. It is so typical, but I like the contrast with the other clichés. So many systems add more and more features with every new version or model that comes out. It's like the Peter Principle: if the system is usable, more features are added. When it is finally not usable anymore, they stop.
I heard another quote, this one from a doctor during laparoscopic surgery. He said "move it down, up the axis." What the heck does this mean? The other doctor didn't know either. The presentation was a study of communication during surgery, and there were lots of other examples like this. Even simple statements like "move it to the left" are ambiguous because you can't tell whether he meant his left, your left, or the patient's left. It makes me very afraid to get sick.
- I bumped into a usability engineer I had met at a previous conference and got a lead on a consulting gig for his company.
- I met an old colleague who is writing a book and we discussed me perhaps co-authoring it with him.
- I met a contact from NASA who may be able to use my help on a project in January.
- I bumped into a usability engineer who I have known for several years and mentioned that I would be in his city next month. He asked me to give a presentation to his group, which could possibly lead to a future collaboration.
- I met a Canadian consultant who may be able to introduce me to some Canadian companies with expertise that I need.
- In a hallway discussion with my grad school advisor, he gave me the contact information for a Korean lab that is working on something I am also trying to do. I never even thought to ask him.
I also helped out a few people:
- I found out a student chapter needed help creating some marketing materials and I offered to help.
- I asked a colleague to apply for a job at my institution, and he decided to apply.
- I found a summer internship for one of my students.
- I helped a student at another school decide on a grad school to attend.
And all this was just on the first day!! On the other hand, some things go nowhere. At one lunch I attended, I spent 90 minutes talking to a guy about a potential project and he wouldn't even give me the time of day. And who knows what will happen with the 10 items listed above. But it was worth the five days in
Thursday, September 22, 2005
I have recently become interested in the concept of search repositories. The idea is that when we are searching, a lot of information is found and then lost. How many times have you skipped a link that didn't seem relevant, only to go back later and try (unsuccessfully) to find it? This is also true of links that we find in content pages that we navigate through during the search. It can even be true of content that is not a link.
Wouldn't it be great if there were some unobtrusive repository somewhere that we could review when needed, showing an organized display of everything we have seen relevant to a search activity, even if it spans several sessions? Even better if we could use it to create reports, divide it into subsections for later use, conduct relevance analysis, mine it for internal consistencies, share it with others, have it automatically updated, and set up alerts when new content is added.
Anyone know of a product that does this? Or want to help me create one?
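To make the idea concrete, here is a minimal sketch of such a repository: everything seen during a search activity gets logged with that activity, so it can be reviewed, grouped, and re-found later, even across sessions. All class names and fields below are hypothetical, not any real product's API.

```python
# A toy "search repository": log everything seen during a search activity
# so a skipped link can be re-found later, even in another session.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class SeenItem:
    url: str
    title: str
    activity: str                # one search activity can span sessions
    seen_at: datetime = field(default_factory=datetime.now)
    notes: str = ""

class SearchRepository:
    def __init__(self):
        self.items = []

    def record(self, url, title, activity, notes=""):
        self.items.append(SeenItem(url, title, activity, notes=notes))

    def by_activity(self, activity):
        """An organized view of everything seen for one search activity."""
        return [i for i in self.items if i.activity == activity]

    def find(self, keyword):
        """Re-find a page that was skipped earlier."""
        kw = keyword.lower()
        return [i for i in self.items
                if kw in i.title.lower() or kw in i.notes.lower()]

repo = SearchRepository()
repo.record("http://example.com/a", "Kayak spec sheet", "kayak-research")
repo.record("http://example.com/b", "Kayak reviews", "kayak-research",
            notes="skipped at first; might be relevant after all")
```

Reports, subsections, sharing, and alerts would all be layers on top of a store like this; the hard part is capturing the items unobtrusively in the first place.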
Tuesday, September 20, 2005
At the Kauffman entrepreneurship workshop last week, a speaker commented that once you have done something entrepreneurial, you become aware of opportunities everywhere you go. This is a great example of Recognition-Primed Decision making (RPD) in action. A success of that nature (with lots of personal risk, hard work, time, etc.) will create a strong, large, and densely interconnected schema. So it is very likely that new experiences will be evaluated using the entrepreneurship schema as the basis if they have even a little in common with it.
The same thing happens with my students after their HF class. Their class project is also very time consuming, effortful, challenging, and often fun. So for the rest of their lives, they see solvable usability problems everywhere they go.
I was thinking about the way many of the congregations I have attended pray and why I often find it unrewarding. The congregants get really used to saying the same prayers and singing the same songs over and over until it all becomes aggregated into a single composite automatic schema. It becomes possible to say the prayer or sing the song with no attention to the words. After a while, you can ask the "expert" what the words MEAN, and they would have to think about it. They might not ever even have learned that part. The schema have strong links to the sensory, and perhaps the emotional, areas of the brain, but they lose the links to the semantic.
Maybe I am different, but I would rather concentrate on the semantics.
I just got back from a workshop on how to teach entrepreneurship using experiential learning techniques. It was sponsored by the Kauffman Foundation, one of the major non-profit sources of entrepreneurship support. The workshop was excellent. But why do I bring it up in my HF blog? Two reasons:
1. As anyone in the HF field should know, experiential learning is powerful for many cognitive reasons. Experiential learning is more salient, so the schema that you are trying to create get activated much more strongly, which leads to more learning. Experiential learning is also linked to more existing schema, so it is easier to encode, easier to store, and easier to recall when needed. Experiential learning is also more motivating because it is fun. At least when it is done right. I have seen the Phil Donahue method, where the prof walks up to a student who clearly doesn't want to respond and forces the issue. I don't recommend this unless you have amazing interpersonal skills and a soft touch.
2. The second reason is that we need to know more about entrepreneurship in our field. Entrepreneurship is about pursuing opportunities, leveraging resources (especially other people's resources), and meeting market needs. How many of us could use more of this in our work?
Tuesday, September 13, 2005
One in particular caught my attention. Out of the 100 largest global companies, only 8 met usability requirements and only 5 met advanced usability requirements. How sad is this? As an investor as well as a usability engineer, it makes me wonder. The best investing is based on sound research, but how can you do sound research if the sources are not well done? Maybe I'll put my money under my mattress after all.
It makes me think of the book chapter that I just wrote for Mike Wogalter's new book on Warnings. Not to plug myself (well, maybe a little :-D), but part of what I cover there is the risk of complicated legal documents and contracts. There is not a lot of research on how to make these easy to read, but there is enough to meet the corporate annual report guidelines (see http://www.emediawire.com/releases/2005/9/emw282111.htm for the list). It is really sad that they don't.
Thursday, September 08, 2005
Some of the new systems that are described there are really amazing. Some random programmers in basements (or probably in their offices while procrastinating on their real work) are creating some great functionality. The newsletter focuses on the facility of Web 2.0 to handle these systems. But my thoughts are on the insight of these people in guessing what functionality people would like to have. Of course, it is a statistical process. If a thousand people each create some kind of new system, 990 will fail because of poor insight and the other 10 will be famous. Does this mean they are smart or just lucky? But I don't care either way. As long as the functions are there, I will use them.
Monday, August 22, 2005
I saw a great example of identifying user needs and then finding a way to deliver them. There is a company designing tombstones with solar-powered video monitors. You can record a memorial or testimonial video to your loved one and store it in perpetuity on the tombstone.
It is solar powered because, of course, you don't want to have to go out periodically and put in new batteries.
It has an earphone jack, but no external speaker. This way, you can listen if you want to, but there is no way to disturb anyone at (in?) nearby graves. Of course, you have to remember to bring headphones. If you don't know the video is there, you may lose out on the opportunity. But I have a sense that in another few years, we will all have headphones for our cell phone/PDA/music player combos anyway.
From what I could tell, there are either no controls at all (continuous looping) or there is just a start button and nothing else. This is probably better because it would really waste the solar infrastructure to have continuous loop for something that is only viewed a few minutes every long while.
Thursday, July 21, 2005
I read a great paper from Gary Klein in Cognition, Technology and Work on problem detection. The great thing about the paper is that even though I had not thought about much of what he discusses, everything he says can be explained by my cognition model. I love it when that happens.
Here is the essence. Traditional problem detection is modeled as an accumulation of evidence that things are not as they should be. When it reaches a threshold, a problem is detected. Gary says that this is just one case. Instead, problem detection should be modeled as a general sense-making process. We are always trying to maintain situation awareness. Problem detection is basically a shift from one schema to another to explain the current situation. There are many ways that this shift develops.
The Recognition Primed Decision making model contradicts the traditional evaluation-of-alternatives model of decision making because we don't activate multiple schemas and compare them. Instead, the evidence activates cell assemblies until a schema that matches this pattern reaches threshold.
From a problem detection point of view, we start out with this situation schema activated. Contradictory evidence can be experienced and handled in several ways. In the traditional special case, small pieces of contradictory evidence can be modeled as cell assemblies with inhibitory connections with the active schema. If this inhibition accumulates, it can activate the mismatch schema and cause the person to reconsider the situation. The problem with this is that we have a strong tendency to explain away, or even ignore, contradictory evidence. So unless a huge amount of contradictory evidence is experienced, the reconsideration may never happen.
Another way that problems can be identified is the detection of one large contradictory symptom. This activates a strong inhibitory link to the schema and activates the mismatch all at once. This is more likely because it is harder to explain away the strong contradiction.
A third way that problems can be identified is the detection of a small contradictory symptom that the person chooses to believe and investigate. This can happen because of what Gary calls stance. He defines stance as the emotional state that the person starts out with. If someone is generally suspicious or has external incentives to identify problems, the mismatch schema may start out primed. This facilitates the activation of the mismatch schema from less contradictory evidence.
One final point that I want to discuss is his contention that when a person repeatedly explains away anomalies, he/she is less likely to recognize future contradictory evidence. What is happening here is that the mismatch schema is being inhibited itself. This can also occur based on stance (a generally accepting personality, external incentives to maintain the status quo, fear of one's own inexperience, etc.). If the mismatch schema is inhibited, it cannot reach threshold unless something very strongly contradictory is detected.
For any of these cases, even when there is a gradual activation of small inhibitory links, it is not a conscious process. At one point in time, the pattern recognition process has activated one schema. At some point, another schema becomes activated, thus inhibiting the original. It is a binary shift from one to the other.
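Since I am describing the model in terms of activation, inhibition, and thresholds, here is a toy numerical sketch of the cases above. The update rule and every number in it are my own illustration, not anything from Klein's paper.

```python
def detect_problem(evidence, threshold=1.0, stance_prime=0.0,
                   explain_away=0.0):
    """Return the index of the cue at which the mismatch schema wins,
    or None if it never reaches threshold.

    evidence      -- strengths of successive contradictory cues
    stance_prime  -- head start for the mismatch schema (a suspicious stance)
    explain_away  -- per-cue discount from a habit of dismissing anomalies
    """
    mismatch = stance_prime
    for i, cue in enumerate(evidence):
        # each unexplained cue inhibits the active situation schema,
        # i.e., excites the mismatch schema
        mismatch += max(cue - explain_away, 0.0)
        if mismatch >= threshold:
            # the binary shift: the mismatch schema activates
            # and inhibits the original schema all at once
            return i
    return None
```

With these made-up numbers, gradual accumulation of small cues eventually triggers the shift, one large symptom triggers it immediately, a primed (suspicious) stance lets weaker evidence trigger it, and habitual explaining-away can block detection entirely.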
There is a lot more of interest in the paper. I recommend reading it.
Klein G., Pliske R. and Crandall B. (2005). Problem detection. Cognition, Technology, and Work, 7, 14-28.
Wednesday, July 13, 2005
A company that sells outdoor equipment found that when people are shopping for kayaks, they are much more interested in performance data than when they shop for clothing, where they want information on popularity data. So on their web site, they have two different layouts. For equipment, they highlight the performance data. For clothing they highlight the popularity data. Simple human factors design - they draw attention to the information that has stronger connections in the customers' product-need schema. If target customers buy clothing to look cool, then their need is for cool clothes. So the site design needs to quickly steer them towards the relevant coolness factors.
Cookie cutter design does not work.
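As a sketch of that idea, the layout choice can be as simple as a lookup from category to the fields that get top billing. The category names and product fields below are made up for illustration, not taken from the company's site.

```python
# Which product data gets highlighted depends on the category;
# the categories and field names here are hypothetical.
HIGHLIGHT_BY_CATEGORY = {
    "equipment": ["performance"],  # kayak shoppers study the specs
    "clothing": ["popularity"],    # clothing shoppers want social proof
}

def page_sections(category, product):
    """Order the product's data so the category-relevant fields come first."""
    first = HIGHLIGHT_BY_CATEGORY.get(category, [])
    rest = [k for k in product if k not in first]
    return [(k, product[k]) for k in first + rest]

kayak = {"price": 799, "popularity": "4.1 stars",
         "performance": "tracks well, 21-inch beam"}
sections = page_sections("equipment", kayak)
```

Nothing about the data changes; only its ordering does, which is the whole point of matching the layout to the customers' product-need schema.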
Thursday, July 07, 2005
We are all becoming bloggers (heck, if a luddite like me will do it, anyone will). I decided to report on interesting information about human factors when I see it. Other people blog about politics, or just keep daily journals online. The net (pun intended) effect is that anyone can find information about anything. But there are very few indications of the quality of these posts. The major check on blog quality is that anyone can post a response right there on the blog. So if there are lots of posted objections, the information may not be true.
But from a human factors perspective, we can see a potential limitation of this quality check. How many of us read the responses? I read lots of blogs, but I don't even glance at the responses. They could be posting complete garbage and I would never know. I judge based only on the credentials of the author. And in some cases, I judge based on whether I like what I am reading. If I agree with it, it must be right. So we all gravitate towards the blogs we agree with. There are fewer and fewer disputations because no one of a competing opinion is reading the blog.
This is one of the reasons why the intensity of political debate has gone up exponentially. Learning takes place when we read something that challenges our existing schema, intelligently analyze the contents, and then either modify our schema or reject the new information. What we are doing now is simply reading content that supports our existing schema, thereby strengthening them. This is a weak form of learning. And if we don't ever look for competing perspectives, it is very ineffective. Plus, our schema get so strong they become impossible to overcome. We become so set in our opinions that everyone with a competing opinion seems like a complete fool.
Wednesday, July 06, 2005
A post (http://www.goodexperience.com/blog/archives/000223.php) describes some results of an evaluation of some common ecommerce web sites. Here are the general problems they found:
1. Content groupings that reflect the company's view of the business, not the customer's view.
I don't know why this is still a problem. We have known for years it is bad design. My undergrad students figure it out by week 3. For some great proof, check out Julian Sanchez's master's thesis at Florida International University.
2. Navigation that hides important categories
Here is another no-brainer. If there is content that people want to see, why hide it? Make it salient!!
3. Confusing product images
I suspect that this one is as much about poor marketing as poor usability. If companies either rush to get their photos out there or don't invest in good technology, the images won't do what they are supposed to do, which is clearly to support product evaluation. If the customer can't tell what the product looks like (stylistically or functionally), then how do they know if they should buy it? The specs for the image should be pretty easy to figure out and test.
4. Missing product information
This is the same issue as #3. Analyze the customers' decision making processes (there will be many) and make available any information that they need. I have debated with Eric Goldman (a marketing professor at Marquette University) about whether we give them the information they think they want, or just the information that would really help them make a better decision. He is very sure it should be the latter. I agree to some extent, but customer satisfaction increases with the former.
5. Important information not being presented at contextually relevant points in the process
Strangely enough, this one I understand. It is difficult to know the decision making processes of the customer, primarily because they are all different. Some will want to filter on price first, but others will filter on price last. So when do you present it? The trick is to be flexible. Let the customer decide in real time (using dynamic page design and product databases) what data to access.
6. Difficult product-comparison functions
This one is partly a lack of human factors expertise, and partly a love of complex technology that looks cool. Many companies don't realize how important human factors is in product comparison. They either present huge tables of data that are hard to parse, or they show each product separately and require customers to pogo stick (see Jared Spool's work with UIE) up and down the hierarchy to read it all.
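The "let the customer decide" idea from point 5 can be sketched as an ordered filter pipeline where the customer, not the site, picks the sequence. The product fields below are hypothetical.

```python
def apply_filters(products, steps):
    """Apply (field, predicate) filters in whatever order the customer chose."""
    for field_name, keep in steps:
        products = [p for p in products if keep(p[field_name])]
    return products

catalog = [
    {"name": "Kayak A", "price": 799, "rating": 4.5},
    {"name": "Kayak B", "price": 499, "rating": 3.9},
    {"name": "Kayak C", "price": 549, "rating": 4.4},
]

# One customer filters on price first; another starts from rating.
# The same mechanism serves both orders in real time.
budget_first = apply_filters(catalog, [("price", lambda v: v < 600),
                                       ("rating", lambda v: v >= 4.0)])
```

The filtering logic is trivial; the design win is that the page doesn't hard-code one decision-making sequence for every customer.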
So in the end, it just comes down to using a good design process, understanding your customers' cognitive processes, and testing, testing, testing.