Monday, May 28, 2007

moral decisions

I have blogged on this before, but I just read an article in the Washington Post that made me want to revisit the issue. As with emotional decisions, moral decisions seem to draw on different parts of the brain than intellectual decisions do. So rather than integrating moral and intellectual attributes into one overall cost-benefit decision, two different responses get activated (when the moral and intellectual answers diverge), and we are forced to decide which one to accept.

This is one of the consequences of the way our brains evolved. Since only one response is possible, it would be nice if we could integrate the two sets of criteria and come up with a single best response. We do try to do this in practice (I have a whole course dedicated to doing it for public policy analysis), but this brain structure makes it hard.
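
To show what I mean by integrating the two sets of criteria, here is a toy multi-criteria sketch of the kind we use in policy analysis: score each option on an "intellectual" (cost-benefit) attribute and a "moral" attribute, weight them, and pick the option with the best combined score. The options, attributes, and weights are invented for illustration; this is a sketch of the decision technique, not a claim about how the brain does it.

```python
# Toy multi-criteria decision sketch: combine cost-benefit and moral scores
# into one overall value per option. All numbers are made up for illustration.

options = {
    "option_a": {"cost_benefit": 0.9, "moral_acceptability": 0.2},
    "option_b": {"cost_benefit": 0.6, "moral_acceptability": 0.8},
}

# How much each criterion counts toward the integrated decision.
weights = {"cost_benefit": 0.5, "moral_acceptability": 0.5}

def overall_score(attributes):
    """Weighted sum of the criteria -- the single 'integrated' response."""
    return sum(weights[name] * value for name, value in attributes.items())

for name, attributes in options.items():
    print(name, round(overall_score(attributes), 2))

best = max(options, key=lambda name: overall_score(options[name]))
print("integrated choice:", best)
```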

System designers have to be careful when their designs have a moral or emotional component. If users have moral or emotional responses, the decisions they make when using our systems may not be the ones we expect. It is important for more of us to learn what we can about emotion and moral cognition.

Friday, May 18, 2007

innovation and usability

I met with two budding entrepreneurs this morning. I won't steal the thunder of their launch by telling you what their business is about, but it is a good niche application of social networking. As any regular reader of this blog knows, social networking is a passion of mine. I honestly believe that social networks and professional communities of practice will be the true future of the web. My paper at last year's Human Factors Conference presents my views in more detail (email me for a copy if interested).

This morning, we focused on the two main things that a web-based business (and really any business) needs to do to be successful. It has to:

1) provide some useful function that users are willing to come to the site for, preferably over and over again.

2) do this in a way that is easy to figure out and simple to use. Otherwise, users will give up pretty quickly and go somewhere else to satisfy that need.

For social networks, this means providing information, links, communication, and other services in a way that protects users' privacy, is easy to filter, bubbles the good stuff quickly and reliably to the top, and fosters a strong sense of community. This can be accomplished in many ways, for example through reputation management, meta-moderation, social and semantic filtering, and other algorithmic techniques. Good information architecture also supports discovery. Of course, this is easier said than done, but Peter Morville's books (Information Architecture, Ambient Findability) are both good places to start (or you can take my course at Florida International University).
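
To make the "bubble the good stuff to the top" part concrete, here is one minimal sketch of a common approach (not the only one, and not tied to any particular site): rank items by the Wilson lower bound on their up/down votes, so an item with a single lucky upvote does not outrank well-established content. The items and vote counts below are made up for illustration.

```python
import math

def wilson_lower_bound(upvotes, downvotes, z=1.96):
    """Lower bound of the ~95% confidence interval on the 'good' proportion.
    Items with few votes rank low until they prove themselves."""
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    phat = upvotes / n
    return (phat + z * z / (2 * n)
            - z * math.sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)) / (1 + z * z / n)

# Hypothetical items: (title, upvotes, downvotes)
items = [
    ("one rave review", 1, 0),
    ("well-liked discussion", 80, 20),
    ("controversial post", 50, 50),
]

ranked = sorted(items, key=lambda it: wilson_lower_bound(it[1], it[2]), reverse=True)
for title, up, down in ranked:
    print(round(wilson_lower_bound(up, down), 3), title)
```

Combine a score like this with social filtering (for example, weighting votes from people in your own network more heavily) and you start to get the kind of reliable surfacing of good content I mean.
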
A recurring challenge in human decision making is that when an outcome is inconvenient or undesirable, we are very good at convincing ourselves that it won't happen. For example, when the dog is barking late at night, we could worry that there is a burglar, but instead we assume it is a bird or a raccoon and yell at the dog to shut up. Hopefully, it really is a bird, or at least the burglar won't steal much.

Unfortunately, this challenge grows in proportion to the importance of the decision. For really, really important situations, the inconvenience is likely to be huge (war, for example). One that we are seeing more of now is global warming. This recent article in Fortune explains it well. The conclusion: "To the extent that dealing with global warming is a) expensive and b) inconvenient, it isn't going to happen any time soon."

The reality is that we have to make small changes now or big changes later. The big changes later will be much more inconvenient. But because we are avoiding the small inconvenience now, we are forcing our kids to deal with the big ones later.