I always wonder how much of an effect incentives have on these models in the real world. I recognize the theory is legit: if my marginal income tax rate is at 90%, I am not going to do a lot to attract more business. But how much elasticity is there throughout the range? If my income tax rate goes from 20% to 30%, is it going to affect me at all? Especially since withholding is such an amorphous process. I am not sure most people really know what their marginal rate is at the point when they are making work vs. play decisions.
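The gap between the marginal rate and the rate people actually feel is easy to see with a toy calculation. This is just a sketch with made-up brackets, not any real tax schedule:

```python
# A minimal sketch of marginal vs. effective tax rates. The brackets here are
# invented for illustration; they are not any real tax schedule.
BRACKETS = [  # (upper bound of bracket, rate)
    (10_000, 0.10),
    (50_000, 0.20),
    (100_000, 0.30),
    (float("inf"), 0.40),
]

def tax_owed(income):
    """Total tax under the hypothetical brackets above."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

def marginal_rate(income):
    """Rate applied to the next dollar earned."""
    for upper, rate in BRACKETS:
        if income < upper:
            return rate
    return BRACKETS[-1][1]

income = 60_000
print(f"marginal rate:  {marginal_rate(income):.0%}")   # 30%
print(f"effective rate: {tax_owed(income) / income:.0%}")  # 20%
```

Someone earning $60,000 under those invented brackets faces a 30% marginal rate but an effective rate of only 20%, and withholding hides even that distinction from most of us.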
The incentive-to-save effect (the paradox of thrift) is, I think, more about salience. Right now, everyone is in panic mode. But give us some time to chill out, and I bet most people go right back to overconsumption and running up the Visa bill. I just don't see our culture learning its lesson. Our nucleus accumbens always seems to win out. The chocolate cookies just smell too good to pass up.
Tuesday, December 15, 2009
Some great examples here of decision making biases in choosing health care options. It happens to patients, doctors, health care providers, and more. Setting up the right incentives could solve a lot of our cost containment problems (a la Nudge or Freakonomics). But it also suggests that education alone will not solve the problem.
It is possible that health care panels could evaluate the empirical data and choose the coverage that makes the most sense for the most people, but this would require two tough calls:
1. How to keep politics out. This means no Congressional wrangling to compel choices for their special interests (e.g. Big Pharma), moral preferences (e.g. abortion), or constituents (e.g. skin cancer in Miami). But it also means no junk science either, which is hard because we are always working within confidence intervals.
2. How to allow people to pay for uncovered procedures as much or as little as they want without making it an administrative nightmare. You can't do what has been proposed in the abortion debate (insurance riders), because no one would pay extra for such specific coverage until they know they need the procedure, and by then it's too late. I kind of like the tiered solution that is being used now for a lot of prescription drug coverage: a $10 copay for generics, a $20 copay for branded drugs where there is no generic option, and full price for branded drugs when there is a generic option. Or something to that effect. A rough sketch of that tiering is below.
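To make the rule concrete, here is a minimal sketch. The dollar amounts and the generic-availability logic come from the description above; the Drug class and its field names are invented for illustration.

```python
# A minimal sketch of the tiered copay rule described above. The copay amounts
# and the generic-availability tiers come from the post; the Drug class and
# field names are made up for illustration.
from dataclasses import dataclass

@dataclass
class Drug:
    name: str
    is_generic: bool
    generic_available: bool  # does a generic equivalent exist?
    retail_price: float

def patient_cost(drug: Drug) -> float:
    if drug.is_generic:
        return 10.00                  # generic: $10 copay
    if not drug.generic_available:
        return 20.00                  # branded, no generic option: $20 copay
    return drug.retail_price          # branded with a generic option: full price

print(patient_cost(Drug("genericol", True, True, 45.00)))     # 10.0
print(patient_cost(Drug("brandex", False, False, 250.00)))    # 20.0
print(patient_cost(Drug("brandex-xr", False, True, 250.00)))  # 250.0
```

The nice thing about a rule this simple is that it is easy to administer and easy for patients to predict, which is most of the battle.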
Tuesday, December 08, 2009
Human Reliability versus Ease of Use
I just read the description of a panel from this year’s Human Factors and Ergonomics Society Conference on the relationship between human reliability and usability. There were some interesting perspectives discussed. The panelists all see a mistaken perception among many practitioners that the two disciplines are at odds.
Both groups would prefer a design that maximizes both safety and ease of use, but we all know that we often encounter tradeoffs between them. Usability practitioners treat ease of use as the primary goal. This means making things so simple that they become automatic and require little or no conscious attention.
Reliability practitioners want to interject more conscious attention to prevent, or at least reduce, skill-based errors. This matters because skill-based errors are often the most pervasive among experienced workers and domain experts: we get so used to putting our jobs on cruise control that we become susceptible to errors caused by a lapse of attention rather than any error in judgment.
This leads to a layered design model that makes sure both objectives are considered. You start out considering them separately but with good communication so that you don’t develop totally different design approaches. And as the design gets closer to completion (i.e. higher fidelity), the two objectives get more fully integrated.
This is very important in domains like nuclear power plant design, air traffic control, and the military. But it also matters in situations like the one in my previous post (e-commerce web sites).
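As one small illustration of what "interjecting conscious attention" can look like in practice (my own sketch, not anything the panel proposed): before a high-consequence action, force a deliberate step that cannot be completed on cruise control, such as retyping the name of the thing you are about to affect.

```python
# A toy illustration of interjecting a conscious step before a high-consequence
# action. My own sketch, not from the panel: retyping the target name is a
# deliberate act that breaks the automatic, skill-based flow.
def confirm_deliberately(action: str, target: str) -> bool:
    print(f"You are about to {action}: {target}")
    typed = input(f"Type the name '{target}' to confirm: ")
    return typed.strip() == target

if __name__ == "__main__":
    if confirm_deliberately("permanently delete the order history for", "ACME-2009-1148"):
        print("Confirmed. Proceeding.")
    else:
        print("Names did not match. Nothing was changed.")
```

The layered design question is then where to place steps like this so they protect the rare, high-consequence actions without slowing down the routine ones.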
Ease of Use versus Security
I read an article today ($ to read in full) that highlights the importance of context in design. The researchers looked at users of an online auction system and the tough issue of security: security is enhanced by a rigorous registration process, but that time and effort often makes the system less attractive. Two key findings of the study are:
- Users who expected to use the system for a long time preferred the longer, more time-consuming registration process that added security.
- Users who perceived a higher security risk likewise preferred the longer, more time-consuming process.
Of course, you are probably thinking to yourself, “duh.” But there are implications that seem to be missing from the design approach of many sites that I use.
- Sites that expect users to be long-term focused, or that can instill that focus in their marketing pitch, should be willing to create a more security-focused registration process even if it makes registration longer.
- If a site recognizes that it needs a more security-intensive, and therefore longer, registration process, it should start by briefly explaining to the user why this is important. We take for granted that people know about the risks of identity theft, zombies and worms, phishing, and so on, but the number of people who fall victim to these scams suggests that the general public may not have a clue. A rough sketch of what an adaptive registration flow along these lines might look like follows.
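Here is one hypothetical way a site could act on both findings: decide up front how much security friction a given user warrants, and lead the longer path with a one-line explanation of why it is worth the time. The thresholds, step names, and messaging are invented for illustration; the study did not prescribe a specific design.

```python
# A hypothetical sketch of an adaptive registration flow based on the two
# findings above. Thresholds, step names, and messaging are invented for
# illustration only.
from dataclasses import dataclass

@dataclass
class UserContext:
    expects_long_term_use: bool      # e.g. inferred from the marketing funnel
    perceived_security_risk: str     # "low", "medium", or "high" (self-reported)

BASIC_STEPS = ["email", "password"]
SECURE_STEPS = ["email", "password", "phone_verification", "security_questions"]

def registration_flow(user: UserContext) -> list[str]:
    if user.expects_long_term_use or user.perceived_security_risk == "high":
        # Lead with a brief explanation of why the extra steps are worth it.
        print("This takes a couple of extra minutes, but it protects your "
              "account against identity theft and phishing.")
        return SECURE_STEPS
    return BASIC_STEPS

print(registration_flow(UserContext(True, "low")))    # secure path
print(registration_flow(UserContext(False, "low")))   # basic path
```

The point is less the specific steps than the framing: the same registration length feels very different depending on whether the user understands what it is buying them.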