Wednesday, January 21, 2009
Confirmation bias happens whenever we have an opinion about something, whether as big as whether the death penalty works or as small as which team to pick in tonight's basketball game. When the evidence is mixed, a slew of research shows that the evidence supporting our first impression has a much bigger impact on our final decision than the evidence that contradicts it.
But what I learned from these studies is how this happens. I had always thought it was because we ignore the contradictory evidence and focus on the confirming evidence. It turns out to be just the opposite.
In a study of football betting, researchers found that when people win a bet they don't think about it much at all. They just chalk it up to being smart about football. The subjects think things like "Of course they won. I knew the quarterback would pull them through." And then they move on.
But when they lose, they think about it in much more detail to explain the error. They think things like "Well, they only lost because of that bad call in the second quarter. And because the receiver dropped that easy pass. Otherwise they would have won and I would have been right." So instead of counting it as a bad bet, they count it as a bet they "should have won." There are two possible outcomes of a bet. Either you are right, or you were right but had bad luck. Either way, the outcome supports whatever process you used to pick the winner.
The same thing happened in a study of people's views on the death penalty. Subjects read two articles about the death penalty, one with evidence supporting it and one with evidence against. Both articles had flaws, and an equal number in each. When people read the article they agreed with, they just skimmed it, added a mental check mark that they were right, and never noticed the flaws. But when they read the opposing article, they scrutinized it very carefully for flaws, and of course found them. So they discounted that article. In the end, both supporters and opponents of the death penalty came away with stronger versions of their prior opinions after reading the same two articles.
So the basis of confirmation bias is not necessarily that we ignore contradictory evidence. Instead, we work very hard to prove it wrong. If you want to be a convincing person, the best thing to do is keep details to yourself and avoid revealing any ammunition that could be used to discredit you. The adage that "It's better to be quiet and be thought a fool than to open your mouth and remove all doubt" seems to be supported by the evidence.
Monday, January 19, 2009
This is a great example of good human factors. We know that there are many decision making biases that are caused by our memories "using us." Two of these are salience bias and recency bias. When something is easily called to mind we greatly overestimate its prevalence. Things are easily called to mind when they are sensorily salient (strong sensory experience), semantically salient (had a large impact on us) or when they are recent. But our brains incorrectly assume that if it is easily brought to mind, it must be a frequent occurrence. There are many examples of these effects steering us wrong.
Another memory-related bias is the representativeness bias. When something looks like a good example of a category, we assume it must be a likely case. I am reading the book "How We Know What Isn't So," which argues that hot and cold shooting streaks in basketball are really just a figment of our imagination, and the author cites a significant body of research to back this up. But when we see a player hit 3 or 4 in a row, we just "know" he is hot because this "looks" like a streak.
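Out of curiosity, a quick simulation makes the point (this is just an illustrative sketch, assuming a hypothetical player who hits a flat 50% of shots and takes 20 shots a game, numbers I made up): even with no "hotness" at all, runs of four straight makes show up in nearly half of all games.

```python
import random

random.seed(42)

def has_streak(shots, length=4):
    """Return True if the sequence contains a run of `length` consecutive makes."""
    run = 0
    for made in shots:
        run = run + 1 if made else 0
        if run >= length:
            return True
    return False

# Simulate many 20-shot games for a purely random 50% shooter.
trials = 100_000
games_with_streak = sum(
    has_streak([random.random() < 0.5 for _ in range(20)])
    for _ in range(trials)
)
print(f"Games with a 4-make streak: {games_with_streak / trials:.0%}")
```

Seeing a streak "looks" like evidence of a hot hand, but a coin-flip shooter produces them all the time.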
Deepak Chopra's solution is pretty good too. It's kind of long, but in essence he says: "Be a witness to your thoughts, your moods, your reactions, your behaviors. They represent your memories of the past, and by witnessing them in the present, you liberate yourself of the past. By observing your addictive behaviors, you observe your conditioning. And when you observe your conditioning, you are free of it, because you are not your conditioning; you are the observer of your conditioning."
From a human factors perspective, the idea is to examine the memories that are telling you something is true or false, and evaluate whether they reflect genuinely frequent occurrences or are just salient and easily recalled. If we do this consciously, we can avoid many common errors.
But I am still disappointed that basketball streaks aren't real. Maybe I will choose to keep believing that one anyway. Who does it hurt?
We have learned a lot about human cognition since the 1960s that would have served MLK better in his approach. When people make decisions, big or small, their first impression becomes anchored and is tough to overturn even in the face of strong evidence. And if that impression is stated publicly the effect is even stronger. If the decision maker acts on the decision, it is stronger still.
So MLK should have thought of a way to get the incoming Birmingham administration to do something publicly, no matter how small, in support of his movement. It didn't even have to be directly relevant to equal rights. That could have come later. Instead, he forced their first act to be directly opposed and guaranteed that they would continue to oppose him.
Perhaps he was more interested in gaining national attention and preferred a public conflict; that is often what civil disobedience is designed for. But for influencing the local pols, he did the exact opposite of what might have worked. Of course, MLK did not have the benefit of the past 40 years of cognition research.
Sunday, January 04, 2009
Researchers and designers often think in terms of 95% (or 99%) confidence intervals. That is what we use as a design criterion or p-value for accepting a hypothesis. Nocera talks about it in terms of the measure financial firms used to evaluate their value at risk (VaR). If an investment has a 95% confidence interval of going up $25 million or going down $25 million, they worked under the assumption that these were the boundaries. But what the interval actually means is that 2.5% of the time, $25 million is the LEAST you can lose. No one seems to have thought about that.
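To make that concrete, here is a small simulation (a sketch with made-up numbers: daily P&L drawn from a normal distribution scaled so its 95% interval is roughly plus or minus $25 million). On the bad 2.5% of days, the average loss is well past $25 million:

```python
import random
import statistics

random.seed(1)

# Hypothetical daily P&L in $ millions. For a normal distribution, the
# 95% interval is +/- 1.96 standard deviations, so scale sigma to put
# the interval at roughly [-25, +25].
sigma = 25 / 1.96
pnl = [random.gauss(0, sigma) for _ in range(1_000_000)]

# The days that fall below the lower bound of the interval.
bad_days = [x for x in pnl if x < -25]
print(f"Fraction of days below -$25M: {len(bad_days) / len(pnl):.1%}")
print(f"Average loss on those days:  ${-statistics.mean(bad_days):.1f}M")
```

That tail average (sometimes called expected shortfall) is exactly what treating the interval as "the boundaries" quietly hides, and that is with a well-behaved normal distribution; real markets have fatter tails.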
Also, this creates perverse incentives, which is not only an area where I do research but also, I suspect, the main reason the financial crisis arose in the first place. Basically, the financial innovators were trying to maximize the 99% VaR of the securities they were creating. So if a security had a 0.5% chance of losing a trillion dollars, that risk wasn't included in the analysis (or in the calculation of their bonuses). It is easy for an employee to rationalize that this could never really happen (0.5% is soooo small). And if the company was using the 99% VaR cutoff, it didn't seem to care either.
For the financial industry, the solution is not regulation that prevents financial innovation. It's making sure that the incentives are aligned for the company and for the employees.