Yesterday’s Washington Post had a great article on the DC Metro accident. It highlights a very important limitation of automation that we have studied in the Human Factors field for decades, but that few people outside the field recognize. And they quote three human factors specialists too, so it’s a great plug for us :-D.
The point is that people have a certain unconscious cost/benefit analysis that underlies what they pay attention to. So when you make automation better, people can rely on it more and pay attention to it less. But then when the automation finally does fail, it fails BIG because no one was watching. It’s actually the same thing that happened with the financial industry. We relied on the banks’ ability to judge risk, so we deregulated them at the government level and trusted them more at the consumer level. No one was watching, so when they finally did fail, they failed BIG.
So what changes is not the cost/benefit ratio, but rather the likelihoods. Instead of small costs at a high frequency, we have large costs at a low frequency. Is this better? I don’t know, but it is more palatable to human nature. Just read Nassim Taleb’s great book on rare, high-impact events to see why.
The conclusion everyone should draw is that this is not something we can fix with short-term design changes (DC Metro) or regulations (banking industry). If we make the systems more reliable, we will increase the time between failures, but we will also increase the size of the failures when they do happen. I’m not sure this is better. A more effective solution is a fundamental change in the system itself, but of course this is much harder.
And the other thing we know about human nature is that we rarely take the harder path.