Joe Nocera has a great article in the NYTimes about risk and investing. It's really long, but there is one point I want to talk about because it is relevant to research and many other things as well. As a human factors professional, I find it an important thought.
Researchers and designers often think in terms of 95% (or 99%) confidence intervals. That is what we use as a design criterion or as the p-value cutoff for accepting a hypothesis. Nocera discusses the same idea in terms of value at risk (VaR), which financial firms used to evaluate their investments. If an investment has a 95% confidence interval of going up $25 million or going down $25 million, firms worked under the assumption that these were the boundaries. But what this interval really means is that 2.5% of the time, $25 million is the LEAST you can lose. No one seems to have thought of that.
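A quick simulation makes the point concrete. This is a sketch under an assumption not in the article: that outcomes are normally distributed, so a 95% interval of ±$25 million implies a standard deviation of 25/1.96 million. The specific numbers are illustrative.

```python
import random
import statistics

random.seed(42)

# Assumed setup: outcomes are normally distributed, and the 95%
# interval is +/- $25 million, so sigma = 25 / 1.96 (in $M).
SIGMA = 25 / 1.96
N = 1_000_000

outcomes = [random.gauss(0, SIGMA) for _ in range(N)]

# The left tail: the cases where you lose MORE than $25 million.
tail = [x for x in outcomes if x < -25]
tail_fraction = len(tail) / N          # about 2.5% of outcomes
avg_tail_loss = -statistics.fmean(tail)  # well above $25M

print(f"fraction losing more than $25M: {tail_fraction:.4f}")
print(f"average loss when it happens:   ${avg_tail_loss:.1f}M")
```

Roughly 2.5% of simulated outcomes lose more than $25 million, and the average loss in those cases is noticeably larger than $25 million. The interval tells you how often the boundary is crossed, not how bad things get when it is.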
Also, this creates perverse incentives, which is not only an area where I do research but also, I suspect, the main reason the financial crisis arose in the first place. Basically, the financial innovators were trying to maximize the 99% confidence interval of the VaR of the securities they were creating. So even if a security had a 0.5% chance of losing a trillion dollars, that risk wasn't included in the analysis (or in the calculation of their bonuses). It is easy for the employee to rationalize that this could never really occur (0.5% is soooo small). And if the company was using the 99% VaR CI, then they didn't seem to care either.
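The arithmetic behind that incentive is worth spelling out. Here is a sketch with made-up numbers (the $10 million gain is my assumption, not from the article): a security that usually pays off a little but has a 0.5% chance of a trillion-dollar loss has a massively negative expected value, yet a 99% VaR shows no risk at all, because the catastrophe sits outside the 99% window.

```python
# Hypothetical security -- all figures in $M, chosen for illustration.
p_blowup = 0.005          # 0.5% chance of catastrophe
gain = 10                 # assumed payoff in the normal case
blowup_loss = 1_000_000   # $1 trillion loss in the tail

# Expected value of holding the security: deeply negative.
ev = (1 - p_blowup) * gain - p_blowup * blowup_loss

# But a 99% VaR only looks inside the best 99% of outcomes, and the
# 0.5% catastrophe falls outside it, so measured risk is zero.
var_99_loss = 0 if p_blowup < 0.01 else blowup_loss

print(f"expected value: ${ev:,.0f}M")
print(f"99% VaR loss:   ${var_99_loss}M")
```

An employee paid on the VaR-measured performance sees only the upside; the expected loss never enters the bonus calculation.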
For the financial industry, the solution is not regulations that prevent financial innovation. It's to make sure that the incentives are aligned for both the company and its employees.