I heard an interesting hypothesis yesterday. If you had access to precise data on everything that has ever happened before, you would be able to predict the future. The short-term future could be predicted almost perfectly. Because of stochastic uncertainty, the distant future would be harder, but predictions would still be far better than they are now. So who needs omniscience if we have good big data and good analytics behind it?
This has a lot of truth behind it, but I would like to restate it by switching two critical words. I think you could "forecast" the future if you had access to precise "information" on everything that has ever happened.
The difference is subtle but important.
Data: the high temperature in Boston on June 3, 1926 was 72 degrees Fahrenheit.
Info: the high temperature in Boston on June 3, 1926 ranged from 69 in the north of the city to 73 in the west of the city. This was 3 degrees above the average for the month and 4 degrees above the average for the other Junes in the 1920s.
Prediction: based on all of the past temperature data and accurate weather models, it will be 75 degrees on June 3, 2012.
Forecast: based on all of the past temperature data and accurate weather models, there is a 92% chance that it will be 75 degrees on June 3, 2012, with a 99% confidence interval from 72 to 79 degrees.
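The prediction/forecast distinction can be sketched in a few lines of Python. The temperatures below are made up for illustration (not real Boston data), and the 95% interval assumes the highs are roughly normally distributed, which is a simplifying assumption:

```python
import statistics
from math import sqrt

# Hypothetical June 3 high temperatures (deg F) from past years -- illustrative only.
highs = [72, 74, 71, 75, 77, 73, 76, 74, 72, 75]

n = len(highs)
mean = statistics.mean(highs)
sd = statistics.stdev(highs)

# A "prediction" is a single point estimate.
prediction = round(mean)

# A "forecast" attaches uncertainty: here a rough 95% interval for the mean,
# assuming approximate normality (z ~ 1.96).
margin = 1.96 * sd / sqrt(n)
low, high = mean - margin, mean + margin

print(f"Prediction: {prediction} degrees")
print(f"Forecast: {mean:.1f} degrees, 95% interval ({low:.1f}, {high:.1f})")
```

The point is simply that the second print statement carries strictly more information than the first: the same best guess, plus an honest statement of how wrong it might be.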
Why does this matter? Well, I see pundits in all walks of life (particularly in politics, but also in economics, weather, space science, and other more quantitative domains) who state their claims as precise predictions when they could not possibly, even with perfect data, be so confident. The public would be much better served if we acknowledged our uncertainty.
The problem is that the general public does not understand concepts like confidence intervals. Not their fault - public schools don't teach them. So we need either to change public school curricula (which I doubt will happen - just look at the debates over intelligent design) or to develop a user-friendly language for communicating uncertainty.
The greater challenge is that any such language would have to be short enough to fit in a tweet, because that is a common path information takes when people share it with each other.
Saturday, April 07, 2012
I am not sure how thrilled I am to be basing this post on an insight I got from Sam Harris, but it is one of the best analogies for free will I have seen. This is based on a recent post on his blog, although taken in a different direction (as I am sure you knew I would). When we watch movies, we have the illusion that the image on the screen is continuous. Deep down, most of us know that it is really 24 static frames per second. But concentrating on that would ruin the movie. Putting the knowledge aside to improve the experience is OK.
We can think of free will the same way. Perhaps neuroscience can demonstrate that every decision we make is a combination of past experience hard-coded into our neurons, culture that is hard-coded the same way, and genetics that started out hard-coded - with just a bit of stochastic randomness thrown in. But behavioral science demonstrates that concentrating on this leads us to less ethical and less dedicated behavior. So human experience is improved by putting the knowledge about free will aside and acting as if we had the full control over our will that our perceptual experience tells us we have.
And that’s OK.