The Signal and the Noise

I recently finished Nate Silver’s famous book. Some parts were more fun to read than others, but overall, the book was worth it. I was impressed by the breadth of Nate’s perspective, and he did a good job of pointing out how bad we are at predicting certain things (and explaining some of the bottlenecks).

Based on the reading, here’s a brief list of stars that need to align in order to succeed at prediction:

Seek rich data sets. This makes intuitive sense: The more data you have, the more likely you are to discover the truth. If I want to estimate how a coin is weighted, I want to see as many coin tosses as possible. Lack of data is the main difficulty with problems like earthquake prediction. If the underlying truth is sufficiently complex, there will be too many parameters to fit to a small data set, and the inevitable overfitting will produce bogus predictions. On the other hand, lots of data allows for an extraordinary number of hypotheses, thereby unleashing a slew of false positives on the novice statistician. (With big data comes big responsibility?) It’s also healthy to be aware of the limitations of data. For example, weather is inherently chaotic, meaning slight errors in initial conditions are propagated and magnified to the point where weather predictions are almost worthless after a week or so. Weather data would need to be denoised substantially to allow for a two-week forecast, and longer forecasts are likely impossible.
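
To make the coin example concrete, here’s a minimal sketch (my own, not from the book) of why more tosses pin down the weighting: repeating the estimate at different sample sizes shows how wildly it can swing when data is scarce.

```python
import random

random.seed(0)
TRUE_WEIGHT = 0.6  # the coin's actual bias toward heads (unknown in practice)

def estimate_weight(n_tosses):
    """Estimate the coin's weight from n_tosses simulated flips."""
    heads = sum(random.random() < TRUE_WEIGHT for _ in range(n_tosses))
    return heads / n_tosses

# Repeat the experiment many times at each sample size to see how much the
# estimate can wander when data is scarce.
for n in (10, 100, 10_000):
    estimates = [estimate_weight(n) for _ in range(1000)]
    print(f"n={n:>6}: estimates ranged from {min(estimates):.2f} to {max(estimates):.2f}")
```

The spread of estimates shrinks roughly with the square root of the sample size, which is why scarce data leaves so much room for bogus conclusions.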

Is your competition a bunch of suckers? This seems to be a theme among Nate’s personal data projects. I first heard of Nate when he predicted Obama’s reelection with over 90 percent confidence on the morning of election day. In his book, he observes that his competition was composed of pundits, who apparently have a terrible track record for prediction. He also describes his fascination with online poker during the Texas Hold’em bubble. He raked in lots of money during this time thanks to a large influx of “fish” (poker suckers) who had money to lose to the better players. One prediction arena that is full of non-suckers is the stock market, and the fact that this makes prediction difficult is codified in the efficient market hypothesis.

Think probabilistically. Probability is a natural way to express your understanding of truth, and Bayes’ theorem provides a formula for updating your understanding after observing phenomena which are representative of the truth. For example, you might initially guess that a coin is weighted anywhere between 0 and 1 before observing successive tosses of the coin, and your observations will impact your impression of the coin’s weight. Over time, the range of likely possibilities for the coin’s weight will shrink considerably, and this range will be consistent with your observations. Nate describes Bayes’ theorem as a model for scientific progress — that successive observation allows us to reproduce and refine scientific theory. Interestingly, Bayes’ theorem can only guarantee that our beliefs will converge to truth if the truth has some nonzero probability in our initial prior, thereby suggesting that scientific progress is hindered by a not-so-open mind (such an open mind was perhaps necessary to even conceive of the current formulation of quantum mechanics). The Bayesian perspective also provides some insight into hypothesis testing. In particular, a hypothesis test should be considered the first of many observations necessary to converge on truth. Without this perspective, one might be too easily swayed by a single sensational news report.
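
As a hedged illustration (my own, not code from the book), here’s a small Beta-Binomial sketch of that updating process: a uniform Beta(1, 1) prior over the coin’s weight is sharpened by each observed toss, and the interval of plausible weights shrinks accordingly.

```python
# A minimal sketch (my own, not from the book) of Bayesian updating on a coin's
# weight: start with a uniform Beta(1, 1) prior and update it after each toss.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)
true_weight = 0.7        # hidden weight we are trying to learn
a, b = 1.0, 1.0          # Beta(1, 1) is the uniform prior over [0, 1]

for i, heads in enumerate(rng.random(100) < true_weight, start=1):
    if heads:
        a += 1           # one more observed head
    else:
        b += 1           # one more observed tail
    if i in (1, 10, 100):
        lo, hi = beta.ppf([0.025, 0.975], a, b)
        print(f"after {i:>3} tosses, 95% of my belief lies in [{lo:.2f}, {hi:.2f}]")
```

With each toss the interval tightens around the true weight, which is exactly the convergence Nate attributes to Bayes’ theorem, provided the prior didn’t rule the truth out to begin with.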

Identify, quantify, and communicate your uncertainty. The underlying uncertainty in a prediction can have disastrous consequences if it goes unidentified. Before the housing bubble burst, one of the key assumptions underlying mortgage-backed securities was that people default on their mortgages independently, and unfortunately, this independence assumption made the security’s predicted failure rate exponentially smaller than the true failure rate. Many bad predictors suffer from overconfidence. Pundits will predict some political outcome with absolute certainty, and they might be rewarded when they guess right, but their failures are too often forgotten. These failures illustrate a lack of calibration in uncertainty. Apparently, economists also suffer from overconfidence, but to a lesser extent. When it comes to uncertainty calibration, one should attempt to emulate the National Weather Service — when they say there’s a 30-percent chance of rain, it rains about 30 percent of the time. However, it’s also important to communicate your uncertainty. In 1997, the National Weather Service predicted 49 feet of flooding in Grand Forks, where levees were built to handle 51 feet of flood water. What they didn’t communicate was the 9-foot margin of error they were embarrassed by, and the 54 feet of water that arrived was catastrophic.
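
To see how much the independence assumption matters, here’s a back-of-the-envelope sketch with made-up numbers (the book’s actual figures differ): a pooled security that fails only if every mortgage in a small pool defaults looks astronomically safe under independence, but not if defaults tend to happen together.

```python
# Illustrative numbers only (not the book's exact figures): how the independence
# assumption shrinks the predicted failure rate of a pooled security.
p_default = 0.05   # assumed chance that any single mortgage defaults
n_mortgages = 5    # a small pool whose security fails only if all five default

independent = p_default ** n_mortgages   # defaults assumed independent
perfectly_correlated = p_default         # defaults all happen together

print(f"independent:          1 in {1 / independent:,.0f}")
print(f"perfectly correlated: 1 in {1 / perfectly_correlated:,.0f}")
print(f"underestimate factor: {perfectly_correlated / independent:,.0f}x")
```

Under these made-up numbers, assuming independence understates the failure rate by a factor of 160,000; the real securities were more complicated, but the flavor of the error is the same.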

Practice prediction. Given the importance of uncertainty quantification and calibration, it’s obviously good to develop a track record of performance to draw feedback from. In order to gain credibility, it helps to make your predictions public. I liked reading Nate’s evaluation of different global warming predictions from over 20 years ago. Unfortunately, the feedback loop for these predictions is rather long, and so it’ll take time to converge on truth in this case.
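
One way to practice is to keep a written track record and check your calibration against it. Here’s a minimal sketch (my own construction, with a hypothetical track record) that buckets past predictions by the probability you stated and compares each bucket to what actually happened.

```python
from collections import defaultdict

# A simple calibration check: group past predictions by stated probability,
# then compare each group against how often the event actually happened.
def calibration_table(predictions):
    """predictions: list of (stated_probability, outcome) pairs, outcome in {0, 1}."""
    buckets = defaultdict(list)
    for prob, outcome in predictions:
        buckets[round(prob, 1)].append(outcome)  # bucket to the nearest 10%
    for prob in sorted(buckets):
        outcomes = buckets[prob]
        print(f"said {prob:.0%}: happened {sum(outcomes) / len(outcomes):.0%} "
              f"of the time ({len(outcomes)} predictions)")

# hypothetical track record: (stated probability of rain, did it rain?)
calibration_table([(0.3, 0), (0.3, 1), (0.3, 0), (0.7, 1), (0.7, 1), (0.7, 0)])
```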

Look for consensus. It’s often the case that a group of predictors performs better than a lone predictor. In a qualitative sense, this explains the success of AdaBoost. It makes good psychological sense: Before making an important life decision, you seek out advice from family and friends. It also makes good sense from a Bayesian perspective: If someone is confident in the truth, then he should be able to convince you with his observations, or else you can place a bet to the apparent advantage of both parties. Aggregating predictions can have some drawbacks, however. For example, if the predictions aren’t made independently, then the aggregate prediction will exhibit overconfidence. Also, the single best prediction is probably better than the aggregate, since it isn’t compromised by so many terrible predictions.
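
As a toy illustration of why averaging helps (my own sketch, not an example from the book), suppose several forecasters each produce a noisy estimate of the same probability; the average of their forecasts tends to sit closer to the truth than a typical individual forecast, provided their errors aren’t all correlated.

```python
# A toy consensus: several noisy forecasters estimate the same probability,
# and their average usually beats a typical individual forecast.
import random

random.seed(1)
truth = 0.6
forecasts = [min(1.0, max(0.0, random.gauss(truth, 0.15))) for _ in range(20)]

consensus = sum(forecasts) / len(forecasts)
typical_error = sum(abs(f - truth) for f in forecasts) / len(forecasts)

print(f"consensus forecast: {consensus:.2f} (error {abs(consensus - truth):.2f})")
print(f"average individual error: {typical_error:.2f}")
```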

Keep incentives in check. Misaligned incentives frequently compromise predictions. Your local weatherman doesn’t want to be blamed when rain spoils your picnic, so he’ll report a 100-percent chance of rain when the true chance is only 70 percent. A stock trader, even if he senses a looming bubble burst, doesn’t want to sell stock for fear that the market rises in the short term — once the bubble bursts, every stock trader fails simultaneously, and they won’t all be fired. The National Weather Service didn’t want people to lose confidence in their predictive abilities by disclosing the sizable margin of error in their 1997 Grand Forks flood prediction. Lastly, climatologists (being convinced of the urgency of global warming) are reluctant to disclose the amount of uncertainty present in their predictions — despite the importance of uncertainty quantification to prediction, the opposition views uncertainty as a fundamental weakness. If you’re aware of your incentives (and the incentives underlying other predictions), you’ll do a better job of identifying and removing bias from your predictions and uncertainty estimates.
