“Law of Small Numbers Redux” – Mark Blessington & Karl Hellman
In a famous article that contributed a great deal to Kahneman’s eventual Nobel Prize in Economics, Amos Tversky and Daniel Kahneman presented their discovery that people, including noted experts in statistics and probability theory, often exaggerate the validity of conclusions drawn from small samples. [i]
The article, “Belief in the Law of Small Numbers,” poked fun at the scientific community. The authors knew their audience was well versed in the statistical principle called the law of large numbers. Yet their research indicated that scientists routinely behaved as though the law applied to small samples as well, imputing inappropriately high levels of statistical significance to studies with small sample sizes.
The recent presidential election forecast fiasco is a repeat of the phenomenon Tversky and Kahneman highlighted. All major news media incorrectly predicted that Hillary Clinton would win the election. The prevalence of the error suggests a systematic bias shared by pollsters, pundits, and reporters alike. And Tversky and Kahneman’s work points us to the root problem.
It is extremely difficult to predict a presidential election. The popular vote must be accurately predicted for 50 states plus the District of Columbia.[ii] Each of the 51 forecasts has some probability of being correct. If each probability is 99%, then the compound probability of calling all 51 independent elections correctly is 60% (.99^51). If the average forecast accuracy drops to 90% per state, the compound probability falls below 1% (.90^51 ≈ .005); the forecast is then almost certain to miss at least one state. Is it reasonable to claim an average accuracy of 99% per state? Certainly, state-level presidential election surveys are not that accurate.
The lowest probability of a Clinton win published by a major election forecaster on election morning was 71% (see Learning from Forecast Disasters). The mathematical implication of such a high figure is that many state elections were treated as absolute certainties. For example, if we claim absolute certainty on how 44 states will vote, then only seven swing states remain in play. If those seven states have an average forecast accuracy of 95%, the compound probability of calling all of them correctly is 70% (.95^7), right in line with the published figure. Is it reasonable to claim absolute certainty for 44 state-level presidential elections? Certainly, there will be an occasional surprise among 44 non-swing states.
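For readers who want to check the arithmetic, here is a minimal sketch in Python (our language choice; the article itself contains no code). It assumes, as the scenarios above implicitly do, that state-level forecasts are independent, and it uses the illustrative 99%, 90%, and 95% per-state accuracies from the text:

```python
# Compound probability of calling every contest correctly,
# assuming independent state-level forecasts.

def compound_accuracy(per_state: float, n_states: int) -> float:
    """Probability that all n_states independent forecasts are correct."""
    return per_state ** n_states

print(f"99% per state, 51 states: {compound_accuracy(0.99, 51):.1%}")  # ~59.9%
print(f"90% per state, 51 states: {compound_accuracy(0.90, 51):.2%}")  # ~0.46%
print(f"44 states certain, 7 swing states at 95%: "
      f"{compound_accuracy(0.95, 7):.1%}")                             # ~69.8%
```

The independence assumption keeps the arithmetic simple; it is not a claim about how state polls actually behave, since polling errors tend to be correlated across states.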
So how is the recent election forecast disaster relevant to marketing managers? Here are three important lessons:
- Remember: predicting the success of new products or new product features is very difficult. New product failure rates are very high, so treat predictions of a high probability of success with extreme skepticism.
- It is very likely that market research firms use methods similar to those of presidential election pollsters. Indeed, some pollsters are also well-known market researchers (e.g., Gallup). Ask your market researchers to enumerate all assumptions and inputs used to calculate probabilities of success or confidence intervals, and be suspicious of overly optimistic or generous assumptions. A short sketch after this list shows how strongly sample size alone drives a survey’s margin of error.
- Perhaps the best lesson to draw from the recent forecasting disaster is to adopt a long-standing and well-regarded rule from P&G: a proposed new product must be at least twice as good as the market leader to warrant investment. A margin that large cannot be manufactured by highly nuanced or subtly biased applications of statistics.
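As promised above, here is a minimal sketch of the textbook margin-of-error formula for a survey proportion (a generic illustration, not any particular research firm’s method). It shows why conclusions from small samples deserve the skepticism Tversky and Kahneman urged:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of a 95% confidence interval for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A 50/50 preference estimated from samples of various sizes:
for n in (100, 400, 1000):
    print(f"n = {n:>5}: +/- {margin_of_error(0.5, n):.1%}")
# n =   100: +/- 9.8%
# n =   400: +/- 4.9%
# n =  1000: +/- 3.1%
```

Quadrupling the sample size only halves the margin of error, which is why small-sample studies quoted with tight confidence intervals should prompt questions about the assumptions behind them.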
Mark Blessington is a Partner at Consentric Marketing.
Dr. Karl Hellman is Managing Director of Consentric Marketing.
NOTES
[i] Amos Tversky and Daniel Kahneman, “Belief in the Law of Small Numbers,” Psychological Bulletin, 1971, 76(2), 105–110.
[ii] To make matters even more complicated, Maine and Nebraska use the congressional district method, so an additional five districts must be forecast. In total, 56 state and district predictions are needed to produce a single presidential vote prediction. For this analysis, we treat the number of elections to forecast as 51.