The Making of a Top Forecaster: Techniques to Boost Accuracy

Author: Vanessa Pineda
Published: Nov 12, 2020 05:08PM UTC

Foretell is CSET's crowd forecasting pilot project focused on technology and security policy. It connects historical and forecast data on near-term events with the big-picture questions that are most relevant to policymakers.


More than 500 forecasters have received accuracy scores on 20 questions since Foretell’s launch. The Foretell team wanted to know what the top 1-2% of forecasters (as of November 11, 2020) were doing to achieve that performance. All of the forecasters we talked to¹ shared two main techniques that research supports for boosting accuracy: 1) creating an initial forecast based on a question’s broader properties (e.g., base rates or historical trends) before modifying that forecast with case-specific properties, and 2) updating those forecasts frequently and gradually to take new information into account.
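
As background, the standard way to score probabilistic forecasts is the Brier score; whether Foretell uses exactly this rule is an assumption here, but a minimal Python sketch shows how such scoring rewards confident, correct forecasts:

```python
def brier_score(forecast_probs, outcome_index):
    """Brier score for one question: sum of squared differences
    between forecast probabilities and the realized outcome.
    0 is a perfect score; lower is better."""
    outcome = [1.0 if i == outcome_index else 0.0
               for i in range(len(forecast_probs))]
    return sum((p - o) ** 2 for p, o in zip(forecast_probs, outcome))

# A confident, correct forecast beats a 50/50 hedge:
print(brier_score([0.9, 0.1], outcome_index=0))  # ~0.02
print(brier_score([0.5, 0.5], outcome_index=0))  # 0.50
```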

Looking to the Past for Answers about the Future

When we asked the top forecasters how they first approached a question, most responded similarly to forecaster @Odin, a former academic with a PhD in political science, who has been participating in geopolitical forecasting since 2014: “I start by thinking about the past. For some questions, there is a past history in the form of numerical data or a time series, as with financial markets. For questions that are about discrete events, I try to think of similar cases that can be compared to the event in question.”

Molly Hickman (@mollygh), a Computer Science Master’s candidate at Virginia Tech volunteering with the project as a Foretell Ambassador, said: “Typically, I start by thinking, what are similar things that have happened before? This is what [social scientists] Daniel Kahneman and Amos Tversky called ‘reference classes.’ [On questions where I’m asked to forecast a trend], I just look at the historical data [provided] and look for patterns there.”

Nuño Sempere (@Loki), a programmer, independent researcher, and hobbyist forecaster based in Austria who currently holds the leaderboard’s top spot, said he too starts with a historical base rate (i.e., the prior probability drawn from similar cases) and then adjusts. When he cannot gather sufficient historical context, he is a fan of using the ignorance prior or the Laplace prior.²,³
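
For illustration, here is a minimal Python sketch of those two fallback priors, following the definitions in footnotes 2 and 3 (the Laplace prior below uses the standard rule of succession):

```python
def ignorance_prior(num_options):
    """Footnote 2: with no information at all, spread probability
    evenly across all mutually exclusive options."""
    return [1.0 / num_options] * num_options

def laplace_prior(successes, trials):
    """Footnote 3: Laplace's rule of succession. After observing
    `successes` occurrences in `trials` periods, estimate the
    probability of an occurrence in the next period."""
    return (successes + 1) / (trials + 2)

print(ignorance_prior(4))   # [0.25, 0.25, 0.25, 0.25]
print(laplace_prior(0, 4))  # ~0.17: no occurrences in 4 weeks so far
```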

Creating a baseline forecast (or “prior”) from reference classes or historical data is part of what Kahneman and Tversky called the “outside view.” A common forecasting bias is to give too much weight to the particulars of the case at hand and too little to the more general properties of the situation. Starting with the outside view minimizes that bias.
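
As a concrete, and entirely hypothetical, example: an outside-view starting forecast can be as simple as the event’s frequency within a hand-assembled reference class:

```python
# Hypothetical reference class: did a comparable event occur
# in each of eight similar past cases?
reference_class = [True, False, False, True, False, False, False, True]

base_rate = sum(reference_class) / len(reference_class)
print(f"Outside-view starting forecast: {base_rate:.0%}")  # 38%
```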

After creating a baseline forecast, you can then adjust it using case-specific information, or the “inside view.” Sempere (@Loki) said he adjusts his baseline probability by trying to figure out whether he has any insights into the particular case that make it different from the more general cases from which he built his baseline, “like details about [a country’s] politics or things I've found out about the [forecasting] question after doing some research.”
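
The forecasters don’t describe a formula for these adjustments, but Bayes’ rule offers one principled way to fold case-specific evidence into a base rate; the likelihood ratio below is a made-up number standing in for a forecaster’s judgment:

```python
def bayes_adjust(prior, likelihood_ratio):
    """Shift a base-rate prior by case-specific evidence, expressed
    as a likelihood ratio: how much more likely the evidence is
    if the event will happen than if it won't."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Start from a 38% base rate; case-specific research turns up
# evidence judged twice as likely if the event is going to happen:
print(f"{bayes_adjust(0.375, 2.0):.0%}")  # 55%
```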

Forecaster @dz624, a Public Administration PhD candidate at a prominent university in China, likes to modify his baseline probability in response to others’ viewpoints to minimize bias: “The sources I use come from mainstream news, reputable blogs, think tanks, industry-related organizations, government entities, and government websites… My main concern [is] the credibility of the sources.”

Changing Your Mind Pays Off

After generating a baseline forecast (outside view) and modifying it in light of case-specific information (inside view), the forecasters we talked to submitted that initial probability. But that was not the end of the process: they went back and made updates regularly (Foretell allows you to set reminder alerts after submitting a forecast).

“I proactively re-evaluate all of my existing forecasts systematically every week or so, rather than wait for news events to prompt an update,” said forecaster @Odin.

A recent article by Pavel Atanasov in Scientific American, “How the Best Forecasters Predict Events Like Election Outcomes,” highlights research that supports the forecasters’ position on frequent updating, showing that the most accurate forecasters tend to change their minds over time. They revise their forecasts to reflect new information, and do so in small increments, weighing the significance of the new information against the information they had before.

“I update forecasts about once a week. I make either small or slightly bigger changes. I have a lot of <5% changes, but also a couple >10%,” said Sempere (@Loki). 

Sempere thinks of a 5-10% change as a “bigger” change, and this tracks with best-practice forecasting. Atanasov’s article points out that too big a change in probability -- such as going from 40% to 80% -- can be a sign of recency bias, the tendency to overemphasize new information. A related risk is availability bias, the tendency to place too much importance on things that come to mind easily. For example, it’s common to overestimate the likelihood of a plane crash after reading about one in the news. The best forecasters resist overreacting to recent or more memorable information.
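
One simple way to build in that restraint (a sketch, not a method any of the forecasters describes) is to move the standing forecast only partway toward whatever probability the newest information suggests:

```python
def gradual_update(current, suggested, weight=0.25):
    """Move the standing forecast only part of the way toward the
    probability the newest information suggests, as a guard
    against recency and availability bias."""
    return current + weight * (suggested - current)

forecast = 0.40
forecast = gradual_update(forecast, suggested=0.80)
print(f"{forecast:.0%}")  # 50%, not an abrupt jump to 80%
```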

It’s All About Practice 

Can anyone become a more accurate forecaster? Yes - that’s the good news. Foretell’s top forecasters are not domain experts in the subjects of the questions they answered (and even domain experts are easily susceptible to bias). While @Odin and Sempere (@Loki) have participated in similar forecasting tournaments as hobbyists, @dz624 and Hickman (@mollygh) had never practiced probabilistic forecasting until now.

Research from The Good Judgment Project, the winning team in the U.S. Intelligence Community-sponsored ACE forecasting tournament that ran from 2011 to 2015, shows how practice and training can greatly improve one’s forecasting ability. Over the course of hundreds of geopolitical questions, the project identified the top 1-2% of forecasters as “superforecasters,” a group that consistently outperformed intelligence analysts with access to classified information. The training and methods used with the superforecasters were popularized in Philip Tetlock’s book with Dan Gardner, Superforecasting: The Art and Science of Prediction.

The Good Judgment Project’s research also showed the value of working in teams. Teams that debated their forecasts (usually virtually) performed better than individuals working alone. Perhaps that’s yet another technique benefiting top forecaster Sempere (@Loki), who meets regularly with his two teammates, @yagudin (also in the top 1%) and @elifland, to discuss and evaluate questions from different angles.

“Each week, we pick two questions on Foretell to independently research. It's rare that we each arrive at the same [probability] estimates, so we then have some lively discussion for around one hour on Sundays,” he said.
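
Sempere doesn’t say how (or whether) his team combines its final numbers, but a common way to pool independent estimates is to average them in log-odds space, optionally “extremizing” the result, which Good Judgment Project research found can sharpen aggregate forecasts:

```python
import math

def pool_forecasts(probs, extremize=1.0):
    """Combine independent probability estimates by averaging them
    in log-odds space; extremize > 1 pushes the pooled forecast
    away from 50%."""
    log_odds = [math.log(p / (1.0 - p)) for p in probs]
    pooled = extremize * sum(log_odds) / len(log_odds)
    return 1.0 / (1.0 + math.exp(-pooled))

team = [0.60, 0.70, 0.85]
print(f"{pool_forecasts(team):.0%}")       # ~73%
print(f"{pool_forecasts(team, 2.0):.0%}")  # ~88% after extremizing
```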

Foretell is still a long way from being able to identify superforecasters, and doing so is not the focus of the project. Yet Foretell’s best performers (so far) are already employing some of the same techniques common among superforecasters: moving from a more general forecast (outside view) to one that takes case-specific information into account (inside view), and regularly updating their forecasts in response to new information. These techniques can help reduce bias and make anyone a more accurate forecaster -- on Foretell and in everyday decision making.



Footnotes

1. Names of forecasters are omitted if they chose to remain anonymous, in keeping with Foretell’s privacy policy.

2. Ignorance prior: if you don't know anything at all, your probabilities should be evenly distributed among all the options.

3. Laplace prior (Laplace's rule of succession): if all you know is that something hasn't happened in any of the last n periods, the probability of it happening in the next period can be estimated as 1/(n+2). For example, if something hasn't happened in the last four weeks, the probability of it happening in the fifth week can be forecast as 1/6, or about 17%.

To stay updated on what we’re doing, follow us on Twitter @CSETForetell.

Vanessa Pineda, Director of Professional Services, Cultivate Labs