25  Better forecasting

Below I examine one set of solutions for improving forecasts, which emerged from a tournament conducted by the United States’ Intelligence Advanced Research Projects Activity (IARPA). The tournament pitted teams of forecasters against each other in predicting political and economic events. The result was a decisive victory for a team called the Good Judgment Project (in fact, so decisive during the first two years that IARPA dropped the other teams from the later years of the tournament).

There were at least three distinctive features of the Good Judgment Project approach (Tetlock and Gardner (2016)):

The first was the aggregation of forecasts, an application of statistical groups.

The second was extremising the aggregated forecast, meaning shifting it toward the limits of 0% or 100%. If the aggregate forecast is a 70% probability that an event will occur, bump it up to 85%. If 30%, cut it to 15%.

The idea behind extremising is that no one in the group has access to all of the dispersed information. If everyone had all the available information, this would tend to raise their confidence, resulting in a more extreme forecast. Since we can’t give everyone all the information, extremising is an attempt to simulate what would happen if we could. Gaining the benefits of extremising, however, requires diversity. If everyone holds the same information, there is no sharing of information to be simulated.
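To make extremising concrete, here is a minimal sketch, assuming one common implementation: scale the aggregated forecast on the odds scale. This is an illustration rather than the Good Judgment Project’s exact algorithm, and the scaling parameter `a` is an assumption, chosen here so the output roughly matches the 70% to 85% example above.

```python
def extremise(p, a=2.0):
    """Shift an aggregated probability forecast toward 0% or 100%.

    Works on the odds scale: raise the odds to the power a (a > 1),
    then convert back to a probability. a = 2.0 is purely illustrative;
    a = 1 would leave the forecast unchanged.
    """
    odds = p / (1 - p)
    extremised_odds = odds ** a
    return extremised_odds / (1 + extremised_odds)

print(round(extremise(0.7), 2))  # ~0.84: a 70% forecast is pushed up
print(round(extremise(0.3), 2))  # ~0.16: a 30% forecast is pulled down
```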

25.1 Training superforecasters

Another novel element of the forecasting approach was training forecasters to overcome their biases (Chang et al. (2016)). In general, there is limited evidence that directly training people to overcome their biases improves decision making; this forecasting training is a rare exception.

Training was conducted in each of years 1 to 4 of the project, and comprised less than an hour of instruction each year. Forecasters were trained in either probabilistic reasoning or scenario development.

The scenario development training involved instruction in a process for thinking through a forecasting problem, such as developing coherent and logical probabilities (e.g. the probabilities across all options must add to 100%), exploring assumptions, identifying drivers, and considering best- and worst-case scenarios.
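As a small illustration of the coherence requirement, the sketch below proportionally rescales a set of raw probabilities over mutually exclusive, exhaustive options so that they sum to 100%. The options and numbers are hypothetical, and proportional rescaling is just one simple way to restore coherence.

```python
def make_coherent(forecasts):
    """Rescale probabilities over mutually exclusive, exhaustive
    options so that they sum to 1 (i.e. 100%)."""
    total = sum(forecasts.values())
    return {option: p / total for option, p in forecasts.items()}

# Hypothetical raw forecasts that incoherently sum to 110%:
raw = {"option A": 0.55, "option B": 0.35, "neither": 0.20}
print(make_coherent(raw))
# {'option A': 0.5, 'option B': 0.318..., 'neither': 0.181...}
```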

The probabilistic reasoning training covered the use of reference classes and base rates, how to use statistical groups, the use of statistical and mathematical models, and the need to be aware of one’s own typical cognitive biases (all topics covered in this course).
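One way to see how reference classes and base rates feed into a forecast is through Bayes’ rule: start from the base rate (the outside view) and update on case-specific evidence (the inside view). A minimal sketch, with entirely hypothetical numbers:

```python
def update_from_base_rate(base_rate, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: combine a base rate with case-specific evidence."""
    numerator = base_rate * p_evidence_if_true
    denominator = numerator + (1 - base_rate) * p_evidence_if_false
    return numerator / denominator

# Hypothetical: 67% of incumbents in the reference class were re-elected
# (the base rate), and the current polling is assumed to be twice as
# likely if the incumbent goes on to win than if they go on to lose.
print(round(update_from_base_rate(0.67, 0.8, 0.4), 2))  # ~0.8
```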

The training improved forecaster accuracy in all four years, with an improvement of around 10% for both types of training (as measured by Brier scores).
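The Brier score for a binary event is the mean squared difference between the probability forecasts and the outcomes (1 if the event occurred, 0 if not), so lower is better. A minimal sketch, with made-up forecasts and outcomes:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary
    outcomes. Lower scores indicate more accurate forecasts."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecasts on the same five questions:
outcomes = [1, 0, 1, 1, 0]
before_training = [0.6, 0.4, 0.5, 0.7, 0.3]
after_training = [0.7, 0.3, 0.6, 0.8, 0.2]
print(round(brier_score(before_training, outcomes), 3))  # 0.15
print(round(brier_score(after_training, outcomes), 3))   # 0.084
```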

The usefulness of the skills contained in the training was also evident in the practices of the best individual forecasters. The best forecasters tended to first take the outside view by asking what the base rate for that type of event is, and only then take the “inside view” by looking for information idiosyncratic to the particular question. The best forecasters also suffered less from scope insensitivity. When asked whether an event would occur in the next 6 or 12 months, regular forecasters predicted approximately the same probability for each timeframe. In contrast, the better forecasters tended to spot the difference in timeframes and adjust their probabilities accordingly.
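Scope sensitivity has a simple probabilistic logic. Under the simplifying assumption of a constant hazard rate, the 12-month probability implied by a 6-month forecast can be computed directly; the sketch below uses an arbitrary 20% 6-month probability.

```python
def probability_over_horizon(p_6_months, months):
    """Rescale a 6-month event probability to another horizon,
    assuming a constant hazard rate (a simplifying assumption)."""
    monthly_survival = (1 - p_6_months) ** (1 / 6)  # chance of no event in a month
    return 1 - monthly_survival ** months

# A scope-sensitive forecaster who says 20% over 6 months should say
# roughly 36% (not 20%) over 12 months:
print(round(probability_over_horizon(0.20, 12), 2))  # 0.36
```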

If you want more on this topic, listen to Robert Wiblin’s (n.d.) interview with Philip Tetlock on the 80,000 Hours podcast.