Making Predictions

Here’s a good David Brooks column on forecasting:

Tetlock and company gathered 3,000 participants. Some got put into teams with training, some got put into teams without. Some worked alone. Some worked in prediction markets. Some did probabilistic thinking and some did more narrative thinking. The teams with training that engaged in probabilistic thinking performed best. The training involved learning some of the lessons included in Daniel Kahneman’s great work, “Thinking, Fast and Slow.” For example, they were taught to alternate between taking the inside view and the outside view.

Suppose you’re asked to predict whether the government of Egypt will fall. You can try to learn everything you can about Egypt. That’s the inside view. Or you can ask about the category. Of all Middle Eastern authoritarian governments, what percentage fall in a given year? That outside view is essential.
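To make the inside/outside split concrete, here’s a minimal sketch of one way the two views can be blended. The Bayesian framing and all the numbers are mine for illustration, not Brooks’s or Tetlock’s procedure: start from the reference-class base rate (the outside view), then nudge it by however much the country-specific facts (the inside view) seem to be worth.

```python
# A minimal sketch (not from Brooks or Tetlock) of blending the outside
# view with the inside view. The base rate and likelihood ratio below are
# made-up numbers for illustration only.

def update_with_evidence(base_rate: float, likelihood_ratio: float) -> float:
    """Start from the outside-view base rate, then shift it with
    inside-view evidence expressed as a likelihood ratio (>1 means the
    case-specific facts favour 'the government falls')."""
    prior_odds = base_rate / (1.0 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Outside view: suppose ~5% of comparable governments fall in a given year.
base_rate = 0.05

# Inside view: suppose what we know about this particular country makes a
# collapse look three times likelier than for the reference class.
forecast = update_with_evidence(base_rate, likelihood_ratio=3.0)
print(f"Blended forecast: {forecast:.0%}")   # ~14%
```

The point of the sketch is only that the base rate anchors the forecast; the case-specific story moves it, but it shouldn’t replace it.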

…In the second year of the tournament, Tetlock and collaborators skimmed off the top 2 percent of forecasters across experimental conditions, identifying 60 top performers and randomly assigning them into five teams of 12 each. These “super forecasters” also delivered a far-above-average performance in Year 2. Apparently, forecasting skill can not only be taught, it can be replicated.

There’s a lot there, and do read it in conjunction with the supremely impressive *Thinking, Fast and Slow*. In that book, Kahneman tells a story about how he and a team of academics set out to write a new psychology curriculum.

After doing a bit of preparatory work on the textbook, Kahneman decided to poll the team members’ forecasts for how long it might take to finish the thing. Two years or so was the central estimate, with a range of roughly 1.5 to 2.5 years.

Then I had another idea. I turned to Seymour [who was surveyed in the original group that said two years, remember -DW], our curriculum expert, and asked whether he could think of other teams similar to ours that had developed a curriculum from scratch. Seymour said he could think of quite a few. I then asked whether he knew the history of these teams in some detail, and it turned out that he was familiar with several. I asked him to think of these teams when they had made as much progress as we had. How long, from that point, did it take them to finish their textbook projects?

He fell silent. When he finally spoke, it seemed to me that he was blushing, embarrassed by his own answer: “You know, I never realized this before, but in fact not all the teams at a stage comparable to ours ever did complete their task. A substantial fraction of the teams ended up failing to finish the job.”

…My anxiety rising, I asked how large he estimated that fraction was: “about 40%,” he answered… “Those who finished,” I asked, “how long did it take them?” “I cannot think of any group that finished in less than seven years.”

“When you compare our skills and resources to those of other groups, how good are we?” “We’re below average,” he said, “but not by much.”
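Just to spell out how big the gap is, here’s a back-of-the-envelope comparison using only the numbers in the quote. The framing is mine, not Kahneman’s:

```python
# A back-of-the-envelope comparison (my framing, not Kahneman's) of the
# team's inside-view estimate against Seymour's outside-view numbers.

inside_estimate_years = 2.0      # the team's own central forecast
p_never_finish = 0.40            # Seymour: ~40% of comparable teams never finished
min_completion_years = 7.0       # Seymour: no group finished in under seven years

# Even ignoring the 40% of teams that never finish at all, the most
# optimistic outside-view outcome is ~3.5x the inside-view estimate.
print(min_completion_years / inside_estimate_years)   # 3.5
```

And that ratio is for a team that rates itself below average for the reference class.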

Forecasting is subject to a hurricane of cognitive bias.

Much of it comes down, I think, to the fact that the benefit of a good forecast, namely being right, is normally not realized for a while. The short-term cost of unpopular or awkward or ‘stupid’ forecasts, on the other hand, is real and immediate and painful.

Humans have high discount rates. Unless the long-term payoff is awesome, forecasting will always be about something other than being right.
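To put a toy number on that, here’s a quick discounting sketch. The rates, payoffs, and costs are invented for illustration; the only point is what a high discount rate does to a payoff that arrives years later.

```python
# A toy illustration (my numbers, not a model of anything) of why high
# discount rates push against honest forecasting: discount a payoff that
# arrives years from now and compare it with an immediate cost.

def present_value(payoff: float, annual_rate: float, years: float) -> float:
    """Exponential discounting: value today of a payoff received later."""
    return payoff / (1.0 + annual_rate) ** years

credit_for_being_right = 100.0    # arrives only when the forecast resolves
years_until_resolution = 5.0
immediate_awkwardness_cost = 30.0

for rate in (0.05, 0.50):         # a patient discounter vs. an impatient one
    pv = present_value(credit_for_being_right, rate, years_until_resolution)
    print(f"rate={rate:.0%}: delayed credit worth {pv:.1f} today "
          f"vs. an immediate cost of {immediate_awkwardness_cost:.1f}")

# At 5% the delayed credit (~78) dwarfs the immediate cost; at 50% it
# shrinks to ~13, and the awkward forecast no longer looks worth making.
```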
