Oct 29

There was a New York Times article last week headlined, “In ‘97, U.S. Panel Predicted a North Korea Collapse in 5 Years.”

From the NYT article: A team of government and outside experts convened by the Central Intelligence Agency concluded in 1997 that North Korea’s economy was deteriorating so rapidly that the government of Kim Jong-il was likely to collapse within five years, according to declassified documents made public on Thursday.

This forecasting case study makes for a good addendum to Philip Tetlock’s “Expert Political Judgment” (excerpts from the “New Yorker” review below). Tetlock discusses the need to put beliefs in testable forms, the tendency of statistical models to outperform human judgment, and the confirmation bias that leads black swans to be overlooked.

From the “New Yorker” review: The accuracy of an expert’s predictions actually has an inverse relationship to his or her self-confidence, renown, and, beyond a certain point, depth of knowledge…

Tetlock is a psychologist (he teaches at Berkeley), and his conclusions are based on a long-term study that he began twenty years ago. He picked two hundred and eighty-four people who made their living “commenting or offering advice on political and economic trends,” and he started asking them to assess the probability that various things would or would not come to pass, both in the areas of the world in which they specialized and in areas about which they were not expert. Would there be a nonviolent end to apartheid in South Africa? … Would Canada disintegrate? (Many experts believed that it would, on the ground that Quebec would succeed in seceding.) And so on. By the end of the study, in 2003, the experts had made 82,361 forecasts. Tetlock also asked questions designed to determine how they reached their judgments, how they reacted when their predictions proved to be wrong, how they evaluated new information that did not support their views, and how they assessed the probability that rival theories and predictions were accurate…

Human beings who spend their lives studying the state of the world, in other words, are poorer forecasters than dart-throwing monkeys…

Tetlock also found that specialists are not significantly more reliable than non-specialists in guessing what is going to happen in the region they study. Knowing a little might make someone a more reliable forecaster, but Tetlock found that knowing a lot can actually make a person less reliable … And the more famous the forecaster, the more overblown the forecasts…

“Expert Political Judgment” is just one of more than a hundred studies that have pitted experts against statistical or actuarial formulas, and in almost all of those studies the people either do no better than the formulas or do worse…

Tetlock’s experts were also no different from the rest of us when it came to learning from their mistakes. Most people tend to dismiss new information that doesn’t fit with what they already believe. Tetlock found that his experts used a double standard: they were much tougher in assessing the validity of information that undercut their theory than they were in crediting information that supported it. The same deficiency leads liberals to read only The Nation and conservatives to read only National Review. We are not natural falsificationists: we would rather find more reasons for believing what we already believe than look for reasons that we might be wrong. In the terms of Karl Popper’s famous example, to verify our intuition that all swans are white we look for lots more white swans, when what we should really be looking for is one black swan …

[E]xperts routinely misremembered the degree of probability they had assigned to an event after it came to pass. They claimed to have predicted what happened with a higher degree of certainty than, according to the record, they really did. When this was pointed out to them, by Tetlock’s researchers, they sometimes became defensive.

And, like most of us, experts violate a fundamental rule of probabilities by tending to find scenarios with more variables more likely …
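The fundamental rule in question is the conjunction rule: a compound scenario can never be more probable than any one of its components. In symbols (the numbers are illustrative, not from the study):

    P(A \cap B) \le \min\{\,P(A),\; P(B)\,\}

    % e.g., if P(\text{regime collapse}) = 0.3, then for any added detail D,
    % P(\text{regime collapse} \cap D) \le 0.3 -- a richer, more vivid story
    % cannot be a likelier one, however plausible the extra detail feels.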

[Worse forecasters are] thinkers who ‘know one big thing,’ aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who ‘do not get it,’ and express considerable confidence that they are already pretty proficient forecasters, at least in the long term. [Better forecasters are] thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible ‘ad hocery’ that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess.

Tetlock also has an unscientific point to make, which is that “we as a society would be better off if participants in policy debates stated their beliefs in testable forms” (that is, as probabilities), “monitored their forecasting performance, and honored their reputational bets.”
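One standard yardstick for that kind of performance monitoring is the Brier score, the probability-scoring rule on which Tetlock’s own calibration measures are built. A minimal sketch in Python, with hypothetical forecasts rather than data from the study:

    # Minimal sketch: scoring probabilistic forecasts with the Brier score.
    # All numbers below are hypothetical, not data from Tetlock's study.

    def brier_score(forecasts, outcomes):
        """Mean squared gap between stated probabilities and 0/1 outcomes.
        0.0 is perfect; an uninformed 50/50 guesser scores 0.25."""
        return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

    # An expert who said "90% certain" of four events, three of which happened:
    print(brier_score([0.9, 0.9, 0.9, 0.9], [1, 1, 1, 0]))      # 0.21
    # A more diffident forecaster, at 75% on the same events, scores better:
    print(brier_score([0.75, 0.75, 0.75, 0.75], [1, 1, 1, 0]))  # 0.1875

Lower is better: the overconfident record scores worse, which is exactly the penalty a “reputational bet” would impose.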

In the macroeconomic sphere the corollary to Tetlock’s work is this 2002 paper by Yale economist Owen Lamont. Lamont wrote, “[Wall Street forecasts are] not necessarily designed to minimize squared forecast errors; rather, forecasts may be set to optimize profits or wages, credibility, shock value, marketability, political power (in the case of government forecasts), or more generally to minimize some loss function.”
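To make Lamont’s point concrete: the forecast that minimizes expected loss depends entirely on the loss function. Under squared error it is the mean of the outcome distribution; under an asymmetric loss it shifts toward a quantile. A minimal Python sketch with made-up numbers (the 3-to-1 penalty for undershooting is an assumption chosen for illustration, not Lamont’s specification):

    # Sketch: optimal point forecasts under different loss functions.
    # The outcome distribution and loss penalties are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    outcomes = rng.normal(2.0, 1.0, 100_000)  # e.g., possible GDP growth, mean 2%

    def expected_loss(forecast, outcomes, loss):
        return loss(forecast - outcomes).mean()

    squared = lambda e: e ** 2
    # Undershooting costs 3x overshooting -- say, a strategist punished more
    # for missing rallies than for calling phantom ones.
    asymmetric = lambda e: np.where(e > 0, e, -3.0 * e)

    grid = np.linspace(-1.0, 5.0, 601)
    print(grid[np.argmin([expected_loss(f, outcomes, squared) for f in grid])])
    # ~2.00: under squared error, report the mean.
    print(grid[np.argmin([expected_loss(f, outcomes, asymmetric) for f in grid])])
    # ~2.67: under the asymmetric loss, roughly the 75th percentile.

A forecaster graded this way is not irrational for being persistently “too optimistic”; the upward bias is the optimum.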

Also, following Greenspan’s Fed chairmanship, the WSJ’s Greg Ip described his approach to policy-making in the context of Tetlock’s book.

