Ranking The Pundits
Gary Wyckoff and his students at Hamilton College analyzed the predictions of a number of op-ed columnists and TV pundits and found that most of them were bad at making predictions, and that conservatives tended to be worse than liberals.
Jim Romenesko (“Claim: Krugman is top prognosticator; Cal Thomas is the worst”):
A Hamilton College class and their public policy professor analyzed the predictions of 26 pundits — including Sunday morning TV talkers — and used a scale of 1 to 5 to rate their accuracy. After Paul Krugman, the most accurate pundits were Maureen Dowd, former Pennsylvania Governor Ed Rendell, U.S. Senator Chuck Schumer (D-NY), and former House Speaker Nancy Pelosi. “The Bad” list includes Thomas Friedman, Clarence Page, and Bob Herbert.
Press Release (“Pundits as Accurate as Coin Toss According to Study”):
Op-ed columnists and TV’s talking heads build followings by making bold, confident predictions about politics and the economy. But rarely are their predictions analyzed for accuracy.
Now, a class at Hamilton College led by public policy professor P. Gary Wyckoff has analyzed the predictions of 26 prognosticators between September 2007 and December 2008. Their findings? Anyone can make as accurate a prediction as most of them just by flipping a coin.
Their research paper, “Are Talking Heads Blowing Hot Air? An Analysis of the Accuracy of Forecasts in the Political Media” will be presented via webcast on Monday, May 2, at 4:15 p.m., at www.hamilton.edu/pundit. The paper will also be available at that address at that time. Questions during the presentation can be posed via Twitter using #hcpundit.
The Hamilton students sampled the predictions of 26 individuals who wrote columns in major print media and who appeared on the three major Sunday news shows – Face the Nation, Meet the Press, and This Week – and evaluated the accuracy of 472 predictions made during the 16-month period. They used a scale of 1 to 5 (1 being “will not happen,” 5 being “will absolutely happen”) to rate the accuracy of each, and then divided them into three categories: The Good, The Bad, and The Ugly.
The students found that only nine of the prognosticators they studied could predict more accurately than a coin flip. Two were significantly less accurate, and the remaining 14 were not statistically any better or worse than a coin flip.
The top prognosticators – led by New York Times columnist Paul Krugman – scored above five points and were labeled “Good,” while those scoring between zero and five were “Bad.” Anyone scoring less than zero (which was possible because prognosticators lost points for inaccurate predictions) was put into “The Ugly” category. Syndicated columnist Cal Thomas came up short and scored the lowest of the 26.
Even when the students eliminated political predictions and looked only at predictions for the economy and social issues, they found that liberals still did better than conservatives at prediction. After Krugman, the most accurate pundits were Maureen Dowd of The New York Times, former Pennsylvania Governor Ed Rendell, U.S. Senator Chuck Schumer (D-NY), and former House Speaker Nancy Pelosi – all Democrats and/or liberals. Also landing in the “Good” category, however, were conservative columnists Kathleen Parker and David Brooks, along with Bush Administration Treasury Secretary Hank Paulson. Left-leaning columnist Eugene Robinson of The Washington Post rounded out the “Good” list.
Those scoring lowest – “The Ugly” – with negative tallies were conservative columnist Cal Thomas; U.S. Senator Lindsey Graham (R-SC); U.S. Senator Carl Levin (D-MI); U.S. Senator Joe Lieberman, a McCain supporter and Democrat-turned-Independent from Connecticut; Sam Donaldson of ABC; and conservative columnist George Will.
Landing between the two extremes – “The Bad” – were Howard Wolfson, communications director for Hillary Clinton’s 2008 campaign; former Arkansas Governor Mike Huckabee, a hopeful in the 2008 Republican primary; former House Speaker Newt Gingrich, a Republican; Sen. John Kerry of Massachusetts, the Democratic nominee for president in 2004; liberal columnist Bob Herbert of The New York Times; Andrea Mitchell of NBC; New York Times columnist Tom Friedman; the late David Broder, former columnist for The Washington Post; Chicago Tribune columnist Clarence Page; New York Times columnist Nicholas Kristof; and Hillary Clinton.
The group also found a link between conditional predictions and accuracy: a prediction that was conditional (“If A, then B”) was less likely to be accurate. Finally, prognosticators with a law degree were more likely to be wrong.
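The press release describes the scoring only loosely: pundits earned points for accurate predictions, lost points for inaccurate ones, and were bucketed by total score. A minimal sketch of one scheme consistent with that description follows; the per-prediction point values are my assumption, since the exact formula is not given in the release, though the Good/Bad/Ugly cutoffs (above five, zero to five, below zero) are from the study.

```python
def score_prediction(confidence, came_true):
    """Score one prediction on the study's 1-to-5 confidence scale.

    Hypothetical point values: the release says accurate predictions
    earn points and inaccurate ones lose them, but does not publish
    the exact mapping. Here the scale is centered at 3 ("no idea"),
    so a confident hit (5, True) earns +2, a confident miss (5, False)
    costs -2, and a correct "will not happen" (1, False) earns +2.
    """
    delta = confidence - 3
    return delta if came_true else -delta


def categorize(total_score):
    """Bucket a pundit by total score, per the study's stated cutoffs."""
    if total_score > 5:
        return "Good"
    if total_score >= 0:
        return "Bad"
    return "Ugly"
```

Under this sketch, a pundit whose scored predictions sum to 7 lands in “The Good,” one at 3 in “The Bad,” and any negative total in “The Ugly.”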
The full report, “Are Talking Heads Blowing Hot Air? An Analysis of the Accuracy of Forecasts in the Political Media,” is available in PDF form. The methodology strikes me as reasonable enough.
But, aside from the “small n” problem–that is, none of the pundits has enough scored predictions to make for a statistically meaningful sample–there are a host of issues.
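The small-n objection is easy to make concrete. With 472 predictions spread across 26 pundits, the average pundit has roughly 18 scored calls, and at that sample size even a respectable hit rate cannot be statistically distinguished from coin flipping. A quick sketch using an exact binomial test (the per-pundit count of 18 is an illustrative average, not a figure from the report):

```python
from math import comb

def binom_two_sided_p(hits, n):
    """Exact two-sided p-value (doubled upper tail, capped at 1) for
    observing `hits` or more successes in `n` fair coin flips.
    Assumes hits >= n/2, which is the case of interest here."""
    tail = sum(comb(n, k) for k in range(hits, n + 1)) / 2**n
    return min(1.0, 2 * tail)

# ~18 predictions per pundit (472 predictions / 26 pundits).
# Even getting 11 of 18 right (a 61% hit rate) is nowhere near
# statistically distinguishable from a coin flip:
print(round(binom_two_sided_p(11, 18), 3))  # → 0.481
```

A pundit would need to hit on about 14 of 18 predictions before the null hypothesis of coin-flip accuracy could be rejected at conventional significance levels, which is why so few of the 26 separate from chance.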
Punditry is not about predictions.
Prediction is really an odd way to judge political pundits since guessing the future is a mug’s game. Television talking heads shows, especially, love to do it because it makes for entertaining dialog. The correct answer–Who the hell knows?–is not particularly interesting, so pundits who wish to be invited back hazard a guess.
The thing that separates a good pundit from a hack is the ability to analyze facts in an illuminating framework and a willingness to adjust his view as new information comes in.
The time frame is too small and biases the outcomes.
The period from September 2007 to December 2008 coincides with the final stage of the Bush Administration, the 2008 presidential election, and the global financial crisis.
The last year of the Bush Administration was off-the-charts bad for the Republican Party. We would expect Democratic pundits to make more accurate predictions about bad outcomes for Republicans and good outcomes for Democrats. Most of these people have been blathering on the public record for years. How did they do in, say, 1994?
During the primary phase of the 2008 campaign, Democrats had two plausible candidates and the Republicans had a half dozen. The sheer odds make it more likely that the Democrats are more accurate. Additionally, the early Republican frontrunner, Rudy Giuliani, performed horrendously. Mike Huckabee, a virtual unknown when the period began, won the Iowa Caucuses and made a surprisingly strong showing.
Once the race settled down to an Obama-McCain contest, it became a binary choice prediction-wise. Obama won. Presumably, all the Democrats correctly predicted that. Until late in the game, most Republican pundits were naturally going to spin ways that McCain could pull it off.
The financial meltdown took place while a Republican was in office and was more catastrophic than most expected. We would expect Democratic pundits to predict worse outcomes under a Republican president.
Pundits make different types of predictions.
Unfortunately, the report does not show the raw data; it presents only summary counts: how many predictions, how many right, and how many wrong. That means we can’t look at the actual predictions. Some things are obviously harder to predict than others. Some pundits are more cautious about predicting things that are hard to predict, sticking instead to relative certainties and near events rather than far. Some pundits are only asked on to talk about things within the scope of their expertise, while others are expected to offer an opinion on just about everything.
Pundits have different goals.
As noted earlier, there are thoughtful analysts who are biased but fair. That is, they have an ideological point of view, likes and dislikes, and a rooting interest in the outcome of elections, policy debates, and other things they’re talking about and yet acknowledge the strong points of the opposite argument. Others are hacks, either because they simply lack the analytical tools to think outside their narrow worldview or, more often, they’ve calculated that extreme partisanship and churlishness is the key to longevity.
But many of the “pundits” rated in the study aren’t pundits at all; they’re politicians or political operatives. These people all meet my definition of “hack,” but that’s only because an unfair framework is being applied to them. That is, they’re not in the business of presenting accurate analysis but rather of advancing a specific agenda, whether it’s their own re-election or the benefit of their party or cause. They’re propagandists by design, not analysts.
Indeed, one of the main reasons I’ve long since stopped watching the talking heads shows is that this group tends to dominate the guest list. It’s simply not interesting to hear what the designated spokesman for Position X thinks about Position X. But most of the shows in the post-Crossfire era have operated in that framework, even having hosts that play characters. The classic example of the latter was the late Robert Novak, a terrific reporter and columnist who was an extreme hack as a television talking head, playing the character Bob Novak, Prince of Darkness.
An exception to this format is ABC’s This Week, which began assembling a rotating panel of usually interesting roundtable discussants during the George Stephanopoulos era. George Will remains a regular and Paul Krugman is on most weeks as well. They’re paired with two or three guests, most of whom are quite good. Even the partisan operatives, notably Donna Brazile and Matthew Dowd, come close enough to taking off their activist hats to make enjoyable commentators. But I don’t have any idea how good they are as prognosticators; I’m just interested in their ability to frame the discussion in interesting ways and incorporate known facts into their argument in a fair manner.