Beware The Exit Polls

The numbers aren't real. Not yet, anyway.

Nate Cohn, “Exit Polls: Why They So Often Mislead” (Nov. 4, 2014)

Did you think John Kerry was poised to win the presidency? That Scott Walker was close to losing the 2012 governor’s recall election in Wisconsin? Do you believe that the black share of the electorate in North Carolina dropped to 23 percent in 2012, from 26 percent in 2004?

If you said “yes” to any of those things, you probably have too much faith in exit polls.

Don’t get me wrong: Exit polls are an exciting piece of Election Day information. They’re just not perfect. The problem with them is that most analysts and readers treat them as if they’re infallible.

The problems begin early on election evening, when the first waves of exit polls are invariably leaked and invariably show a surprising result somewhere. You’re best off ignoring these early returns, which are unweighted — meaning the demographic mix of the respondents is not adjusted to match any expectations for the composition of the electorate. The first waves also don’t even include all of the exit poll interviews.

The problems continue with the final waves, which analysts pore over in the days after the election and treat as a definitive account of the composition of the electorate. Some foolish journalists might write entire posts that assume that the black share of the electorate was 15 percent in Ohio. In reality, the exit polls just aren’t precise enough to justify making distinctions between an electorate that’s 15 percent black and, say, 13 percent black.

The imperfections of the exit polls are not hard to show. Here are two quick examples, based on official voter turnout statistics:

The exit polls showed that voters over age 65 were 18 percent of the electorate in Iowa in 2008, but 26 percent in 2012. The official state turnout statistics instead show that the share increased to 23.6 percent, from 21.9 percent, over the same span.

In North Carolina, the exit polls showed that the black share of the electorate dropped to 23 percent in 2008, from 26 percent in 2004, and held steady at 23 percent in 2012. The state turnout statistics say the share rose from 18.6 percent in 2004 to 22.3 percent in 2008, and then to 23.1 percent in 2012.


How can the exit polls be off by so much? The biggest thing to remember is that they’re just polls! They’re usually based on a sample of a few dozen precincts in a state, sometimes including not many more than 1,000 respondents. Like every other type of survey, they’re subject to a margin of error from sampling, plus additional error resulting from various forms of response bias.
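The sampling error alone is enough to blur distinctions like the 15-percent-versus-13-percent example above. Here is a minimal sketch: the 1,000-respondent figure comes from the passage, the formula is the textbook one for a simple random sample, and real exit polls' clustered precinct design inflates the error well beyond this.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% sampling margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A subgroup measured at 15% of 1,000 respondents:
moe = margin_of_error(0.15, 1000)
print(f"±{moe:.1%}")  # about ±2.2 points, before any design effect
```

So a reported 15 percent and a true 13 percent are within ordinary sampling noise even before the harder-to-quantify response biases come into play.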

Increasingly, the exit polls aren’t even distinct from normal telephone polls, since they’re using telephone surveys to sample early and absentee voters, who represent more than a third of the electorate in most of this year’s core battleground states. Unlike traditional media polls, which are weighted to various census targets, the exit polls aren’t weighted until far later in the evening — when they get weighted to actual voting results.

This isn’t a knock on exit polls. They’re not designed to perfectly measure the results or the composition of the electorate. I find myself surprised by just how accurate the exit poll figures can be, despite the obvious issues with the raw responses and the inability to weight to population targets.

Unfortunately, most analysts and reporters jump on the surprising, outlying, newsworthy findings. Often, those figures are the ones most likely to be wrong.

Matt Yglesias, “The problem with exit poll takes, explained” (Nov. 9, 2020)

Before the votes from the 2020 presidential election have been fully tabulated, pundits all across the land are venturing forth to offer analysis based on the demographic breakdowns provided in the exit polls.

If we’re looking to reiterate our own prior convictions, the data is serviceable. But if we want actual information about voting behavior, exit polls are not very useful — and the early exit polls that haven’t even been weighted to the final vote count are even worse.

To start with, as we all keep learning over and over again, it is difficult to conduct accurate surveys of voters’ opinion. Nothing about conducting an accurate exit poll is any easier than conducting an accurate pre-election poll. If anything, it’s harder. Some people vote on Election Day, and other people vote early through different means, so especially this year, with a huge shift in how people vote, the modern “exit” polls need to combine multiple waves of survey data.

Last but by no means least, one advantage exit pollsters have is that they know the final outcome of the election. So you never publish an exit poll saying Biden won the election by 8 points when he really won by 4. Instead, the exits are weighted to match the actual outcome.
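That final weighting step can be sketched as a one-variable rake: scale each respondent's weight so the weighted vote shares match the official count. This is an illustration with made-up numbers, not Edison Research's actual procedure, which adjusts many variables at once.

```python
def weight_to_result(respondents, official_share):
    """Scale weights so the weighted candidate shares match the
    official vote shares. A one-variable stand-in for the pollsters'
    fuller weighting scheme."""
    counts = {}
    for r in respondents:
        counts[r["vote"]] = counts.get(r["vote"], 0) + 1
    n = len(respondents)
    return [
        {**r, "weight": official_share[r["vote"]] * n / counts[r["vote"]]}
        for r in respondents
    ]

# Hypothetical: the raw sample says 55-45, but the official count was 52-48.
sample = [{"vote": "A"}] * 55 + [{"vote": "B"}] * 45
weighted = weight_to_result(sample, {"A": 0.52, "B": 0.48})
share_a = sum(r["weight"] for r in weighted if r["vote"] == "A") / sum(
    r["weight"] for r in weighted
)
print(f"weighted share for A: {share_a:.0%}")  # matches the official 52%
```

This is also why the "results" keep moving on election night: as the official count changes, the weights change, and every crosstab computed from them changes too.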

But we don’t yet know what the actual outcome is, so the “results” of the exit polls keep changing as more votes are counted. CNN’s exit poll write-up has a disclaimer that reads “exit poll data for 2020 will continue to update and will automatically reflect in the charts below,” meaning that not only is the data you’re working with inaccurate, the very page you are citing will say something different in the future.

Unfortunately, facts derived from exit polls — like “53 percent of white women voted for Trump” — tend to become hardened conventional wisdom pretty quickly. More accurate information only becomes available later (a Pew analysis based on administrative data about who actually voted suggests the share was around 47 percent), at which point the truth comes out, but nobody cares anymore. It’s tempting to seize on this kind of information to illustrate broader points we want to make about social or cultural trends, but realistically it’s just not possible to answer certain questions with the level of precision implied by exit poll results.


Back in 2016, the myth of majority support for Trump among white women was based on a similar error.

As Molly Ball wrote in a little-read 2018 debunking, Pew’s analysis suggests that the exit polls were simply undercounting the number of white voters. By doing that, the polls ended up exaggerating the share of the white vote that Trump won. In this particular case the difference between 53 percent and 47 percent is not large, but it happens to cross the psychologically significant 50 percent threshold. (Keep in mind that all polls are subject to error, and Pew’s methodology likely gets closer to the correct share but is also not exact.)

The truth took a long time to come out, however, because it simply takes a long time for comprehensive voter file or census data to be available.

Robert Griffin, “Don’t trust the exit polls. This explains why.” (Nov. 10, 2020)

After months of campaigning and days of counting ballots, the major news media outlets have projected that former vice president Joe Biden has defeated President Trump.

However, Biden didn’t do as well as public polls projected. Some groups unexpectedly appeared to have shifted toward Trump, such as Latinos. In rushing to understand what happened, some have relied on the National Exit Poll (NEP) conducted by Edison Research to form narratives about what happened and why. But that data source appears to have significant flaws — which could skew those narratives’ conclusions.

Specifically, the NEP’s estimates of who voted — what percentage of voters fall into any given demographic group — appear to be wrong. This kind of problem has plagued the NEP in the past and, apparently, it is an issue again this year. If the NEP’s estimate of who voted is incorrect, then the vote margins — the percent by which each demographic group voted for each candidate — could be incorrect. That can distort our picture of how different groups voted. And if the numbers for how different groups voted for Trump or Biden are wrong, they shouldn’t be used to try to explain what happened in this election.

This year the NEP suggests that just 65 percent of voters were White and 34 percent were White without a four-year college degree. These estimates are dramatically smaller than what other research has found during prior elections. For example, the States of Change project — a series of reports that I co-authored with Ruy Teixeira and Bill Frey — found that 74 percent of voters were White in 2016, and 44 percent were White non-college. These estimates are identical to the Pew Research Center’s analysis of a large voter-validated survey.

What can that tell us about this year’s voters? We know that the relative turnout of different groups does not typically change dramatically between elections. If the relative turnout rates of different groups stayed the same, long-term demographic trends would lead us to expect 72 percent of 2020 voters to be White and 41 percent to be Whites without a college degree. For the NEP’s estimates of 65 and 34 percent to be correct, the relative turnout rates of different racial groups would have to have changed substantially and in ways that are not believable.
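The arithmetic behind that sanity check is simple: a group's share of actual voters is its share of eligible citizens times its turnout rate, renormalized across groups. The numbers below are purely illustrative, not the States of Change inputs.

```python
def voter_shares(population_share, turnout_rate):
    """Share of actual voters contributed by each group, given each
    group's share of eligible citizens and its turnout rate."""
    votes = {g: population_share[g] * turnout_rate[g] for g in population_share}
    total = sum(votes.values())
    return {g: v / total for g, v in votes.items()}

# Illustrative only: if one group is 67% of eligible citizens and turns
# out at 65% while everyone else turns out at 55%, it casts ~71% of votes.
shares = voter_shares({"White": 0.67, "Other": 0.33},
                      {"White": 0.65, "Other": 0.55})
print(f"{shares['White']:.0%}")
```

Run the projection forward with stable relative turnout rates and the 2020 electorate lands near the 72/41 figures Griffin cites; getting all the way down to the NEP's 65/34 would require implausibly large swings in who showed up.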

I post those old columns to remind you that, while it’s natural for political junkies like us to want to parse the outcomes of elections in real time, what we’re really doing is cherry-picking unrepresentative data to confirm our priors.

Stories like these may or may not be telling us anything useful.

And it’s absolutely too early to draw any big conclusions about youth voters, Hispanic voters, and any other subgroup. The samples just haven’t been weighted yet.

We know this. We’re reminded of it every election cycle. And, yet, we fall for it every time.

The nature of punditry is that we’re going to offer opinions on what happened and use whatever information is available. There’s an appetite for answers, after all. But all we can really do at this point is to offer speculation. And, 9 times out of 10, that speculation is going to focus on what we thought going into the election.

FILED UNDER: Public Opinion Polls, US Politics
James Joyner
About James Joyner
James Joyner is Professor and Department Head of Security Studies at Marine Corps University's Command and Staff College. He's a former Army officer and Desert Storm veteran. Views expressed here are his own. Follow James on Twitter @DrJJoyner.


  1. Sleeping Dog says:

    At my polling location, a layout change had those heading in to vote pass by the exit poll desk. When I went in, the desk was unmanned, and it was still unmanned when I exited. In past elections, those conducting the exit poll sat passively as voters self-selected whether to stop and be interviewed. Even under the best conditions, hardly a way of garnering any accuracy.

  2. DK says:

    Prof. Joyner is right. Too early to draw slam dunk conclusions about micro-demographics. But maybe not too early to draw conclusions about how badly Republican messaging and pundit class predictions flopped.

    EJ Dionne: The GOP offered rage and Trump. The country said no. (Washington Post)

    Republicans misread the mood of the country, and many political analysts went along. Young voters backed the Democrats, and enough voted this year to make a big difference. Americans are quite capable of being angry about the state of the economy without letting their unhappiness push them into the arms of extremists.

    …The red wave so many anticipated in this year’s midterms proved to be a chimera. The reliable polls did say it would be close. Many prognosticators preferred precooked conclusions.

    The United States remains a deeply divided country, but a substantial majority (58 percent in the national exit poll) dislikes the former president. This majority saved one Democrat after another from defeat. President Biden had one of the most successful midterm elections of any chief executive in history, not because he enjoyed high approval ratings (he doesn’t) but because nearly half of the voters said he was not a factor in their choice. They backed Democrats by a 3-2 ratio to oppose the far right, Trump and the election deniers — and to support abortion rights and gun control.

    The consensus seemed to be that the GOP had run a very disciplined campaign focused on inflation and crime…

    It didn’t work, partly because Republicans offered nothing in the way of solutions to the problems they were bemoaning…

    This pointed to another, often ignored, aspect of what was happening on the ground. Democrats won much criticism, often from within their own ranks, for not having a coherent national “message” on the economy. But in contest after contest, Democratic candidates pointed to what they had done and would do about the economy, highlighting their achievements on infrastructure, investments in new technologies and efforts to fight climate change, as well as attempts to reel in prescription drug costs.

    The weakness of the Republican showing was brought home by some of their pickups, particularly in Florida and New York, where the results came as much from the redrawing of district lines as any change in voter sentiment.

    It’s probably too much to hope that Democratic success will tamp down warfare between the party’s progressive and centrist wings. But both sides would do well to acknowledge a core fact of political life: Democrats win only when they can unite the left and the center. Democrats needed the turnout and the 88 percent vote share they won from the slightly more than a quarter of the electorate that described itself as liberal. But they also needed the 54 percent they won among the one-third of voters who said they were moderate.

    For all the good news for Democrats, the fact remains that the outcome of this election is up in the air. Many House seats and the decisive Senate seats remain undecided…

    But even if it does gain a share of power, the GOP will have to reckon with how its fealty to Trump and trafficking with extremists is lethal, and how voters demand more from their politicians than rage…you wonder whether Republicans have any capacity for introspection left in them.

  3. Scott says:

    Polling in general has its problems these days. I think all the polling organizations know it but don’t know what to do about it. Makes you wonder if the Biden approval numbers are inaccurate as well. Everybody assumes they are accurate, and pundits pontificate based on those numbers, but really the only thing useful may be the trends, not the actual numbers.

  4. JKB says:

    What has been amusing the last few election cycles is how desperate those who depend on election day for their profit have been to keep the illusion of “election day” going. As pointed out, exit polls suffer due to early and “not in person” voting. The news media and pundits try to act like “election night” coverage matters, when vote counts continue for a week or more. But hey, that’s where they make their money or get their clicks. Debate organizers are the most amusing not scheduling debates until after thousands have voted.

    But, the media keep bringing up Detroit in stories even though it is a dying shell and no longer the rich, populous city that mattered as it was in the ’50s and ’60s.

  5. just nutha says:

    @JKB: Interesting idea. If only the person turning to “dying shell” Detroit was someone other than Donald J. Trump. Hmmmm…

  6. Rick DeMent says:


    But, the media keep bringing up Detroit in stories even though it is a dying shell and no longer the rich, populous city that mattered as it was in the ’50 and ’60s.

    Sure, but it still has more people in it than the state of Wyoming.