Beware The Exit Polls
The numbers aren't real. Not yet, anyway.
Nate Cohn, “Exit Polls: Why They So Often Mislead” (Nov. 4, 2014)
Did you think John Kerry was poised to win the presidency? That Scott Walker was close to losing the 2012 governor’s recall election in Wisconsin? Do you believe that the black share of the electorate in North Carolina dropped to 23 percent in 2012, from 26 percent in 2004?
If you said “yes” to any of those things, you probably have too much faith in exit polls.
Don’t get me wrong: Exit polls are an exciting piece of Election Day information. They’re just not perfect. The problem with them is that most analysts and readers treat them as if they’re infallible.
The problems begin early on election evening, when the first waves of exit polls are invariably leaked and invariably show a surprising result somewhere. You’re best off ignoring these early returns, which are unweighted — meaning the demographic mix of the respondents is not adjusted to match any expectations for the composition of the electorate. The first waves also don’t even include all of the exit poll interviews.
The problems continue with the final waves, which analysts pore over in the days after the election and treat as a definitive account of the composition of the electorate. Some foolish journalists might write entire posts that assume that the black share of the electorate was 15 percent in Ohio. In reality, the exit polls just aren’t precise enough to justify making distinctions between an electorate that’s 15 percent black and, say, 13 percent black.
The imperfections of the exit polls are not hard to show. Here are two quick examples, based on official voter turnout statistics:
The exit polls showed that voters over age 65 were 18 percent of the electorate in Iowa in 2008, but 26 percent in 2012. The official state turnout statistics instead show that the share increased to 23.6 percent, from 21.9 percent, over the same span.
In North Carolina, the exit polls showed that the black share of the electorate dropped to 23 percent in 2008, from 26 percent in 2004, and held steady at 23 percent in 2012. The state turnout statistics say the share rose from 18.6 percent in 2004 to 22.3 percent in 2008, and then to 23.1 percent in 2012.
How can the exit polls be off by so much? The biggest thing to remember is that they’re just polls! They’re usually based on a sample of a few dozen precincts or so in a state, sometimes not even including many more than 1,000 respondents. Like every other type of survey, they’re subject to a margin of error because of sampling and additional error resulting from various forms of response bias.
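The "just polls" point is easy to quantify. A rough sketch of the sampling math, with invented numbers: a 1,000-respondent sample carries about a 3-point margin of error on its own, and because exit polls sample by precinct rather than individually, cluster effects inflate that further (the design effect of 2.25 below is an illustrative assumption, not a published figure).

```python
import math

def margin_of_error(p, n, design_effect=1.0, z=1.96):
    """95% margin of error for a proportion. The design effect inflates
    variance to account for precinct-cluster sampling; 1.0 means a
    simple random sample."""
    return z * math.sqrt(design_effect * p * (1 - p) / n)

# Hypothetical statewide exit poll: 1,000 respondents, a 50/50 question.
simple = margin_of_error(0.5, 1000)            # as if a simple random sample
clustered = margin_of_error(0.5, 1000, 2.25)   # assumed precinct clustering

print(round(simple * 100, 1))     # ~3.1 points
print(round(clustered * 100, 1))  # ~4.6 points
```

And that is the error on the full sample; the error on a subgroup like "black voters in Ohio" is larger still, since the effective n shrinks to that subgroup's share of respondents.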
Increasingly, the exit polls aren’t even distinct from normal telephone polls, since they’re using telephone surveys to sample early and absentee voters, who represent more than a third of the electorate in most of this year’s core battleground states. Unlike traditional media polls, which are weighted to various census targets, the exit polls aren’t weighted until far later in the evening — when they get weighted to actual voting results.
This isn’t a knock on exit polls. They’re not designed to perfectly measure the results or the composition of the electorate. I find myself surprised by just how accurate the exit poll figures can be, despite the obvious issues with the raw responses and the inability to weight to population targets.
Unfortunately, most analysts and reporters jump on the surprising, outlying, newsworthy findings. Often, those figures are the ones most likely to be wrong.
Matt Yglesias, “The problem with exit poll takes, explained” (Nov. 9, 2020)
Before the votes from the 2020 presidential election have been fully tabulated, pundits all across the land are venturing forth to offer analysis based on the demographic breakdowns provided in the exit polls.
If we’re looking to reiterate our own prior convictions, the data is serviceable. But if we want actual information about voting behavior, exit polls are not very useful — and the early exit polls that haven’t even been weighted to the final vote count are even worse.
To start with, as we all keep learning over and over again, it is difficult to conduct accurate surveys of voters’ opinion. Nothing about conducting an accurate exit poll is any easier than conducting an accurate pre-election poll. If anything, it’s harder. Some people vote on Election Day, and other people vote early through different means, so especially this year, with a huge shift in how people vote, the modern “exit” polls need to combine multiple waves of survey data.
Last but by no means least, one advantage exit pollsters have is that they know the final outcome of the election. So you never publish an exit poll saying Biden won the election by 8 points when he really won by 4. Instead, the exits are weighted to match the actual outcome.
But we don’t yet know what the actual outcome is, so the “results” of the exit polls keep changing as more votes are counted. CNN’s exit poll write-up has a disclaimer that reads “exit poll data for 2020 will continue to update and will automatically reflect in the charts below,” meaning that not only is the data you’re working with inaccurate, the very page you are citing will say something different in the future.
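The mechanics of that reweighting are worth seeing, because they explain why every demographic breakdown moves when the topline does. A minimal sketch with entirely invented data: the raw sample leans too far toward Biden, so each respondent is weighted by the ratio of the actual to the sampled candidate share, and the education breakdown shifts as a side effect.

```python
# Hypothetical raw exit-poll sample: (candidate, demographic group) pairs.
# All numbers are invented for illustration.
raw = (
    [("Biden", "college")] * 340 + [("Biden", "non_college")] * 200 +
    [("Trump", "college")] * 160 + [("Trump", "non_college")] * 300
)

def candidate_share(sample, weights=None):
    """Weighted share of the sample for each candidate."""
    weights = weights if weights is not None else [1.0] * len(sample)
    total = sum(weights)
    return {c: sum(w for (cand, _), w in zip(sample, weights) if cand == c) / total
            for c in ("Biden", "Trump")}

def group_share(sample, weights, group):
    """Weighted share of the sample belonging to a demographic group."""
    total = sum(weights)
    return sum(w for (_, g), w in zip(sample, weights) if g == group) / total

actual = {"Biden": 0.52, "Trump": 0.48}   # the known final result (hypothetical)
raw_share = candidate_share(raw)           # raw sample: Biden 0.54, Trump 0.46

# Weight each respondent so the weighted candidate totals match the count.
weights = [actual[cand] / raw_share[cand] for cand, _ in raw]

print(round(group_share(raw, [1.0] * len(raw), "college"), 3))  # 0.5 unweighted
print(round(group_share(raw, weights, "college"), 3))            # 0.494 weighted
```

So as the vote count creeps toward its final margin, the weights change and the "college share of the electorate" changes with them, which is exactly why the numbers on that CNN page keep moving.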
Unfortunately, facts derived from exit polls — like “53 percent of white women voted for Trump” — tend to become hardened conventional wisdom pretty quickly. More accurate information only becomes available later (a Pew analysis based on administrative data about who actually voted suggests the share was around 47 percent), at which point the truth comes out, but nobody cares anymore. It’s tempting to seize on this kind of information to illustrate broader points we want to make about social or cultural trends, but realistically it’s just not possible to answer certain questions with the level of precision implied by exit poll results.
Back in 2016, the myth of majority support for Trump among white women was based on a similar error.
As Molly Ball wrote in a little-read 2018 debunking, Pew’s analysis suggests that the exit polls were simply undercounting the number of white voters. By doing that, the polls ended up exaggerating the share of the white vote that Trump won. In this particular case the difference between 53 percent and 47 percent is not large, but it happens to cross the psychologically significant 50 percent threshold. (Keep in mind that all polls are subject to error, and Pew’s methodology likely gets closer to the correct share but is also not exact.)
The truth took a long time to come out, however, because it simply takes a long time for comprehensive voter file or census data to be available.
Robert Griffin, “Don’t trust the exit polls. This explains why.” (Nov. 10, 2020)
After months of campaigning and days of counting ballots, the major news media outlets have projected that former vice president Joe Biden has defeated President Trump.
However, Biden didn’t do as well as public polls projected. Some groups unexpectedly appeared to have shifted toward Trump, such as Latinos. In rushing to understand what happened, some have relied on the National Exit Poll (NEP) conducted by Edison Research to form narratives about what happened and why. But that data source appears to have significant flaws — which could skew those narratives’ conclusions.
Specifically, the NEP’s estimates of who voted — what percentage of voters fall into any given demographic group — appear to be wrong. This kind of problem has plagued the NEP in the past and, apparently, it is an issue again this year. If the NEP’s estimate of who voted is incorrect, then the vote margins — the percent by which each demographic group voted for each candidate — could be incorrect. That can distort our picture of how different groups voted. And if the numbers for how different groups split between Trump and Biden are wrong, they shouldn’t be used to try to explain what happened in this election.
This year the NEP suggests that just 65 percent of voters were White and 34 percent were White without a four-year college degree. These estimates are dramatically smaller than what other research has found during prior elections. For example, the States of Change project — a series of reports that I co-authored with Ruy Teixeira and Bill Frey — found that 74 percent of voters were White in 2016, and 44 percent were White non-college. These estimates are identical to the Pew Research Center’s analysis of a large voter-validated survey.
What can that tell us about this year’s voters? We know that the relative turnout of different groups does not typically change dramatically between elections. If the relative turnout rates of different groups stayed the same, long-term demographic trends would lead us to expect 72 percent of 2020 voters to be White and 41 percent to be Whites without a college degree. For the NEP’s estimates of 65 and 34 percent to be correct, the relative turnout rates of different racial groups would have to have changed substantially and in ways that are not believable.
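Griffin's plausibility check can be made concrete. Using the shares from the column (72 percent expected White, 65 percent per the NEP), one way to express the required shift is as an odds ratio; the framing below is my own, not Griffin's.

```python
def implied_turnout_shift(expected_share, observed_share):
    """Ratio of observed to expected odds that a voter belongs to a group.
    A value well below 1 means the group's turnout, relative to everyone
    else's, would have had to fall sharply for the observed share to hold."""
    expected_odds = expected_share / (1 - expected_share)
    observed_odds = observed_share / (1 - observed_share)
    return observed_odds / expected_odds

# Trend projection said 72% of voters would be White; the NEP found 65%.
ratio = implied_turnout_shift(0.72, 0.65)
print(round(ratio, 2))  # ~0.72
```

In other words, relative White turnout would have had to drop by roughly a quarter against trend in a single cycle, which is the kind of swing Griffin calls not believable.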
I post those old columns to remind you that, while it’s natural for political junkies such as ourselves to want to parse the outcomes of elections in real time, what we’re really doing is cherry-picking unrepresentative data to confirm our priors.
Stories like these
- Anatomy of a close election: How Americans voted in 2022 vs. 2018
- Exit polls show voters divided by Biden, Trump and abortion
- Nearly half of voters say they are worse off financially, more than double what it was 2 years ago
- More than two thirds of voters say Democracy in US is threatened, preliminary exit poll results show
- Majority of voters say abortion should be legal in all or most cases, preliminary exit poll results show
- Voters trust the Republican Party over Democrats to handle inflation, preliminary exit poll results show
- Broad economic discontent among voters, preliminary exit poll results say
may or may not be telling us anything useful.
And it’s absolutely too early to draw any big conclusions about youth voters, Hispanic voters, and any other subgroup. The samples just haven’t been weighted yet.
We know this. We’re reminded of it every election cycle. And, yet, we fall for it every time.
The nature of punditry is that we’re going to offer opinions on what happened and use whatever information is available. There’s an appetite for answers, after all. But all we can really do at this point is to offer speculation. And, 9 times out of 10, that speculation is going to focus on what we thought going into the election.