Political Polling in the Trump Era

The professionals are striving to keep up with changing voter habits.

538 senior elections analyst Geoffrey Skelley is back from the premier conference for professional pollsters and reports that “Polling isn’t broken, but pollsters still face Trump-era challenges.” His essay is long and worth reading in full, but I will excerpt some key takeaways.

Sampling: Meet people where they are

One of the conference’s consistent refrains was to “meet people where they are” when trying to reach potential respondents — that is, by using different modes of communication to contact different groups of people, based on their personal preferences. “There’s no way anymore to get a representative sample of the U.S. population or the voters from a single mode,” McPhee observed. “There are too many technologies out there. There are too many beliefs and preferences and attitudes and feelings about responding in general and responding by different modes.” Ensuring no group is underrepresented because of low response rates is critical for pollsters to gather a representative sample of the population they’re interested in — such as all voters in a given state — particularly if potentially underrepresented groups hold notably different views from those who are overrepresented.

McPhee told me the industry has moved toward agreement that multi-mode approaches are the best way to get a more representative sample. She stressed that it’s not that one mode is better, but rather a combination is “better than the sum of its parts.” For instance, one SSRS survey experiment saw improved response rates for state-level surveys that recruited respondents by various means, including postcard or SMS text message, and gave respondents six potential access points to respond: URL, QR code (directing them to the survey), text, email, a phone number for respondents to call (inbound dialing) and SSRS reaching out to them by phone (outbound dialing).

Texting in particular has become a common means to engage respondents across all demographic and partisan groups, often as part of a mixed-mode approach. Kevin Collins is the co-founder of Survey 160, a firm that focuses on SMS text-based surveys in its work with Democratic pollsters and nonpartisan organizations, largely via text-to-web sampling (text messages that link to a survey on a web browser) or live interviews over text. “We do text message surveys, but really what we believe in is mixed-mode surveys,” Collins told me. “Texting offers a very important additive benefit and cost savings over phones.”

[…]

Weighting: The known unknown of who will vote

Beyond gathering as representative a sample as possible, election polling presents another specific challenge: trying to gauge what the electorate will look like in a given election cycle. This raises questions about how to appropriately weight sample data — that is, how to adjust survey datasets to account for differences between the sample population and the overall population — to reflect who is most likely to actually vote in November.
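The weighting idea can be sketched in a few lines. This is a minimal illustration of simple cell weighting; the data, group names, and function are hypothetical, not any pollster’s actual method, and real pollsters typically use more elaborate techniques such as raking across several variables at once:

```python
from collections import Counter

def poststratify(sample_groups, population_shares):
    """Give each respondent a weight so that the weighted sample's
    demographic mix matches known population shares (cell weighting)."""
    counts = Counter(sample_groups)
    n = len(sample_groups)
    # weight = (group's share of the population) / (group's share of the sample)
    return [population_shares[g] / (counts[g] / n) for g in sample_groups]

# Toy example: young voters are 40% of the population but only
# 20% of this 10-person sample, so each young respondent is weighted up.
sample = ["young"] * 2 + ["old"] * 8
weights = poststratify(sample, {"young": 0.4, "old": 0.6})
# Each "young" respondent gets weight 0.4 / 0.2 = 2, each "old" one
# 0.6 / 0.8 = 0.75; the weights sum back to the sample size.
```

The hard part, as the rest of the section explains, is not this arithmetic but choosing which population the shares should describe: the adult population is known from the census, while the population of people who will actually vote in November is not.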

In its post-2020 report, AAPOR cited nonresponse bias as one of the potential reasons for the cycle’s larger polling error, possibly because Democrats were systematically more likely to respond to pollsters than Republicans, and because the Republicans who did answer may have differed in important ways from the Republicans who didn’t.

[…]

But even the most accurate weighting by past votes can’t answer the ever-present question of which voters will turn out this year. How to handle the many voters who cycle in and out of the electorate is no minor consideration: As Nate Cohn at the New York Times recently noted, historically, roughly one-quarter of presidential election voters didn’t vote in the last presidential contest. That churn results in part from younger voters entering the electorate and older ones leaving it, but also reflects the participation of less engaged voters who bounce in and out of the electorate from year to year. “Good pollsters have to be thoughtful about how they account for those people,” McPhee said, as weighting by recalled vote isn’t an available option with new voters.

“Asking who is a likely voter is the wrong question,” Collins said. “It’s easy to know who the likely voters are — they’re the people who regularly vote. But the challenge is identifying which unlikely voters will end up voting.” And he stressed that this question becomes much more difficult if likely and unlikely voters have divergent vote preferences — as some national surveys have suggested is the case this year, finding that registered voters who didn’t vote in 2020 are slightly more Trump-leaning in 2024 surveys than those who did vote in 2020. That makes a pollster’s estimates of just who will show up on Election Day even more important in determining the outcomes of their polls.

Leaving aside those polls run by hack organizations to spread a particular ideology, those who do survey research have every incentive to get it right. Polling professionals are constantly adjusting their methodologies to changing technologies and habits.

While I have strong confidence that the polls accurately reflect the opinions of the American public on President Biden, former President Trump, Congress, the Supreme Court, and various longstanding policy issues, the same is not true for election outcomes. It’s always been very hard to know which fence-sitters will show up to vote in a given election. And it’s gotten much harder in the Trump era.

FILED UNDER: Public Opinion Polls, US Politics
About James Joyner
James Joyner is Professor of Security Studies at Marine Corps University's Command and Staff College. He's a former Army officer and Desert Storm veteran. Views expressed here are his own. Follow James on Twitter @DrJJoyner.

Comments

  1. Kylopod says:

    And it’s gotten much harder in the Trump era.

    Is that really true, though? Contrary to popular belief, the polls for 2016 (both national and statewide) were overall slightly more accurate than those in 2012.

  2. James Joyner says:

    @Kylopod: I think there are two competing—and perhaps slightly offsetting—things unique to the Trump era. First, he’s uniquely polarizing. There hasn’t been a major party candidate in my lifetime, perhaps ever, that so mobilized the opposition to vote. (Ironically, his 2016 opponent was pretty close.) So, we’ve had super-high turnout. Second, though, I don’t know that there’s ever been a major-party candidate so viscerally disliked by such a large part of his own base. Most will still hold their noses and vote for him but some unknowable number will just stay home.

  3. just nutha says:

    I’m not sure there’s a correlation between degree of difficulty and accuracy. They can be discrete qualities, and it’s possible for a task to be both more difficult to perform and more accurately performed. In this case, I’m inclined to see the product as limited in value, and I’m not much of a consumer of it because of that limited value. I’ll take the article’s assertion at face value.

  4. MarkedMan says:

    @James Joyner: I also feel that we have become so Party-cized (politicized completely based on party affiliation) that polls just aren’t that relevant any more, at least the broad “approve/disapprove” and “Biden/Trump” types of polls. You get a poll with a MOE of 4%, but the starting point is that 40% are locked in for each candidate. On top of that, no presidential candidate is going to win by more than 55-45%, and that would be a huge outlier. Trump will either win while losing the popular vote, 49-51%, or lose 48-52%. What use is a 4% MOE poll on that?

    The truth is that turnout and/or a small number of fairly to severely disengaged voters will decide this election. I don’t know how polls can deal with that.

  5. Kylopod says:

    @MarkedMan:

    On top of that, no presidential candidate is going to win by more than 55-45%, and that would be a huge outlier.

    This is an important and often overlooked point. Decades ago when presidential elections typically were won by wide margins, nobody much noticed if a poll was off by a few points. I just checked the Wikipedia article on the history of Gallup’s ratings, and it showed that their final poll for 1988 underestimated Dukakis by about 5 points. But Bush was so far ahead, the error went virtually unnoticed. In contrast, when their final poll for 2012 showed Romney ahead by one point, and then Obama went on to win by 4 points, they were raked over the coals by the media and had to issue a big mea culpa and promise to look into what went wrong.

    As fallacious as it is, most people judge polls based on the binary question of whether they showed the winning candidate winning, and treat the margins as irrelevant. If one pollster were to show the Dem leading by 2 points, and another showed the Repub leading by 10 points, and then the Repub ended up winning by 1 point, the common reaction would be to treat the first poll as the worse of the two, when in fact, to anyone who understands how polling works, the first poll was fairly accurate and the second was absolute garbage. But that’s not how the media would cover it. They’d say the first poll got the race “wrong,” the second one got it “right.” It’s a deeply flawed understanding of polls, and it makes today’s polls seem worse than they actually are simply by virtue of the fact that the elections are typically so close, making small errors more likely to lead to the “wrong” candidate being forecast as the winner.

  6. MarkedMan says:

    @Kylopod: 100% agree and would also add that part of the problem is that we still think of polls as useful predictors of who will win a given election. While that may have been true in the past, I don’t think it is true today. Nixon beat McGovern by 61-38%. Perhaps in 2032 or 2040 such a result would be possible and polls such as we have today would tell us something meaningful about that race, but they simply don’t have meaning for today’s highly partisan, closely split electorate. Polls of a different type are useful for helping campaigns decide which issues or voter blocs to target, but they are worse than worthless as a tool for predicting 2024’s presidential winner. I would feel the same way whether it is Biden or Trump that is up by a few points.

  7. Just nutha ignint cracker says:

    @Kylopod:

    If one pollster were to show the Dem leading by 2 points, and another showed the Repub leading by 10 points, and then the Repub ended up winning by 1 point, the common reaction would be to treat the first poll as the worse of the two, when in fact, to anyone who understands how polling works, the first poll was fairly accurate and the second was absolute garbage.

    Which kind of returns us to why I view polling as a product of marginal utility. On the other hand, the sermon at the church I visited last night noted that people need to have activities that they pursue apart from the utility of what those activities produce. He linked them to the idea of labors of love or hobbies. As hobbies go, there are lots of more spiritually destructive ones than following and evaluating political polling; at least, I would guess there are.

  8. gVOR10 says:

    @MarkedMan: 538 has tried to get around the problem by reporting probabilities. But it doesn’t seem to me a lot of people understand probabilities. IIRC they called something around 70-30 Hillary. Which didn’t save them from a lot of, “You said Hillary would win.”

  9. JohnSF says:

    Rather off the direct topic, but of some relevance perhaps
    (and also because it pleases me):
    Current UK poll of polls after a week and a half of campaigning (ignoring regional/national parties for simplicity):
    Labour 45%
    Conservatives 24%
    Reform (aka UKIP mk3) 11%
    LibDem 9%
    Greens 5%

    The interesting point from the trend lines, and confirmed by some other data (and above all the Tory campaign tactics to date) is that the Conservative poll shifts are now NOT “mirroring” Labour, but Reform. They are bleeding support on their right.

    Farage today announced he will stand in Clacton, which may give Reform another point or two.

    The sole demographic where the Conservatives are ahead is the over-70 age group, and that barely.
    And that’s where Reform are really biting hard.

    But, as the Tories try harder with (mostly daft) policy proposals to woo the pensioner vote, it seems to be hurting them in the affluent, but skewed younger, “liberal conservative” constituencies in the “Blue Belt” around London.

    The LibDems seem hopeful that this, combined with anti-Tory tactical voting, could net them a clutch of seats in Outer London/Home Counties/South West where Labour tend to come third.

    Most poll analysis seems to think the polls are on the money at any given time, and mapping them to actual election outcomes tends to confirm this.

    But what I really like, is that current MRP analysis indicates that my constituency (Bromsgrove) is currently projected as a Conservative/Labour marginal, having been held by the Conservatives since 1983.
    🙂

  10. Lounsbury says:

    @Kylopod:
    @MarkedMan:

    we still think of polls as useful as a predictor of who will win a given election

    The statement shows that you in fact don’t understand the subject.
    Or simply
    @gVOR10:

    But it doesn’t seem to me a lot of people understand probabilities.

    is not merely a “seems”; it is fundamental that there is almost no numeracy in statistics and probability maths, although admittedly probability maths are just fundamentally not common-sensical to the primate brain.

    Polling is showing the same thing it always has – as Kylopod has usefully indicated – but unfortunately people are broadly innumerate in probabilities and simply do not understand it, or rather badly misunderstand it.

    If one is paying proper attention to error bars, the polls are showing a range of probable outcomes – which people persistently misunderstand as a win/lose binary “future prediction”, a rather magical way of understanding polls but for better or worse a deeply rooted cognitive error.

    If one however is paying attention and forcing oneself to think in probability analysis, accurate polls (that is, well-designed ones with reasonable error ranges – this being Kylopod’s point that the better poll in his +10 / −1 scenario is normally the one judged not relative to the winner but to the distribution) will show a range of potential outcomes (and show that some things are not going to happen, and reveal potential weaknesses – this of course applies only to properly statistically constructed polls).

    @JohnSF:

    Farage today announced he will stand in Clacton, which may give Reform another point or two.

    Seeing the toad’s face in the FT more… god.

  11. DMA says:

    @gVOR10: This assigning of probabilities makes it unclear what the point is. It turns 538 into a prediction site without the courage of its convictions.

  12. DrDaveT says:

    @gVOR10:

    But it doesn’t seem to me a lot of people understand probabilities.

    Speaking as someone who has taught probability, and statistics, and stochastic processes, and has been a cost estimator (and taught cost estimation), I have to say that this is an extreme understatement. There is almost no subject you can name where people in general have worse intuitions or worse comprehension.

    I once had a student object to the statement “there’s a 50% chance of rain tomorrow” on the basis that it contained no information — that 50/50 odds means the same thing as “I have no idea whatsoever”.

  13. DrDaveT says:

    @DMA:

    The practice of assigning probabilities makes it a prediction site without the courage of its convictions.

    Um, no. Seriously.

    Nothing is certain. Stating a predicted outcome, without any probabilities, is equivalent to claiming that the probability is 100% — and is wrong. Stating an outcome as being “more likely” without naming a probability is an actual case of lacking the courage of your convictions (or at least belief in your own analysis).

  14. Ken_L says:

    No amount of pseudo-scientific jargon like “mixed-mode surveys” can disguise the reality that the public opinion industry has come to rely on self-selecting samples of people who are not representative of the population, because the overwhelming majority of the population declines to participate in public opinion surveys. Which is hardly surprising, given the torrent of polls that we are bombarded with every day on every subject under the sun. Those who do agree to participate – who in many cases volunteer to do so – are self-evidently different in an important way from the rest of us, with consequences for the reliability and validity of the findings which are unknown but must exist.

    The Economist/YouGov Poll seems to be regarded as one of the “better” polls, but from the perspective of an academic researcher, it’s a joke. Some questions are so vague one has no idea how respondents would have interpreted them. Hard data already exists in some other instances that casts serious doubt on the validity of the findings; for example, I recall one survey in which the proportion of respondents who claimed they had been vaccinated against Covid, or had contracted the virus at least once, was so wildly at variance with the official data that many respondents must have either not understood the questions, or deliberately lied, or answered without caring whether the answers were true or false.*

    Simple “if the election were to be held tomorrow who would you vote for?” polls still have some merit, subject to all the reservations described in the post. I no longer take any notice of polls which purport to reveal what “Americans think” about more complex issues. They seem to be conducted mainly to give their sponsors material on which to base stories in their publications.

    *It’s also possible, though unlikely, that the official data is wildly inaccurate, but I’m not aware that any reputable critics have made this claim. Even if it were so, the Economist/YouGov people should surely have noted the discrepancies rather than discuss the findings as if they were representative of the general population.

  15. DrDaveT says:

    @Ken_L:

    Those who do agree to participate – who in many cases volunteer to do so – are self-evidently different in an important way from the rest of us

    While I agree completely with this, I will admit that I do not know which subpopulation is the weird one. Are those who decline to participate in polls the majority, or a small minority? Are they unusual in their political preferences, or typical? Our friend Lounsbury would no doubt assure us that we Uni lefty bolshy wankers are not representative — are we the ones who predominantly choose not to respond? Or is it the embarrassed-to-admit-what-they-really-support MAGAts? Who knows?
