Rasmussen Poll Overestimated Republican Vote
Rasmussen polls were biased toward Republicans by 3 to 4 points. Rigged results? Or screening error?
Nate Silver’s analysis confirmed what was already widely thought:
Rasmussen polls quite consistently turned out to overstate the standing of Republicans tonight. Of the roughly 100 polls released by Rasmussen or its subsidiary Pulse Opinion Research in the final 21 days of the campaign, roughly 70 to 75 percent overestimated the performance of Republican candidates, and on average they were biased against Democrats by 3 to 4 points.
Oliver Willis, however, jumps to the wrong conclusion in “Rasmussen’s Rigged Polls For The 2010 Elections.”
There’s no evidence Rasmussen’s polls are “rigged.” And it would be demonstrably stupid for Rasmussen to “rig” his polls, which are intended for public consumption.
I’ve got no dog in this fight. Indeed, my wife is COO of a competing polling firm (see Disclosures). But it’s far more likely that, rather than intentionally delivering biased results and thus hurting his reputation, he’s simply applying a bad likely voter screen that consistently over-estimates Republican turnout, just as Newsweek and others consistently over-sampled Democrats.
When it comes to very close races, polling is more art than science.
For example, in Nevada, ALL the polls had Angle up. The margin was narrow across the board, with Rasmussen/POR showing a 3 point lead and Mason-Dixon showing a 4 point lead. Reid won by 5.6 points!
In Washington, where it looks as though incumbent Democrat Patty Murray will eke out a close victory over Dino Rossi, Rasmussen actually had her margin at +2. He was on the high end, with PPP giving Rossi a +2 margin and McClatchy/Marist giving Murray +1.
Part of this can be explained by the vagaries of polling: You can never be quite sure that your response sample will match up with those who show up to vote. Or that they’ll vote the way they say they’ll vote. That’s especially true in a race like Nevada’s, where few voters actually wanted either of the two candidates and were having to hold their nose to vote for their least bad choice.
In “normal” elections — and it’s been a while since we’ve had one of those — likely voter screening was comparatively easy. People who voted last election are likely to vote in this election and those who didn’t vote last time are unlikely to vote this time. And it makes sense to weight the responses based on past performance, which typically means bumping up the Republican, white, and older demos.
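The weighting step described above can be sketched in a few lines of code. This is a hypothetical illustration, not any pollster’s actual model: the age groups, turnout shares, and responses are all invented for the example.

```python
# Hypothetical sketch of demographic weighting in a likely-voter screen.
# All numbers below are invented for illustration only.
from collections import Counter

# Raw poll responses: (party preference, age group)
responses = [
    ("R", "65+"), ("D", "18-29"), ("D", "18-29"), ("R", "65+"),
    ("D", "30-64"), ("R", "30-64"), ("D", "30-64"), ("R", "65+"),
]

# Share of each age group in the raw sample
sample_counts = Counter(group for _, group in responses)
sample_share = {g: n / len(responses) for g, n in sample_counts.items()}

# Assumed turnout shares based on past elections (the "screen");
# bumping up the older demo, as the post describes
expected_turnout = {"18-29": 0.15, "30-64": 0.50, "65+": 0.35}

# Weight each response so the sample matches expected turnout
weights = {g: expected_turnout[g] / sample_share[g] for g in sample_share}

weighted = Counter()
for party, group in responses:
    weighted[party] += weights[group]

total = sum(weighted.values())
for party in sorted(weighted):
    print(f"{party}: {weighted[party] / total:.1%}")
```

In this toy sample the raw responses split evenly, but the turnout assumptions tilt the weighted result toward the Republican. That is exactly where a bad screen bites: if the turnout shares you assume don’t match who actually shows up, the “correction” itself becomes the bias.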
But there’s a lot of uncertainty right now in the screen. We’ve had angry, energized electorates in three straight cycles. It’s just been different groups who have been angry and energized.
UPDATE: With a lot of caveats, Mark Blumenthal answers the question, “How Did The Polls Do?”
How did the final polls measuring the national “generic” vote preference compare to the ultimate Republican margin that will likely fall somewhere between six and seven percentage points? As the table below shows, the telephone surveys conducted by the Pew Research Center, Ipsos/Reuters, NBC/Wall Street Journal, CBS/New York Times and the two Internet-based surveys by YouGov/Polimetrix and Zogby Interactive all produced margins that are very close to the likely final result.
So, yes, Rasmussen was among the most biased, along with Fox. Both have reputations as Republican-leaning. But — surprise! — the venerable Gallup poll was off even more. In the same direction!
I should note that my wife’s firm, Public Opinion Strategies, which makes up half of the NBC/WSJ polling team, got the likely final result exactly right.
UPDATE 2: Actually, another — not mutually exclusive — explanation exists. It turns out that Rasmussen’s results, which are robocalled, don’t include cell phones. Neither do the Fox numbers. That alone could account for a 3-4 point Republican skew.
Yes, NBC/WSJ includes cell calls in their methodology. Oddly, so does Gallup — which makes their miss look even worse.
NOTE: This was originally published as a “Quick Picks” snippet, which I almost immediately decided to expand into a longer analysis.