Rasmussen’s 2012 Polling Has Had A Republican Bias All Year
To add some more fuel to the “Skewed Polls” debate we have this observation about Rasmussen’s polling so far in the 2012 election cycle:
Enough presidential polling data is now available to analyze Rasmussen’s data. Here is the methodology. The database contains 119 Rasmussen state polls from Jan. 1, 2012 until yesterday. For each poll, a check was made to see if at least one poll from a different nonpartisan pollster was in the database within a week either way of the Rasmussen poll. For example, for Rasmussen’s poll of North Carolina on Oct. 2, a check was made for any other polls of North Carolina whose midpoint was between Sept. 25 and Oct. 9. In this case, polls from PPP, ARG, SurveyUSA, and High Point University were found. For 82 polls, comparison polls within a week were found. For the other 37 Rasmussen polls, no other nonpartisan pollster surveyed the state within a week of Rasmussen’s poll, so those polls were not used in this analysis.
For each remaining poll, the Obama – Romney score was computed. The arithmetic mean of the other polls’ scores was then subtracted from the Rasmussen Obama – Romney value. Ideally, the result should be zero, but statistically that is very unlikely. A positive result means Rasmussen is overestimating Obama’s standing, and a negative one means it is underestimating it. For example, for the North Carolina poll cited above, Rasmussen said Obama was 4 points behind, but the average of the other pollsters put Obama 0.2 points behind, so Rasmussen gets a bias score of -3.8 here. Averaging all 82 polls, Rasmussen’s mean bias is -1.91 points; that is, Rasmussen appears to be making Obama look almost 2 points worse than the other pollsters do.
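The bias calculation described above is simple enough to sketch in a few lines. In this minimal example, the North Carolina figures come from the quoted text, but the other poll margins are invented for illustration:

```python
from statistics import mean

def bias_score(rasmussen_margin, comparison_margins):
    """Rasmussen's (Obama - Romney) margin minus the mean of the
    comparison polls' margins. A negative score means Rasmussen shows
    Obama doing worse than the other nonpartisan pollsters do."""
    return rasmussen_margin - mean(comparison_margins)

# North Carolina example from the text: Rasmussen had Obama down 4 points,
# the other pollsters' average had him down 0.2, giving a score of -3.8.
nc = bias_score(-4.0, [-0.2])

# The overall figure is the mean score across every Rasmussen poll that had
# comparison polls within a week (82 in the real dataset; the two extra
# entries here are invented purely for illustration).
scores = [nc, bias_score(1.0, [2.5]), bias_score(-2.0, [0.0])]
overall = mean(scores)
```

A consistently negative overall mean, like the -1.91 reported in the quoted analysis, is what gets described as a Republican lean.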
Note that this does not necessarily mean Rasmussen is wrong and the others are right. It could be that Rasmussen is right and the others are painting too rosy a picture for Obama. There is no way to know now.
On Election Night 2010, Nate Silver observed that, based on the election results that had come in, it appeared that Rasmussen’s 2010 polling was biased against Democrats by three to four percentage points. In a follow-up post, Silver noted this about Rasmussen’s performance that year:
The 105 polls released in Senate and gubernatorial races by Rasmussen Reports and its subsidiary, Pulse Opinion Research, missed the final margin between the candidates by 5.8 points, a considerably higher figure than that achieved by most other pollsters. Some 13 of its polls missed by 10 or more points, including one in the Hawaii Senate race that missed the final margin between the candidates by 40 points, the largest error ever recorded in a general election in FiveThirtyEight’s database, which includes all polls conducted since 1998.
Moreover, Rasmussen’s polls were quite biased, overestimating the standing of the Republican candidate by almost 4 points on average. In just 12 cases, Rasmussen’s polls overestimated the margin for the Democrat by 3 or more points. But it did so for the Republican candidate in 55 cases — that is, in more than half of the polls that it issued.
If one focused solely on the final poll issued by Rasmussen Reports or Pulse Opinion Research in each state — rather than including all polls within the three-week interval — it would not have made much difference. Their average error would be 5.7 points rather than 5.8, and their average bias 3.8 points rather than 3.9.
Nor did it make much difference whether the polls were branded as Rasmussen Reports surveys, or instead, were commissioned for Fox News by its subsidiary Pulse Opinion Research. (Both sets of surveys used an essentially identical methodology.) Polls branded as Rasmussen Reports missed by an average of 5.9 points and had a 3.9 point bias. The polls it commissioned on behalf of Fox News had a 5.1 point error, and a 3.6 point bias.
Rasmussen’s polls have come under heavy criticism throughout this election cycle, including from FiveThirtyEight. We have critiqued the firm for its cavalier attitude toward polling convention. Rasmussen, for instance, generally conducts all of its interviews during a single, 4-hour window; speaks with the first person it reaches on the phone rather than using a random selection process; does not call cellphones; does not call back respondents whom it misses initially; and uses a computer script rather than live interviewers to conduct its surveys. These are cost-saving measures which contribute to very low response rates and may lead to biased samples.
Rasmussen also weights their surveys based on preordained assumptions about the party identification of voters in each state, a relatively unusual practice that many polling firms consider dubious since party identification (unlike characteristics like age and gender) is often quite fluid.
Rasmussen’s polls — after a poor debut in 2000 in which they picked the wrong winner in 7 key states in that year’s Presidential race — nevertheless had performed quite strongly in 2004 and 2006. And they were about average in 2008. But their polls were poor this year.
The question, of course, is why Rasmussen’s polls have shown this bias for two consecutive election cycles now, although it’s worth noting that they seem to be slightly less biased against Democrats than they were in 2010. The natural conclusion many on the left will draw is that Scott Rasmussen is deliberately cooking his books to favor the GOP, but I think the answer is likely more mundane than that. Yes, the fact that Rasmussen weights by Party ID, something that almost no other pollster does, is going to have an influence on the numbers, but Nate Silver noted on his blog before his big move to The New York Times that Rasmussen’s polls seem to show the same bias even when they’re not using the Likely Voter model. So, what’s going on?
As James Joyner observed in November 2010, there seems to be another factor at play:
[A]nother — not mutually exclusive — explanation exists. It turns out that Rasmussen’s results, which are robocalled, don’t include cell phones. Neither do the Fox numbers. That alone could account for a 3-4 point Republican skew.
The cell phone issue is an important one. As I noted last month, there is clear evidence that polls that don’t include cell phones tend to understate the President’s support, and, indeed, we saw that very phenomenon in a series of NBC News polls that were released back in May. More and more, it’s becoming quite apparent that polls that exclude cell phones, as Rasmussen’s do, aren’t getting a clear picture of the electorate. I’m not sure that the cell phone issue alone would account for Rasmussen’s bias toward the GOP, though. Public Policy Polling is also a robocalling firm, and they most assuredly don’t display a bias toward the GOP; indeed, if they do have a bias, it’s toward the Democratic candidate more often than not. Of course, PPP doesn’t weight for Party ID in the manner that Rasmussen does, so that likely explains why the two firms’ results are usually so dissimilar.
Scott Rasmussen runs his polls based on the theory that he, and only he, knows what the makeup of the electorate is going to be, and most especially exactly how many self-identified “Republicans,” “Democrats,” and “Independents” will show up on Election Day. As we learned during September’s “Poll Denialist” kerfuffle, though, his polling model is based on assumptions that most other pollsters don’t accept. Some weighting goes on in all polling, and some of it is absolutely necessary. If a particular set of raw poll numbers seems to have drawn, at random, an oversample of a particular demographic group (by race, gender, or age), then what most pollsters typically do is adjust those numbers using the data provided by the United States Census Bureau. If they are dealing with a likely voter poll, they will likely use exit poll information from previous elections as a guide as well. For example, exit polling seems to consistently show that women vote at a slightly higher rate than men, and that older people vote at a higher rate than younger people.

However, the data for Party ID isn’t nearly as well-established. People tend to change their party self-identification depending on the political mood of the country. This is especially true of that large group of people who call themselves “Independents,” many of whom are actually Republicans or Democrats who, for whatever reason, choose to use the “Independent” label. Often, those people will move back toward their party as their support for a candidate becomes more solid. So, weighting your poll on something that fluctuates as much as Party ID doesn’t seem to make a whole lot of sense, which is why most pollsters don’t do it.
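To see why weighting by Party ID matters so much, here is a minimal sketch with entirely invented numbers: the same raw responses produce different toplines depending on the party mix the pollster assumes the electorate will have:

```python
# Invented raw sample: party ID -> (respondents, share supporting Obama).
raw = {
    "Democrat":    (350, 0.90),
    "Republican":  (300, 0.07),
    "Independent": (350, 0.50),
}

def topline(sample, party_shares):
    """Obama's overall share if each party group is weighted to the
    given assumed share of the electorate."""
    return sum(party_shares[p] * obama for p, (_, obama) in sample.items())

# Unweighted: use the party mix the sample itself happened to contain.
total = sum(n for n, _ in raw.values())
observed_mix = {p: n / total for p, (n, _) in raw.items()}
unweighted = topline(raw, observed_mix)   # about 0.511

# Weighted to a preordained, more Republican electorate (invented targets):
assumed_mix = {"Democrat": 0.33, "Republican": 0.37, "Independent": 0.30}
weighted = topline(raw, assumed_mix)      # about 0.473 -- same answers, lower topline
```

If the assumed mix turns out to match the actual electorate, the weighted number is the better estimate; if it doesn’t, the weighting bakes the same systematic lean into every poll the firm releases, which is exactly the risk described above.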
Perhaps it will turn out that Rasmussen’s model is correct this year; we won’t know until Election Day. At the moment, though, it’s simply undeniable that this model is producing results that boost Republican numbers in a way that no other pollster’s do.
H/T: Taegan Goddard