Immigration Polls and Dog Whistle Effects
Mark Blumenthal examines the vastly different results in major polls on the immigration issue conducted by NBC/WSJ and CBS/NYT. “Both ask about essentially the same provision in the bill, and include many of the same elements of that proposal, yet show diametrically opposite results.”
The NBC/WSJ poll is headed up by Democrat Peter Hart and Republican Neil Newhouse, two of the top pollsters in the business. (I should note here that Newhouse is my wife’s boss and a friend. Still, there’s no questioning either man’s credentials.) The CBS/NYT poll is unsigned but I presume they’re using top-notch folks as well.
Why, then, do polls conducted by seasoned professionals, who have no incentive to slant questions to please a client, differ so much?
The challenge in interpreting any of these results is to remember that most Americans have, at best, only a vague sense of what the immigration bill does, much less which provisions they favor. As I wrote previously (especially here and here), survey questions about specific proposals largely measure reactions, not preexisting opinions about the proposals. Respondents tend to listen to the text of the question and form opinions on the spot. As such, the results can vary greatly depending on the way the pollster asks the question.
Blumenthal breaks down the nuances of the questions and explains why they would lead respondents with little previous knowledge of the issue (which is to say, respondents) to give different answers. My instinctive preference is for the NBC/WSJ question, which is cleaner and has fewer potentially confusing qualifiers, but the CBS/NYT version provides more information and may well produce more accurate results.
The point, though, is that we really don’t know. Polling on issues where public opinion is unfixed (again, most issues) is simply problematic. Blumenthal includes himself in that assessment:
Unfortunately, media and campaign pollsters know little about how respondents hear these sorts of questions or what pieces of information they weigh most heavily in their answers. We often refer to these sorts of differences as “dog whistle effects” — the respondents seem to hear things we miss. There is a way to “debug” this sort of question (hint: Google “cognitive pretesting”), but it requires far too much lead time and costs far, far too much to be practical for media and campaign pollsters. So we are left to speculate about how to interpret the results.
The more results tend to diverge, the greater the odds that respondents are confronting the proposal for the first time and are simply reacting, drawing upon real (and sometimes conflicting) attitudes triggered by the information provided in each question.
Polling is a useful tool for gauging public opinion. We too often presume, though, that the public has an opinion. On an emotional issue like illegal immigration, people have very strong attitudes which are often in conflict. That doesn’t mean they have a firm grasp on the details of complicated public policy proposals.
Hell, I’m not sure most Members of Congress understand this bill.