Zogby’s Biased Polls
Last Friday, WaPo’s Dana Milbank, Mystery Pollster Mark Blumenthal, and American Association for Public Opinion Research president-elect Nancy Mathiowetz (cited by Blumenthal) took issue with a John Zogby survey, commissioned by Judicial Watch, that used loaded questions in a poll about Hillary Clinton:
304. Some people believe that the Bill Clinton administration was corrupt. Whether or not you believe the Clinton administration was corrupt, how concerned are you that there will be high levels of corruption in the White House if Hillary Clinton is elected President in 2008?
26% Very concerned
19% Somewhat concerned
20% Not very concerned
33% Not at all concerned
1% Not sure
305. When thinking about Hillary Clinton as a politician, which of the following best describes her?
17% Very corrupt
25% Somewhat corrupt
21% Not very corrupt
30% Not at all corrupt
7% Not sure
Interestingly, despite calling the wording “comically biased,” Blumenthal finds that the actual responses are quite similar to those from an unbiased survey conducted months earlier and concludes, “While the comparison is obviously imperfect, the lesson here may be that well developed opinions tend to be more resistant to manipulation by leading questions.”
This isn’t the first time Zogby has been caught doing this type of polling. Last month, Heritage’s Tim Kane caught Zogby biasing a survey that found 72 percent of U.S. troops supporting withdrawal from Iraq.
In response to the criticism, the firm defended its methodology: “Zogby International believes that survey questions need not be neutered, and that there is more than one way to ask a question while maintaining fairness and even-handedness in search of an unbiased response. Questions formulated using the widely accepted Likert Scale are an example of how respondent reaction to sometimes controversial stimuli can be gauged in an unbiased manner. In a similar way, we believe the questions for Judicial Watch produced even-handed and unbiased responses to a controversy in American politics today.”
The problem here is that survey research companies have mixed incentives depending on the purpose to which a survey will be put. If a firm is doing the survey for the client’s internal use, as in campaign work, advertising dial groups, product focus groups, and the like, then the incentive is to ensure that the data accurately capture the views of the target universe. Conversely, if the poll is being created for public relations purposes, whether to lobby Congress, to generate media talking points, or the like, then there is a rather powerful incentive to slant questions in a way that will produce the desired results.
When firms do both kinds of work, and most do, the latter type of work may harm their public credibility. Presumably, though, clients are savvy enough to understand the differences between these kinds of work and can distinguish PR polls from this-is-how-it-really-is data.