POLLING ERROR

Eugene Volokh is highly skeptical of a VNS poll because its sample is 4% Jewish, twice the Jewish share of the actual population. He also has a follow-up here.

Volokh’s explanation for this phenomenon, that some groups are more likely to respond than others, is certainly plausible. However, an even more likely explanation is simple random chance. Unless this trend manifests itself in a lot of polls, it’s almost certainly just random variation. For one thing, the confidence interval of these things is usually at the .05 level*, which means that one in twenty times, the whole poll is bunk. For another, unless there is some serious work to stratify the sample (which there usually isn’t in these polls), the chances that the demographics of any particular poll will match up precisely with the population as a whole are not very high. And that’s even more true of comparatively small groups like American Jews. You’d be much less likely to find a poll that has, for example, twice the number of Baptists as their actual percentage in society.

*See comments below for discussion of this terminology.
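
To put a rough number on that intuition, here is a minimal simulation sketch (not from the original post; the 1,100-person sample and the 2% vs. 16% shares are illustrative assumptions) showing how much a subgroup’s observed share bounces around under pure random sampling:

```python
import random

def simulate_shares(true_share, n_respondents=1100, n_polls=10_000):
    """Draw many simulated polls; return the subgroup's observed share in each."""
    shares = []
    for _ in range(n_polls):
        hits = sum(random.random() < true_share for _ in range(n_respondents))
        shares.append(hits / n_respondents)
    return shares

for p in (0.02, 0.16):  # a small group vs. a large one (illustrative shares)
    shares = sorted(simulate_shares(p))
    lo, hi = shares[250], shares[-251]  # middle 95% of 10,000 simulated polls
    print(f"true share {p:.0%}: 95% of polls land between {lo:.1%} and {hi:.1%}, "
          f"a swing of about {(hi - lo) / p:.0%} of the true value")
```

The absolute wobble is similar in both cases, but for the small group it is a far larger fraction of the true share, which is the sense in which small groups are more likely to come out conspicuously over- or under-represented.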

FILED UNDER: Political Theory
About James Joyner
James Joyner is Professor and Department Head of Security Studies at Marine Corps University's Command and Staff College. He's a former Army officer and Desert Storm veteran. Views expressed here are his own. Follow James on Twitter @DrJJoyner.

Comments

  1. John Lemon says:

    Correction, the “.05” you cite is actually the desired level of statistical significance that corresponds to a 95% confidence interval. The actual confidence interval is reported in the actual units of the study (e.g., age, height, % of approval for Bush). The “confidence interval” is actually the same thing as the reported “margin of error,” which with most surveys around 1,100 respondents is about +/-3%.
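
    To make the distinction concrete, here is a minimal sketch (the 52% approval figure is an illustrative assumption) of a confidence interval reported in the actual units of the study:

    ```python
    import math

    def confidence_interval(p_hat, n, z=1.96):
        """95% confidence interval for a sample proportion, in the units of the study."""
        moe = z * math.sqrt(p_hat * (1 - p_hat) / n)
        return p_hat - moe, p_hat + moe

    # e.g., 52% approval in an 1,100-person poll (illustrative numbers)
    lo, hi = confidence_interval(0.52, 1100)
    print(f"52% +/- {(hi - lo) / 2:.1%} -> interval ({lo:.1%}, {hi:.1%})")
    ```

    For any share near 50%, this works out to the familiar +/- 3% at n of roughly 1,100.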

    With a pure random sample with 100% response rates and a “large” sample size, the sample demographics should match the population demographics on average. Volokh is right to point out that the biggest problems with polls these days are the non-response rates — i.e., there are certain categories of people who do not tend to answer phone polls and these people are much different than those who do on many characteristics. Today’s response rate for your typical political poll hovers between 30 and 40%, down from about 66% a decade ago.

  2. James Joyner says:

    I agree that the problems Volokh cites and you reiterate are problems, too. I’m just saying on a given poll, the most likely explanation for odd results is just that it’s an outlier. I used to use this article by the chairman of Lou Harris polling to introduce these concepts to my Intro to American Govt. students, in which he notes “the actual margin of error for this poll is infinite.” This one by John Zogby is also useful.

    Polling terminology is rather confusing, since people use it differently. I’ve seen the 95% referred to alternately as a confidence interval, confidence level, and level of significance in stats textbooks. See here vs here, for example.

    As a matter of mathematical probability, a truly random sample will, as it grows, approach the characteristics of the universe at a given point in time. But it’s not going to be very precise with small groups. For example, Mennonites, even if they were just as likely as Baptists to answer polls, are much more likely to be over- or under-represented simply because of their relatively small proportion of the population.
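
    A quick sketch of that claim (the group shares and the 1,100 sample size are illustrative assumptions): under pure random sampling, the chance of a group showing up at double its true share falls off sharply as the group gets bigger.

    ```python
    import random

    def prob_at_least_double(true_share, n=1100, polls=5_000):
        """Chance a pure random poll of size n shows a group at >= 2x its true share."""
        doubled = 0
        for _ in range(polls):
            hits = sum(random.random() < true_share for _ in range(n))
            if hits / n >= 2 * true_share:
                doubled += 1
        return doubled / polls

    for name, share in (("a 0.5% group", 0.005), ("a 16% group", 0.16)):
        print(f"{name}: chance of showing up at twice its real share ~ "
              f"{prob_at_least_double(share):.2%}")
    ```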

  3. John Lemon says:

    Well, I looked at the two definitions you linked to and they are not different. The confidence interval (a.k.a. the margin of error at the bottom of tables in newspapers) is measured in the actual units you are studying (and in polls that is often “% support”). The confidence level is the area underneath the normal distribution that is associated with various standard errors (standardized in Z or t-scores). The norm in survey sampling is to choose a 95% confidence level, which is equivalent to +/- 1.96 standard errors. With sample sizes of approximately 1,100 and assuming the “worst case standard error” (where public opinion is evenly split 50/50 on a dichotomous question), you generate the +/- 3% margin of error (confidence interval) for the sample. If one is concerned that a 95% confidence level will give you bunk 5% of the time, one could alternatively choose a 99% level (which is frequently done in studies with higher stakes, e.g., drug trials).
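
    As a quick check on the arithmetic above, a sketch using the standard normal cutoffs (1.96 and 2.576 standard errors for the 95% and 99% levels):

    ```python
    import math

    n, p = 1100, 0.5  # sample size and the worst-case 50/50 split from above
    for level, z in (("95%", 1.96), ("99%", 2.576)):
        moe = z * math.sqrt(p * (1 - p) / n)
        print(f"{level} confidence level: margin of error +/- {moe:.1%}")
    # 95% confidence level: margin of error +/- 3.0%
    # 99% confidence level: margin of error +/- 3.9%
    ```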

    As for the odd results with the Jewish subpopulation, that is not a statistical problem per se, but one related to research design issues (namely, errors in random sampling: when different groups of people respond at different rates, your sample is no longer random, as there is an element of self-selection). The real problem with small sub-populations is that their small sample size makes it difficult to compare that particular sub-sample with a larger sub-sample, since the smaller sample has a much larger standard error and it is more difficult to “achieve” statistical significance, making it more likely you will commit a Type II error (which has an unknown probability of occurring).
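
    To see how fast the standard error grows as the sub-sample shrinks, a sketch (the 44-person figure is an illustrative assumption, roughly a 4% subgroup of an 1,100-person poll):

    ```python
    import math

    def worst_case_moe(n, z=1.96):
        """Worst-case (p = 0.5) 95% margin of error for a proportion from n respondents."""
        return z * math.sqrt(0.25 / n)

    for n in (1100, 44):  # full sample vs. a ~4% subgroup of it (about 44 people)
        print(f"n = {n:4d}: margin of error +/- {worst_case_moe(n):.0%}")
    # n = 1100: margin of error +/- 3%
    # n =   44: margin of error +/- 15%
    ```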

    Trust me on this. I trained in this, taught it for 9+ years, and did it in my venture into the world of small business. …and I’m about to give a final exam on it so I’ve pretty much had it up to here with it. 🙂

  4. James Joyner says:

    Right. I took those classes, too. I even taught a few of them myself. :)

    Of course, I just taught sampling as a small subsection of either a social science research class or intro to polisci. I’ve never been a huge fan of surveys, for the reasons you cite along with those in the Lou Harris article linked above. Not to mention that there’s not much application of survey methodology to national security policy analysis.

    My recollection of the terminology was wrong. The second link seemed to confirm my usage, as it has the “confidence level” being the 95% and a blank for “confidence interval.” But I see that later on they define it to mean the same as margin of error. It has been a while since I took classes on this stuff (1992?) and a while since I taught it (at least three years), although I do recall that the textbooks seemed to use all sorts of different and even conflicting language for the same concepts.

  5. John Lemon says:

    In large part this is related to the fact that statisticians are pretty horrid teachers. The reason I like teaching the stats class is that students have such low expectations of what a good stats prof would be, that I invariably look pretty swell.

  6. James Joyner says:

    Heh.

    The first stats book I used as a teacher was that humongous tome from Earl Babbie, in its 8th edition or whatever. While I thought it was well written, it was way over the heads of the criminal justice students who mainly populated the class. (The book was foisted upon me my first quarter at Troy State.) The amusing thing is that it came with a video of Babbie himself teaching some little lessons. The guy was such a dork it was unbelievable.

  7. John Lemon says:

    I hear ya on Babbie. It tends to have a great deal of momentum and is on its 11th edition now or something. It is comprehensive, covering research design, where most “statistical problems” actually emanate, but it is also very boring and incomprehensible. Fortunately, there are some decent stats books out there now, including a Cartoon Guide to Stats.