In this Nov. 4, 1948 file photo, President Harry S. Truman holds up an election day edition of the Chicago Daily Tribune, which, based on early results, mistakenly announced “Dewey Defeats Truman.” (BYRON ROLLINS/AP)
We rely on opinion polls, not just to get a line on the horse race that is the presidential election campaign, but also to learn what Americans (and people in other countries) think about important issues. As George Gallup said many decades ago, polling is central to modern democracy.
But there are real and growing concerns that polls can’t always be trusted. In statistics jargon, we talk about “total survey error.” That includes not just sampling error (the familiar “margin of error” of plus or minus three percentage points or whatever) but also nonsampling error, either coming from survey responses we don’t believe or from a pool of respondents who do not match the population of interest.
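To make that familiar “plus or minus three points” concrete, here’s a quick back-of-the-envelope sketch; the sample size of 1,000 and the 95 percent level are just illustrative, and the calculation captures sampling error only — it says nothing about the nonsampling part.

```python
import math

def sampling_margin_of_error(n, p=0.5, z=1.96):
    """Classical 95% margin of error for a proportion, assuming a simple
    random sample -- sampling error only, no nonsampling error."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical poll of 1,000 respondents: roughly plus or minus 3 points.
print(round(100 * sampling_margin_of_error(1000), 1))  # about 3.1
```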
I’m speaking on a panel at a conference in September on Total Survey Error; the other participants are Cliff Zukin (professor of public policy and political science at the School of Planning and Public Policy and at the Eagleton Institute of Politics, Rutgers University) and Scott Keeter (director of survey research at Pew Research Center). Zukin recently wrote an op-ed, “What’s the Matter With Polling?” In it, he says:
Election polling is in near crisis, and we pollsters know. Two trends are driving the increasing unreliability of election and other polling in the United States: the growth of cellphones and the decline in people willing to answer surveys. . . . To top it off, a perennial election polling problem, how to identify “likely voters,” has become even thornier.
Zukin writes:
The 1991 Telephone Consumer Protection Act has been interpreted by the Federal Communications Commission to prohibit the calling of cellphones through automatic dialers, in which calls are passed to live interviewers only after a person picks up the phone. . . . Dialing manually for cellphones takes a great deal of paid interviewer time, and pollsters also compensate cellphone respondents with as much as $10 for their lost minutes.
Interesting. This seems fair enough to me; the respondents are helping the survey organization so they deserve to be compensated for their time. As a non-cellphone user, I just wish there were a similar law banning the use of automatic dialers more generally. If a survey isn’t worth the time of a human dialer, maybe it’s not worth the time of the respondents either.
After reviewing various difficulties with real-world polling, Zukin concludes:
Those paying close attention to the 2016 election should exercise caution as they read the polls. . . . We are less sure how to conduct good survey research now than we were four years ago, and much less than eight years ago. . . . Polls and pollsters are going to be less reliable. We may not even know when we’re off base. What this means for 2016 is anybody’s guess.
Fair enough.
There’s only one place where Zukin steps off the cliff, and it’s on a technical issue which, as a statistician, I noticed right away. Zukin writes:
Statisticians make a primary distinction between two types of samples. Probability samples are based on everyone’s having a known chance of being included in the sample. This is what allows us to use mathematical theorems to confidently generalize from our sample back to the larger population, to calculate the odds of our sample’s being an accurate picture of the public and to quantify a margin of error.
So far, so good. But in real life there are no probability samples of humans. With survey response rates below 10 percent, there is no way to know the probability of an individual being included in the sample. You can know the probability that the survey organization will try to reach a person — that’s easy; it just depends on exactly how the address or telephone number or e-mail is sampled from a given list. But it’s impossible to know the probability that this person will actually be included in the sample, as this depends on the probability that the person is reached, multiplied by the probability that he or she agrees to respond given that he or she is reached. And neither of these two probabilities is ever known.
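Here’s the decomposition I have in mind, with made-up numbers purely for illustration:

```python
# All numbers here are hypothetical, purely for illustration.
p_attempt = 1 / 50_000   # knowable: how the phone number is drawn from the list
p_reached = 0.40         # not knowable in practice: the person actually picks up
p_agrees  = 0.15         # not knowable in practice: agrees to respond, given reached

# The inclusion probability that the "probability sample" definition requires:
p_included = p_attempt * p_reached * p_agrees
print(p_included)

# Only the first factor is known; the last two are never observed,
# so neither is their product.
```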
So the “probability sample” is a useful assumption in that it allows some helpful mathematics, which lets us compute the “margin of error”; this can be considered a lower bound on the uncertainty associated with a survey estimate, in a sort of best-case scenario in which nonresponse is random. It’s like the “frictionless puck” in classical physics: a simplifying assumption that allows us to make some calculations that, in a low-friction world, can be reasonable approximations.
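Here’s a toy simulation of that best-case scenario versus a less friendly one (every number is invented for illustration): when nonresponse is unrelated to the opinion being measured, the estimate lands close to the truth; when supporters of one candidate are just a bit less likely to respond, the error can be several times the nominal margin of error.

```python
import random

true_support = 0.52   # hypothetical true level of support
n_dialed = 10_000     # hypothetical number of people the pollster tries to reach

def poll(differential_nonresponse):
    """Simulate one poll. Each person dialed either responds or not; if
    differential_nonresponse is True, supporters respond slightly less often."""
    responses = []
    for _ in range(n_dialed):
        supporter = random.random() < true_support
        p_respond = 0.06 if (differential_nonresponse and supporter) else 0.08
        if random.random() < p_respond:
            responses.append(supporter)
    return 100 * sum(responses) / len(responses)

# With roughly 700-800 respondents the nominal margin of error is about
# plus or minus 3.6 points, but only the first scenario respects it.
print("random nonresponse:      ", round(poll(False), 1))  # typically near 52
print("differential nonresponse:", round(poll(True), 1))   # typically several points low
```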
But here’s where Zukin gets confused. He continues:
Almost all online election polling is done with nonprobability samples. These are largely unproven methodologically, and as a task force of the American Association for Public Opinion Research has pointed out, it is impossible to calculate a margin of error on such surveys.
Sorry, but no, and I really hope Zukin isn’t teaching this to the public policy students at Rutgers. Actually, all online election polling (not “almost all”) is done with nonprobability samples. And so is all telephone polling. And all face-to-face polling. And all mail polling. Remember: “Probability samples are based on everyone’s having a known chance of being included in the sample.” And we never know this probability.
As I’ve discussed elsewhere, just about any survey requires two steps: sampling and adjustment. As survey researchers, we want to do our best at each step: good sampling makes adjustments more reasonable, and good adjustment can fix problems with the sampling. But I don’t think anything is gained by garbling statistical definitions and wrongly implying that traditional telephone polling is based on everyone’s having a known chance of being included in the sample. Or incorrectly stating that it’s impossible to calculate a margin of error on such surveys.
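To give a sense of what I mean by adjustment, here’s a minimal sketch of one common approach — weighting the sample to known population totals, or poststratification; the age groups, population shares, and poll numbers are all invented for illustration.

```python
# Hypothetical poll, broken into age groups. For each group: number of
# respondents, number supporting the candidate, and that group's share of
# the target population (known from, say, census figures).
cells = {
    "18-34": dict(n=150, yes=90,  pop_share=0.30),
    "35-64": dict(n=500, yes=260, pop_share=0.50),
    "65+":   dict(n=350, yes=140, pop_share=0.20),
}

# Raw estimate: whoever happened to respond, counted equally.
raw = sum(c["yes"] for c in cells.values()) / sum(c["n"] for c in cells.values())

# Adjusted estimate: support within each group, weighted by the group's
# known population share rather than by how many of its members responded.
adjusted = sum(c["pop_share"] * c["yes"] / c["n"] for c in cells.values())

print(f"raw: {raw:.1%}, adjusted: {adjusted:.1%}")
```

The point is not that weighting is magic; the adjustment is only as good as the assumption that, within each cell, the people who respond look like the people who don’t.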
Fortunately, Zukin’s error was buried in the middle of a long op-ed, so maybe nobody noticed it. And in any case, we’ll have a chance to correct it in our panel discussion in a few weeks.