
Tracking public opinion with biased polls

- April 9, 2014

The traditional gold standard of polling is probability sampling, where you contact people selected at random from a list of the population. But probability sampling isn’t so great anymore. With response rates in the 10 percent range, there is concern that the select group of people who happen to respond to surveys are nothing like a random sample of the population of adult Americans, or even the population of voters. (It’s reasonable to suppose that the sort of people who are more likely to vote are also more likely not to hang up on a pollster.)
An alternative approach is opt-in polling, often performed on the Internet. You contact lots of people in a non-invasive way and then interview those who choose to respond. The coverage of an opt-in poll depends a lot on how it is conducted, that is, on where the pool of potential respondents comes from, and the resulting sample will typically differ from the target population in many obvious ways. You'll have to adjust the data to match the population. Survey adjustment is recommended even for probability samples (because, as noted above, non-response renders such samples non-representative), but it's essentially required for opt-in polls.
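To see what "adjusting the data to match the population" means in its simplest form, here is a minimal poststratification sketch. All numbers are invented for illustration, and a single three-level age variable stands in for the many crossed demographic cells used in real adjustments:

```python
# Toy poststratification: the sample skews young, so we reweight the
# within-cell estimates by known population shares instead of sample shares.
# Every number below is made up for illustration.

# Observed support for a candidate within each demographic cell of the sample
sample_cell_means = {"young": 0.70, "middle": 0.55, "old": 0.40}

# Share of each cell in the (skewed) sample vs. the target population
sample_shares = {"young": 0.60, "middle": 0.30, "old": 0.10}
population_shares = {"young": 0.30, "middle": 0.35, "old": 0.35}

# Unadjusted estimate: cell means weighted by their sample frequency
raw = sum(sample_cell_means[c] * sample_shares[c] for c in sample_cell_means)

# Poststratified estimate: the same cell means, reweighted to population shares
adjusted = sum(sample_cell_means[c] * population_shares[c] for c in sample_cell_means)

print(raw)       # roughly 0.62: inflated by the overrepresented young cell
print(adjusted)  # roughly 0.54: closer to the population quantity
```

The catch, of course, is that with many crossed demographics some cells have few or no respondents, which is where the multilevel modeling described below comes in.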
That said, opt-in polls offer several advantages, most notably convenience and cost, as well as the ethical advantage that respondents are not hassled so much to participate. (In polls that are conducted using probability samples and analyzed accordingly, it is traditional to keep bugging people over and over to get them to participate in the survey, so as to avoid non-response problems.) In an era of autodialing and robopolls, ethical concerns of hassling potential respondents are more relevant than ever.
The question then arises, do non-representative polls work, in the sense of giving reasonable estimates after adjustments? Our answer is yes, at least for the task of tracking election campaigns, a convenient example because we can compare our estimates to a large set of existing polls and also the actual election outcome.
The project is a collaboration with Wei Wang of the Columbia University statistics department and Sharad Goel and David Rothschild of Microsoft Research. We analyzed data from an opt-in poll from the Xbox—that’s right, the Xbox gaming platform—collected during the month or so before the 2012 presidential election. Here’s what we found, in a paper that is under review for the International Journal of Forecasting:

With proper statistical adjustment, non-representative polls can be used to generate accurate election forecasts, and often faster and at less expense than traditional survey methods. We demonstrate this approach by creating forecasts from a novel and highly non-representative survey dataset: a series of daily voter intention polls for the 2012 presidential election conducted on the Xbox gaming platform. After adjusting the Xbox responses via multilevel regression and poststratification, we obtain estimates in line with forecasts from leading poll analysts, which were based on aggregating hundreds of traditional polls conducted during the election cycle. We conclude by arguing that non-representative polling shows promise not only for election forecasting, but also for measuring public opinion on a broad range of social, economic and cultural issues.
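The two-step logic of multilevel regression and poststratification (MRP) can be sketched as follows. This is a toy version with invented data: in place of the full multilevel regression used in the paper, each cell's estimate is simply shrunk toward the overall mean in proportion to how little data the cell has, which is a crude stand-in for partial pooling:

```python
# Toy MRP sketch: (1) model support within each demographic cell, partially
# pooling sparse cells toward the overall mean; (2) poststratify by weighting
# the modeled cell estimates by known population cell counts.
# All data below are invented; the paper fits a full multilevel model.

cells = {
    # cell: (respondents, supporters, population count)
    "18-29": (400, 280, 30_000),
    "30-44": (150, 80, 35_000),
    "45-64": (40, 18, 50_000),
    "65+":   (10, 3, 25_000),
}

n_total = sum(n for n, _, _ in cells.values())
y_total = sum(y for _, y, _ in cells.values())
grand_mean = y_total / n_total

# Step 1: shrink each cell's raw proportion toward the grand mean; k acts
# like a prior sample size, so small cells (e.g. "65+") are pulled hardest.
k = 30
modeled = {c: (y + k * grand_mean) / (n + k) for c, (n, y, _) in cells.items()}

# Step 2: poststratify, weighting modeled cell estimates by population counts
# rather than by the (skewed) sample counts.
pop_total = sum(p for _, _, p in cells.values())
mrp_estimate = sum(modeled[c] * p for c, (_, _, p) in cells.items()) / pop_total
```

The payoff is that even a cell with ten respondents contributes a stable estimate, because it borrows strength from the rest of the sample before being weighted up to its population share.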

And here are the graphs. Indeed, the Xbox sample looks a lot different from the general population of voters:

And the unadjusted estimate from the Xbox sample is terrible:
[Figure: unadjusted estimate from the Xbox sample]
But our adjusted estimate is pretty good: better, in fact, than the averaged series from Pollster.com. Here's our series, converging nicely to the election outcome:
[Figure: adjusted Xbox estimate over the campaign]
And we also get good estimates in the 12 largest states:
[Figure: adjusted estimates in the 12 largest states]
The moral of the story is not that Xbox always wins or that a non-representative poll will always do fine. It’s all about the adjustment. For a political poll with background variables such as age, ethnicity, state, and previous vote, we have a lot of good information that allows a sharp adjustment. In more unknown settings, we have to be more careful. But for many purposes it looks like we can move beyond the brute force approach of calling thousands of people on the phone.