
Voters vote, non-voters don’t. Why is this so hard for pollsters?

- January 22, 2016
A voter places a sticker on her sweater after voting on Election Day Tuesday, Nov. 6, 2012, in Bradfordton, Ill. (AP Photo/Seth Perlman)

Pollsters aren’t very sophisticated when deciding who is and isn’t going to vote on Election Day. You’d think they would have research-backed strategies for assessing who is likely to vote. They don’t. Pollsters usually just ask respondents if they plan to vote and take respondents at their word.
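In code, the typical screen amounts to little more than the following minimal sketch; the survey field name here is hypothetical.

```python
# A minimal sketch of the typical likely voter screen described above:
# take the respondent's self-prediction at face value.

def naive_likely_voter(respondent):
    """Count a respondent as a likely voter if they say they plan to vote."""
    return respondent["says_will_vote"]

print(naive_likely_voter({"says_will_vote": True}))   # True
print(naive_likely_voter({"says_will_vote": False}))  # False
```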

These “likely voter screens” often rely on methods that are ludicrously simple and inaccurate. For example, CNN/WMUR recently released a poll, with interviews conducted by landline and cellular telephone on Jan. 13-18, that simply asked participants whether they “plan to vote in the Democratic presidential primary.” The results of that poll led to headlines like “New Polls Show Bernie Sanders With a Clinton-esque Lead in New Hampshire, Gains Nationally,” which in turn shaped coverage of the presidential primary race.

But our research (with Masa Aida), along with the findings of a recent Pew survey, suggests that this approach is terribly ineffective at actually screening for likely voters, and therefore at predicting who could be elected. This kind of polling misleads the media, researchers, campaigns and the public about the state of the race and the viability of different candidates.

That matters. Political polls affect discussions about front-runners and strategy during campaigns. They can influence public opinion by nudging people toward supporting seemingly ascendant candidates. That’s especially true for horse race polls — polls that measure candidates against each other.

Worse, newspapers, political blogs, pundits and campaigns encourage this type of polling — even though horse race polls 300 days before Election Day do not predict Election Day results.

Pollsters can do better.

Who really votes, or doesn’t?

Across seven pre-election surveys spanning three elections, with more than 31,000 total survey respondents, Masa Aida (Civis Analytics) and Todd Rogers (one of this piece’s authors) found that many self-predicted voters do not show up to the polls (“flake-out”), and that many self-predicted non-voters do go vote (“flake-in”).

We also found that self-predicted voters differ from actual voters. Actual voters are disproportionately white, older, and partisan. In other words, self-predicted voters better represent the U.S. population than actual voters do, but misleadingly so.

Past voters tend to vote. Past nonvoters tend to not vote.

How pollsters can fix this: the technical details

But all is not lost. We uncovered two easy solutions to this forecasting error.

First easy fix. Pollsters could use the voter file to sample respondents and then take a hybrid approach, combining each respondent’s voter-file history with his or her self-prediction to estimate whether that person will vote in the next election (Green and Gerber, 2006, PDF).

Using the hybrid approach, we found much higher accuracy in predicting voter behavior. For example, 93 percent of self-predicted voters in the 2008 general election who were confirmed as having voted in the past two elections actually did vote on Election Day. Among self-predicted voters who were confirmed as not having voted in either of the past two elections, only 62 percent actually cast a ballot in 2008.

What about voters who have voted before but don’t intend to vote in the next election? Interestingly, 76 percent of people who voted in the past two elections but self-predicted they would not vote in the 2008 general election surprised us, or themselves, on Election Day: They voted.
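For illustration only, here is a minimal sketch of how such a hybrid score could combine the two signals, using the 2008 rates reported above as cell-level turnout estimates. The record format and field names are hypothetical, and this is a sketch of the idea, not the authors’ actual model.

```python
# Illustrative turnout probability by (voted in both past elections?,
# self-predicts voting?), taken from the 2008 rates reported above.
# The (past non-voter, predicts not voting) cell was not reported, so it
# is left as None and would need to be estimated from matched data.
TURNOUT_BY_CELL = {
    (True, True): 0.93,   # past voter who predicts voting: 93% voted
    (False, True): 0.62,  # past non-voter who predicts voting: 62% voted
    (True, False): 0.76,  # past voter who predicts NOT voting: 76% voted
    (False, False): None, # past non-voter who predicts not voting: unreported
}

def hybrid_turnout_score(voted_last_two, predicts_voting):
    """Return an estimated probability of voting for a respondent,
    or None if no estimate is available for that cell."""
    return TURNOUT_BY_CELL[(voted_last_two, predicts_voting)]

# Hypothetical respondents matched to a voter file.
respondents = [
    {"id": 1, "voted_last_two": True, "predicts_voting": True},
    {"id": 2, "voted_last_two": False, "predicts_voting": True},
    {"id": 3, "voted_last_two": True, "predicts_voting": False},
]

for r in respondents:
    score = hybrid_turnout_score(r["voted_last_two"], r["predicts_voting"])
    print(r["id"], score)
```

A pollster could then weight each respondent by this probability, or set a cutoff, rather than counting every self-predicted voter equally.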

Clearly, merging voter file history with self-predicted vote intention not only yields a more accurate sample of voters, but is also far superior to simply taking voters at their word.

But using the voter file for a polling sample isn’t always feasible.

Second, even easier fix. Pollsters can simply ask survey participants about their past voting history. We found that people are surprisingly accurate in reporting whether or not they voted in past elections. Using people’s recalled vote history predicts who will vote better than using people’s self-predictions about whether they will vote.
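As a sketch, that screen could be as simple as the following. The recalled-history field names are hypothetical, and the two-election window simply mirrors the voter-file check above.

```python
# A minimal sketch of the second fix: screen on self-REPORTED past voting
# rather than self-PREDICTED future voting. Field names are hypothetical.

def likely_voter_by_recall(respondent):
    """Classify a respondent as a likely voter if they recall voting in
    both of the two most recent comparable elections."""
    return respondent["recalled_vote_2012"] and respondent["recalled_vote_2014"]

sample = [
    {"id": 1, "recalled_vote_2012": True,  "recalled_vote_2014": True},
    {"id": 2, "recalled_vote_2012": True,  "recalled_vote_2014": False},
    {"id": 3, "recalled_vote_2012": False, "recalled_vote_2014": False},
]

likely = [r for r in sample if likely_voter_by_recall(r)]
print([r["id"] for r in likely])  # -> [1]
```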

These two alternatives can simultaneously increase polls’ predictive accuracy and the American public’s faith in political polls.

Polling organizations need only recognize that (voting) history repeats itself.

Todd Rogers (@Todd_Rogers_) is associate professor of public policy at the Harvard Kennedy School, and director of the Student Social Support R&D Lab. He was founding executive director of the Analyst Institute, which uses randomized field experiments and behavioral science insights to understand and improve voter communication programs.

Adan Acevedo is a research fellow at the Harvard Kennedy School’s Center for Public Leadership. He was not affiliated with the original research in this article.