
A poll is a snapshot, not a forecast

- December 10, 2013

Here on the Monkey Cage, Kenneth Bunker and Stefan Bauchowitz wrote:

The 2012 presidential election in the United States saw an increase in the popularity of poll aggregators . . . forecasts were more accurate than those of any given pollster. They proved that aggregating polls could be more useful than using a single poll to predict the results of an election.
This prompted us, at TresQuintos, to develop a model of our own. We tested it on the 2013 presidential election in Chile, which took place Nov. 17. . . . We also proved that aggregating polls could be more useful than using a single poll to predict the result of an election.

So far, so good. The polls are out there, let’s put them together and do better. I particularly like that Bunker and Bauchowitz want to make use of structural regularities in elections, which is the sort of thing that Kari Lock and I (and various others) have done to combine information to get good forecasts months before the election. At a more technical level, Bunker and Bauchowitz had to deal with the challenges of multiparty elections, which are inherently difficult to predict (and thus, with potentially much to gain from a model-based approach). As they put it:

The stable political system in the United States, together with the high frequency of polls, favored a model based on few assumptions. By contrast, Chile’s unstable political system, together with the low frequency of polls, forced us to build a model with additional assumptions.

Yup. Of course, assumptions can go wrong, but that’s okay: When assumptions go wrong, that’s when we learn something. No pain, no gain.
But here’s the part I don’t quite agree with. Bunker and Bauchowitz write:

Some critics argue that forecasts made by poll aggregators should not be compared to predictions made by pollsters. We believe the contrary; poll aggregators and pollsters are essentially at odds. They compete against each other to get the numbers right.

I do agree with the above passage in the following sense. If pollsters release something that they call a forecast, then, sure, it’s only fair to treat it as such, and if you can beat the forecast, you deserve credit for it. But more generally, a poll is a snapshot, not a forecast. A poll can be useful in constructing a forecast (see, for example, our recent post, “Republicans on track to retain control of House in 2014,” which is based on work of Bafumi, Erikson and Wlezien to forecast midterm election results many months ahead of time given generic-ballot polls), but I think it’s important to separate the two things:
At the first stage, a poll is a snapshot. It can be a good snapshot or a bad snapshot (for example, because of problems with question wording, sampling or nonresponse).
At the second stage, various information, including snapshots, can be combined to make a forecast. As Bunker and Bauchowitz say, this is not always so easy, especially in an environment with an unpredictable outcome and sparse information. Successful forecasters deserve credit, but I see what they’re doing as making use of the polls, not competing with them.
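To make the two-stage distinction concrete, here is a minimal sketch in Python. It is not the TresQuintos model, and the numbers are made up: stage one pools the poll snapshots by precision weighting, and stage two shrinks that snapshot toward a hypothetical "structural" prior (think fundamentals or past-election regularities). A real forecast would also have to model how opinion can shift between the poll dates and election day, which is exactly why the snapshot and the forecast are different objects.

```python
# Minimal sketch of the two-stage idea (illustrative only; not the
# Bunker-Bauchowitz model, and all numbers are hypothetical).

import math

# Stage 1: each poll is a snapshot -- a share estimate with sampling error.
# (proportion supporting the candidate, sample size)
polls = [(0.46, 900), (0.44, 1200), (0.48, 700)]

def snapshot(polls):
    """Precision-weighted average of the polls and its standard error."""
    weights = [n / (p * (1 - p)) for p, n in polls]  # 1 / sampling variance of each poll
    est = sum(w * p for w, (p, _) in zip(weights, polls)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return est, se

# Stage 2: combine the snapshot with other information -- here a crude
# structural prior -- to form a forecast, again by precision weighting.
prior_mean, prior_se = 0.45, 0.04   # hypothetical "fundamentals" prediction
poll_mean, poll_se = snapshot(polls)

w_polls = 1 / poll_se**2
w_prior = 1 / prior_se**2
forecast = (w_polls * poll_mean + w_prior * prior_mean) / (w_polls + w_prior)

print(f"snapshot: {poll_mean:.3f} (se {poll_se:.3f}); forecast: {forecast:.3f}")
```

The point of the sketch is simply that the forecast uses the polls as inputs alongside other information; it is a different object from any single poll, so it is not really in competition with the pollsters.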