A bookmaker displaying odds on the result of the British general election outside the Houses of Parliament in central London on May 6, 2015, the day before the election. (Leon Neal/AFP/Getty Images)
The following is a guest post from political scientist Benjamin Lauderdale of the London School of Economics, one of the authors of electionforecast.co.uk.
*****
The 2015 election has been another triumph for the British exit poll team, which includes John Curtice (University of Strathclyde), Jouni Kuha (LSE), Rob Ford (Manchester), and Steve Fisher and Jon Mellon (Oxford). In 2010, when the exit poll came out as the polls closed, there was general incredulity that the Liberal Democrats had lost seats despite the “Clegg surge” in the national polls that followed the first debate. Yet the exit poll proved remarkably on target, predicting Conservative (Con) 307, Labour (Lab) 255, Liberal Democrat (LD) 59 vs. a final result of Con 306, Lab 258, LD 57.
The exit poll released Thursday night met with similar skepticism on the BBC and other broadcasts when it arrived predicting Con 316, Lab 239, LD 10, Scottish National Party (SNP) 58. Paddy Ashdown, former leader of the Liberal Democrats, promised to eat his hat if the numbers proved correct, and simply refused to speculate on their implications. Most forecasters (my team at electionforecast.co.uk very much included) had the Conservatives only narrowly ahead of Labour on seats, and a few had Labour ahead. Our final forecast was Con 278 (90 percent interval 252-305), Lab 267 (240-293), SNP 53 (47-57), LD 27 (21-33). In our simulations, we had a 1.3 percent chance of Con at 316 or more and a 4.8 percent chance of Lab at 239 or lower, so these numbers were surprises given the polls, but they were not (yet) as extreme as some historical poll errors in Britain (e.g. 1992). However, the most surprising result of the exit poll was not actually the Conservative lead over Labour, but rather the decimation of the Liberal Democrats. Many pollsters expected the Liberal Democrats to receive roughly the national vote share they actually received, but the party did a far worse job of converting those votes into seats than constituency polls of seats with Liberal Democrat incumbents had suggested.
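For readers curious how such probability statements are computed, here is a minimal Python sketch. The normal draws below are entirely made-up stand-ins for our model’s actual constituency-level simulations; the point is only that tail probabilities like “a 1.3 percent chance of Con at 316 or more” are simply the share of simulated elections at least that extreme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the forecast's simulated national seat totals:
# each entry is one simulated election. The real model simulates at the
# constituency level; normal draws are used here only to illustrate the math.
con_seats = np.round(rng.normal(278, 16, size=100_000))
lab_seats = np.round(rng.normal(267, 16, size=100_000))

# Central 90 percent intervals, analogous to those reported in the forecast.
print("Con 90% interval:", np.percentile(con_seats, [5, 95]))
print("Lab 90% interval:", np.percentile(lab_seats, [5, 95]))

# Tail probabilities of outcomes at least as extreme as the exit poll numbers.
print("P(Con >= 316):", np.mean(con_seats >= 316))
print("P(Lab <= 239):", np.mean(lab_seats <= 239))
```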
The 2015 exit poll was not quite as close to the mark as the 2010 one, as the final results deviated even further from pre-election expectations than the exit poll itself did. The likely totals are Con 329, Lab 233, LD 8, SNP 56 as of this writing (594 of 632 GB declarations) [JT: as of 7 a.m. Eastern time, BBC now predicting Conservative 331, Labour 232, the Lib Dems 8, the SNP 56, Plaid Cymru 3, UKIP 1, the Greens 1 and others 19.]. In both 2010 and 2015, the exit poll surprised because it was very different from what the pre-election polls seemed to imply. Our poll-based forecast was obviously more than a bit off, and no one did much better (Steve Fisher’s ElectionsEtc appears to have come closest, but both the Tories and Liberal Democrats were still outside his 95 percent prediction intervals). The same goes for the betting markets, which put somewhat higher tail probabilities on a Conservative majority, but those were still not very high. You could have made a great deal of money betting on the Liberal Democrats securing fewer than 10 seats.
So why have the exit polls tended to be so much more accurate than the pre-election polling? The biggest reason is that the exit poll simply has a better design than any pre-election poll can use. The exit poll is a panel design: most of the polling stations that are exit polled are retained from the last election. This provides a pretty good solution to one of the fundamental difficulties of polling: that the sample you draw might be unrepresentative of the target population. The exit poll sample may still be unrepresentative. But if you know, for each polling station, the relationship between your exit poll data and the actual results from the previous election, it is much easier to interpret the changes you see in this election’s exit poll data. YouGov, with its online panel, attempted a similar logic by re-interviewing respondents who had answered the vote intention question in 2010, but this seems not to have resolved the problem.
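To make the panel logic concrete, here is a rough sketch of the idea, not the exit poll team’s actual methodology: matched polling stations let you estimate change rather than levels, and the estimated change is then applied to the known previous results. All numbers and variable names below are illustrative.

```python
import numpy as np

# Illustrative sketch of the panel logic: at polling stations sampled in
# both elections, measure the change in a party's share, summarize that
# change, and apply it to last election's known results to project seats.

# Hypothetical data: previous actual share and this year's exit poll share
# for one party at the re-sampled polling stations.
prev_share = np.array([0.42, 0.35, 0.51, 0.28, 0.46])
exit_share = np.array([0.45, 0.39, 0.52, 0.33, 0.50])

# Estimate the swing from the matched stations. (The real team models
# change with covariates and regional effects, not a simple average.)
swing = np.mean(exit_share - prev_share)

# Project constituencies by applying the estimated change to their known
# previous results, rather than trusting raw exit poll levels.
prev_constituency_share = np.array([0.38, 0.44, 0.55, 0.31])
projected = prev_constituency_share + swing
print("Estimated swing:", round(swing, 3))
print("Projected shares:", projected.round(3))
```

The design choice being illustrated is that errors in the raw exit poll levels largely cancel out of the change estimate, because the same stations were polled last time and their true results are known.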
The question of why the pre-election national polls were so far off on the Labour-Conservative margin in 2015 is going to be the subject of extensive investigation over the coming weeks and months. We simply do not know yet what went wrong. The error appears to have been nearly the same magnitude as in 1992, and in the same “Shy Tory” direction. The 1992 error prompted a lengthy period of internal assessment within the British polling industry and led, among other changes, to a push by many pollsters to use political as well as demographic weighting (weighting on past vote or party identification), but 2015 shows that this is still far from a panacea. As Kieran Pedley noted before the election, the upheavals in British politics in the last few years gave some reason to fear another big polling error in 2015. At the moment, while we have a pretty good idea of why the exit poll tends to work well, we do not have a very good idea of why the pre-election telephone and online polls have proved so unreliable in British general elections.
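As a rough illustration of what political weighting means in practice, here is a hedged sketch with hypothetical data and a single weighting variable; real pollsters rake across many demographic and political targets at once. It reweights a sample so that respondents’ recalled past vote matches the previous election result.

```python
import numpy as np

# Minimal sketch of political weighting: make the weighted distribution of
# recalled past vote match the actual previous result, then use those
# weights when tabulating current vote intention. Data are hypothetical.
past_vote = np.array(["Con", "Lab", "LD", "Con", "Lab", "Lab", "Other", "Con"])
target = {"Con": 0.37, "Lab": 0.30, "LD": 0.24, "Other": 0.09}  # illustrative

# Each respondent's weight is (target share) / (unweighted sample share)
# for their recalled past vote, normalised to sum to one.
sample_share = {p: np.mean(past_vote == p) for p in target}
weights = np.array([target[p] / sample_share[p] for p in past_vote])
weights /= weights.sum()

# Weighted current vote intention (hypothetical answers).
intention = np.array(["Con", "Lab", "LD", "Con", "Lab", "Con", "Other", "Con"])
for party in target:
    print(party, round(weights[intention == party].sum(), 3))
```

The limitation the paragraph points to is visible even in this toy version: the weights can only correct for biases that show up in the weighting variables, so if the sample is unrepresentative in some other way, the adjusted estimates can still miss badly.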