How can we tell whether election results accurately reflect voters’ intentions?
The question can be broken into parts. Are proper procedures followed? Are votes correctly counted? Are voters coerced? Is the election fraudulent?
Right now, there are two main approaches to detecting election fraud: sending observers to watch polling stations, and checking vote counts at a sample of polling stations against the officially reported totals.
But there’s another approach to assessing election accuracy: election forensics, the application of statistical tests to reported election data. Polling station vote and turnout counts should look different when the process works as it should than when it does not.
The Election Forensics Toolkit website, developed by Walter Mebane and Kirill Kalinin, is a prototype that implements several methods that have been proposed as useful accuracy diagnostics. The website was developed as part of a project conducted by a team from the University of Michigan (the authors of this article along with Kalinin and Jonathan Wall) and the University of Maryland (David Backer), with funding from USAID.
Election forensics methods are based on the idea that human manipulations tend to leave distinctive traces in vote counts. Some manipulations are frauds and some stem from the normal strategic activities inherent to politics. Anomalies should be detectable using statistical techniques.
Detecting anomalies is one thing; distinguishing frauds from normal politics is more difficult. The Toolkit reflects the idea that anomalous patterns are more likely to be evidence of fraud if found consistently across multiple statistical techniques.
In our work for USAID, we analyzed elections in eight countries in 2014 and earlier years. Here we illustrate Toolkit methods by examining the June and November 2015 legislative elections in Turkey; for more details, see this working paper. Turkey’s Justice and Development Party (AKP) lost control of the legislature after the June election but regained control in November.
We use three kinds of statistics from the Toolkit: a measure that detects when people manipulating votes want their efforts to be noticed (called “P05s”); a measure that responds to both strategic behavior and frauds (“2BL”); and a measure that detects when one party is both faking turnout and stealing votes from other parties (“finite mixture model”). We also illustrate how the Toolkit supports using “hot spot analysis” to study the geographic distribution of these statistics.
The P05s statistic measures the proportion of polling station turnout percentages with a last digit of zero or five. Studies suggest that people trying to manipulate election results often use the last digits of turnout percentages to “signal” that they have manipulated turnout in order to get credit for their efforts.
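To make that calculation concrete, here is a minimal sketch in Python. It is an illustration rather than the Toolkit’s own code, and the inputs (per-station arrays of votes cast and registered voters) are hypothetical.

```python
import numpy as np

def p05s(votes_cast, registered):
    """Share of polling stations whose turnout percentage,
    rounded to a whole percent, ends in 0 or 5."""
    pct = np.round(100 * np.asarray(votes_cast) / np.asarray(registered))
    last_digit = pct.astype(int) % 10
    return np.mean((last_digit == 0) | (last_digit == 5))

# If last digits were effectively uniform, about 2 in 10 would end in 0 or 5,
# so district values far above 0.20 are the kind of anomaly this test flags.
```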
The 2BL statistic examines the mean of the second digits of polling station vote counts. The second digits of vote counts sometimes respond to frauds, but they also respond to the number of parties competing, to the balance of support for each party, to strategic voting behavior and to other things (see Chapter 9 in this book).
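Again as an illustration rather than the Toolkit’s implementation, the core calculation can be sketched as follows; the benchmark of roughly 4.187 is the mean of the second-digit Benford distribution.

```python
import numpy as np

def second_digit_mean(vote_counts):
    """Mean of the second significant digit of polling-station vote
    counts; counts below 10 have no second digit and are dropped."""
    counts = np.asarray(vote_counts, dtype=int)
    counts = counts[counts >= 10]
    return np.mean([int(str(c)[1]) for c in counts])

# Under the second-digit Benford distribution the expected mean is about 4.187;
# the test asks whether a district's observed mean departs significantly from it.
```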
The finite mixture model estimates the probability that turnout and vote proportions have a pattern that is characteristically produced by frauds. Suppose we make a scatterplot to show how polling station turnout proportions relate to the proportion of votes for a party in each polling station. If there are no frauds, we expect to see a central clump (or mode) of polling stations, with other polling stations scattered around it, but frauds produce patterns in which there are several modes.
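The Toolkit’s finite mixture model is considerably more elaborate than anything that fits in a few lines, but the multimodality it targets can be explored with a generic Gaussian mixture whose number of components is chosen by BIC. The use of scikit-learn here is our illustrative substitute, not the Toolkit’s method.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def count_modes(turnout, vote_share, max_components=3):
    """Fit mixtures with 1..max_components components to the
    (turnout, vote share) points and keep the best fit by BIC.
    One component is consistent with a single central clump; extra
    components, especially one centered near (1, 1), resemble the
    multimodal pattern described above."""
    X = np.column_stack([turnout, vote_share])
    fits = [GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
            for k in range(1, max_components + 1)]
    best = min(fits, key=lambda model: model.bic(X))
    return best.n_components, best.means_
```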
Hot spot analysis measures whether the mean of values geographically close to a particular observation differs from the overall mean. When we lack precise geographic information about polling stations but have such information about areas that include the polling stations — for example, in Turkey we have geographic information about towns — we perform hot spot analysis based on the mean of the polling stations in each area.
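A bare-bones version of that comparison, ignoring the spatial weighting between neighboring areas that a full Getis-Ord-style hot spot analysis would use, might look like the following; the data frame and column names are assumptions.

```python
import numpy as np
import pandas as pd

def area_hot_spots(stations, value="p05s_indicator", area="town", z_cut=1.96):
    """Flag areas whose mean of a polling-station statistic (for example,
    the 0/1 indicator that turnout percentage ends in 0 or 5) is
    significantly above ("hot") or below ("cold") the overall mean,
    using a simple normal approximation for each area's mean."""
    overall_mean = stations[value].mean()
    overall_sd = stations[value].std(ddof=1)
    by_area = stations.groupby(area)[value].agg(["mean", "count"])
    z = (by_area["mean"] - overall_mean) / (overall_sd / np.sqrt(by_area["count"]))
    by_area["label"] = np.where(z > z_cut, "hot",
                       np.where(z < -z_cut, "cold", "not significant"))
    return by_area
```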
No one statistic offers definitive proof of frauds, but when several statistics differ significantly from what we would expect to see in a normal election process, frauds are likely.
Turkey uses a closed-list proportional representation (D’Hondt) electoral system in 85 districts. The election rules mean we should compute statistics separately for each of the districts.
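For example, with a polling-station data frame and the helper functions sketched above, the per-district computation might look like this (the column names are again hypothetical):

```python
import pandas as pd

# 'stations' has one row per polling station, with assumed columns such as
# district, registered, votes_cast and leading_party_votes; p05s and
# second_digit_mean are the functions from the sketches above.
district_stats = stations.groupby("district").apply(
    lambda d: pd.Series({
        "p05s": p05s(d["votes_cast"], d["registered"]),
        "2bl_mean": second_digit_mean(d["leading_party_votes"]),
    })
)
```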
The “Digit Tests” figure displays the results of computing the P05s statistic for turnout and the 2BL statistic for the vote counts of the leading party. Points shown in red differ significantly from the value that should occur in the absence of frauds or anomalies, while points in blue do not. The figure shows results for November, but results for June are similar.
The red points in the P05s plot (on the left in the figure) indicate that the signaling values occur much more often than we would expect if there were no frauds.
The significantly low values in the 2BL plot (on the right in the figure) arise because four parties are usually competing in each district and some voters act strategically. Means greater than 4.4 are harder to explain as the product of legitimate political processes, so those values likely indicate that the leading party’s vote counts were distorted, probably artificially inflated.
The anomalies exhibit geographic patterns. For example, mapping hot spots shows that many of the turnout values that exhibit “signaling” are geographically clustered. Red polygons in the map indicate towns whose means of the “signaling” variable (P05s) are significantly greater than the prevailing mean, while blue polygons indicate towns with significantly lower means. Activities to manipulate turnout seem to occur especially in eastern Turkey.
A scatterplot of turnout versus the proportion of votes for AKP in Mus district in November illustrates the kind of multimodal pattern that is characteristically produced by frauds. Each point in the scatterplot shows the turnout proportion and the proportion of votes for AKP at a polling station. Instead of a central mode of polling stations around which other polling stations are scattered, there are three modes.
Especially concerning is the mode that occurs for turnout and AKP vote proportions that are greater than 0.8. Such modes can occur if a combination of faked turnout and votes taken from other parties benefited AKP.
Estimates of frauds probabilities from the finite mixture model, which detects the occurrence of such patterns, show that more than 1 percent of polling stations are affected. In eastern Turkey, the rate is sometimes two or three times that.
Mapping hot spots shows that higher than average frauds probabilities occur in eastern Turkey. Towns where these signs of frauds occur significantly more often than average are shown in red, and towns where they occur significantly less often are shown in blue. The high probabilities of AKP-favoring frauds occur not merely in eastern Turkey but specifically in areas where the party that supports Kurdish interests (HDP, the Peoples’ Democratic Party) is the dominant party.
Many districts have anomalous features. By most measures, the currently ruling AKP is the party that benefited from the extensive frauds we detect, especially in eastern Turkey.
The Toolkit is still in development, but anyone can use it to analyze election data, provided the data are in the proper format. The help button on the website has a tutorial. Detecting election frauds using published results will continue to get easier as we improve this prototype and develop new techniques.
Walter R. Mebane Jr. is a research associate at the Center for Political Studies, professor of political science and professor of statistics at the University of Michigan. Allen Hicken is a research associate professor at the Center for Political Studies and an associate professor of political science at the University of Michigan. Ken Kollman is director of the Center for Political Studies and professor of political science at the University of Michigan.