
How do you measure 'democracy'?

- June 23, 2015

A woman prepares to vote in the local elections in the town of Khimki outside Moscow on Oct. 14, 2012. (KIRILL KUDRYAVTSEV/AFP/GETTY IMAGES)
One of the great challenges for policymakers is turning abstract concepts like “power” or “democracy” into something that can be measured and used to evaluate concrete policies. Each year, for example, the United States spends several billion dollars on democracy promotion. It would be great to know – not just for government officials, but for all of us – whether this money actually helps to nudge countries toward democracy.
The problem is figuring out what we mean by democracy. Somewhere in the vast space between Norway and North Korea is a gray zone composed of countries that are neither full democracies nor full dictatorships, and sometimes it can be extremely hard to measure the quality of their government.
A story of two post-Soviet states can illustrate the difficulty. Look at the graph below – the two countries started at about the same level of democracy after the Soviet collapse. But one saw the rise of crony capitalism, growing restrictions on media freedom, and the increasing centralization of presidential power. The other cut down on corruption, established economic ties with the West, and had a parliament that successfully stood up to the president’s unpopular choice of a prime minister. By the turn of the century, as the graph shows, their levels of democracy had completely diverged:

Two post-Soviet nations’ differing levels of democracy. (Data: Polity IV, Freedom House. Figure: Seva Gunitsky)
Any guesses for what these countries might be? Perhaps one of the Baltics for the “good” case, or Belarus for the “bad” case?
I’ll save you the suspense. What the graph actually shows are two measures of democracy for the same country: Russia. The solid line shows Russian democracy as rated by a commonly used index called Polity IV. The second, decidedly less optimistic line shows the same country as rated by Freedom House, another well-known democracy index.
Both Polity and Freedom House claim to measure the same phenomenon, and both are heavily used (in Google Scholar, they are cited about 15,000 and 47,000 times, respectively). So how did they arrive at such vastly different portrayals of Russian democracy? In both cases, plausible narratives could be constructed to explain the outcome – as I did just two paragraphs ago – but which one is closer to reality?
Anyone who has ever used a measure of democracy in their research should be very frightened by this graph. And anyone who has tried to figure out what makes countries more or less democratic should be worried. How can we begin to answer basic questions about the causes of democracy if we cannot even agree on what democracy is?
The problem is that democracy means different things to different people. Polity really cares about constraints on the elites – how much the president is checked by parliament, for example. In 1998, the Russian Duma rejected Yeltsin’s nomination of Viktor Chernomyrdin as prime minister (the unpopular Chernomyrdin received only 94 votes out of a possible 450). The rejection signaled legislative independence, and since executive constraint is the most important sub-component of Polity, the supposed increase in the power of the Duma led to a significant increase in the country’s score.
Freedom House, on the other hand, cares much more about individual rights and personal freedoms. By that measure, it sees Russia as a pretty bad place. In fact, it ranks the country on a par with places that allow the death penalty for women who commit adultery. As a result, it has been accused of having an anti-Russian bias – a charge the organization has consistently denied.
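To see how this plays out mechanically, here is a minimal sketch in Python – with invented sub-component scores and weights, not the actual Polity IV or Freedom House methodology – of how two indices that weight the same underlying facts differently can rate the same country in opposite ways:

```python
# Illustrative only: hypothetical sub-component scores for one country on a
# common 0-10 scale. These numbers are invented for the example and are not
# taken from Polity IV or Freedom House.
subcomponents = {
    "executive_constraints": 7,  # e.g., a legislature that can block appointments
    "media_freedom": 2,
    "individual_rights": 2,
    "electoral_competition": 4,
}

# Two hypothetical weighting schemes: one privileges checks on the executive
# (roughly the emphasis attributed to Polity above), the other privileges
# personal freedoms (roughly the emphasis attributed to Freedom House).
constraints_focused = {
    "executive_constraints": 0.6,
    "media_freedom": 0.1,
    "individual_rights": 0.1,
    "electoral_competition": 0.2,
}
rights_focused = {
    "executive_constraints": 0.1,
    "media_freedom": 0.3,
    "individual_rights": 0.4,
    "electoral_competition": 0.2,
}

def aggregate(scores, weights):
    """Weighted average of sub-component scores."""
    return sum(scores[k] * weights[k] for k in scores)

print(aggregate(subcomponents, constraints_focused))  # ~5.4: looks like a middling democracy
print(aggregate(subcomponents, rights_focused))       # ~2.9: looks close to authoritarian
```

The same country, the same underlying facts – yet one aggregation rule reads a defiant legislature as a big step toward democracy, while the other barely registers it against curtailed rights and media freedom.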
Disagreements like these are less about proper measurement and more about a philosophical debate on what counts as a democracy – an inherently complex and contested idea. This is a problem, because it suggests that there are no easy solutions. The Economist, for example, produces a measure of democracy that sees mandatory voting as bad for democracy because it infringes on individual rights. If I want to stay home and watch football on Election Day, that should be my right, goes the argument.
At the same time, mandatory voting clearly increases voter turnout, particularly by the poor, who have few other levers of political influence and tend to stay home when voting is voluntary. You could argue, therefore, that mandatory voting improves the quality of democratic representation. The Vanhanen measure of democracy, for example, is based in part on measuring the percentage of the population that votes in elections. Countries with mandatory voting therefore receive a higher score on the Vanhanen index and a lower score on the Economist index. Here, mandatory voting is punished or rewarded based on whether you think mass participation or individual freedom is the true hallmark of democracy. This disagreement reflects a fundamental trade-off about democracy’s essential nature – a philosophical choice rather than a methodological one. To say that one measure is more accurate than another misses the point, since they represent different visions of a highly complex phenomenon.
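A rough sketch of that trade-off, with invented numbers and deliberately simplified scoring rules (not the actual Vanhanen or Economist formulas): the same turnout boost that mandatory voting produces pushes a participation-based score up and a rights-based score down.

```python
# Hypothetical, simplified scoring rules -- not the real Vanhanen or
# Economist Intelligence Unit formulas.

def participation_score(turnout_pct):
    """Rewards mass participation: higher turnout -> higher score (0-10 scale)."""
    return 10 * turnout_pct / 100

def rights_score(base_score, compulsory_voting):
    """Starts from a base score and deducts a point if voting is compulsory,
    treating the mandate as an infringement on individual choice."""
    return base_score - (1.0 if compulsory_voting else 0.0)

# The same hypothetical country before and after adopting mandatory voting.
print(participation_score(55), rights_score(8.0, False))  # 5.5 8.0  (voluntary voting)
print(participation_score(85), rights_score(8.0, True))   # 8.5 7.0  (mandatory voting)
```

One rule sees the reform as a gain for democracy, the other as a loss – which is the philosophical disagreement in miniature.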
This is potentially a huge problem. As I show in some recent research, measures of democracy in the post-Soviet space often disagree about particular countries, and occasionally even draw contradictory conclusions from observing the same event. Good measures are especially important for evaluating mixed regimes, where foreign assistance can make the biggest difference, and where policymakers need accurate assessments of the impact of foreign aid. Yet those are precisely the countries where our measures fare the worst, creating the potential for misguided policies and wasted efforts.
So what is to be done? These problems don’t mean that we should get rid of democracy measures completely – but they do mean that we should be extremely careful in using them. Two caveats are necessary. First, whichever measure is chosen, its inherent biases have to be made explicit when it is used to figure out what causes democracy. Second, the choice of measure has to be justified in relation to what is actually being examined. Freedom House should not be used to evaluate democracy’s relationship with corruption or economic equality, for example, because these are already built into the measure itself. Polity IV might be appropriate for research that examines constraints on governing elites, but not for studying the expansion of suffrage over the nineteenth century (Polity IV, as you might have figured, does not really care about who gets to vote).
Measures of democracy can mislead as much as they clarify, as the figure above demonstrates all too clearly. This is a problem not just for academics, but for policy-makers and anyone who cares about democracy more generally. And while a more careful approach to these measures is hardly a panacea, knowing the drawbacks and assumptions of particular measures can only serve to improve our understanding of democracy. Highlighting the limitations of a measure can also highlight its strengths.
Seva Gunitsky is an assistant professor in the Department of Political Science at the University of Toronto. During 2014-15, he is a Fung Global Fellow at Princeton University. This post is related to his research for a recent edited volume on state rankings, published by Cambridge University Press.