In October 2017, the Supreme Court heard oral arguments in a major case, Gill v. Whitford, that might, for the first time, impose constraints on partisan gerrymandering.
My research played a role in this case, although I am not part of the legal team. I invented a measure of partisan advantage in redistricting — the “efficiency gap” — that the plaintiffs in the case have relied on. I also filed a lengthy brief in the case that sought to inform the court about the available metrics and the relationships between them.
So I was prepared for the efficiency gap, if not my name, to come up in oral arguments. But I wasn’t quite prepared for how my work and my views would be described. It is important to correct a couple of misconceptions.
First, Justice Samuel Alito appeared to believe that my own views undercut the arguments of the plaintiffs — the very side that was drawing on my research. Alito quoted an earlier paper of mine as saying that “redistricting is at best a blunt tool for promoting partisan interests.” He felt that conclusion undercut the plaintiffs’ “dire picture about gerrymandering and its effects.”
It would indeed be surprising if the inventor of a measure intended to capture gerrymandering thought that gerrymandering is no big deal. But understood in context, the quote tells quite a different story.
The paper referenced by Alito drew on data from the 1970s through the 1990s. At that point, the impact of gerrymandering was, in fact, relatively small. I have since extended the analysis with more recent data, and the results indicate that the effect of gerrymandering is now much larger. Indeed, the size and scale of partisan gerrymandering have accelerated rapidly in just the last two redistricting cycles, as voters have become more committed to their parties and the tools for drawing gerrymanders have become more sophisticated.
This doesn’t mean that every state is enacting extreme gerrymanders. In fact, most of the current plans are pretty fair: A court intervention now would probably be limited to a few extreme cases. But people are rightly worried that partisan manipulation of district boundaries will continue to increase.
Second, my name also came up as part of a broader claim that the research in this area is still too unsettled to merit court action. Alito also raised this concern, noting that my key papers were published only in the last few years. He highlighted my criticisms of older measures such as partisan bias to suggest scholars are running in circles, with the implication that if the court relies on this work, it will find it completely debunked in a few years.
I have certainly argued against these other measures in earlier work, and my own measure is undoubtedly new. But this does not imply hopeless incoherence. Such disagreement is often how progress occurs. When the disagreements go well, the result doesn’t reinvent what’s come before; it extends and improves this earlier work. Existing methods are not discarded but elaborated and refined, allowing for measurement and understanding in new situations.
So it is with measures of partisan bias in redistricting. The efficiency gap is closely related to the earlier measures in competitive states like Wisconsin. Thus, when it comes to the Whitford case, these measures all tell the same basic story: The Wisconsin state legislative map has a significant bias in favor of Republicans. In fact, despite producing the same answers as the efficiency gap, one can argue that the earlier measures are more intuitive for such competitive states. They directly capture the notion that a party is able to maintain a majority of the seats when it has a minority of the vote, an important violation of democratic norms. But the efficiency gap then extends the basic logic of these other measures to uncompetitive states, expanding the measurement of partisan bias to more situations.
This is not the only progress that has occurred since the court last heard a redistricting case of this kind. Scholars have developed sophisticated new computing tools that automatically generate many different plans. They have also leveraged the coordinating power of the Internet to easily solicit draft plans from the wider community.
Both approaches can give a better sense of the range of possibilities in a state and the likelihood that the plan was a deliberate effort to draw a gerrymander. Indeed, many justices expressed interest in these techniques in the oral arguments. Like the measures of partisan advantage discussed above, they often elaborate and extend the logic of simpler comparisons, accounting for some limitations of the previous approaches but sometimes still producing the same result.
These new measures do not come out of left field. In fact, they do things the Supreme Court expressly asked for in the last set of redistricting cases: they use the actual election results instead of requiring hypotheticals about elections that have not occurred, and they measure the political geography of a state more accurately than ever before.
In short, the progress in this area has been impressive. These new metrics offer exactly the sort of information the Supreme Court itself has been asking for. And they tell a similar story: gerrymandering has been getting worse.
If the court wants to intervene, it has more evidence than ever before.
Eric McGhee is a political scientist who works on elections and electoral reform issues.