
Chief Justice Roberts and other judges have a hard time with statistics. That’s a real problem

October 31, 2017
Associate Justice Anthony Kennedy (far left), Chief Justice John Roberts (center) and Associate Justice Stephen Breyer (center right) walk together on the Harvard Law School campus during the school’s bicentennial celebration last week. (AP/Steven Senne)

Many political scientists use quantitative methods. We analyze data with statistical tools and techniques, and most of the time, this practice is uncontroversial. However, quantitative political science has recently become politically contentious, thanks in part to a case on political redistricting in front of the Supreme Court, Gill v. Whitford. The plaintiffs argue against a redistricting scheme that they think goes too far in favoring one political party over the other. They propose that a relatively simple and straightforward quantitative measure could be used by the Court to determine whether a redistricting plan is unfairly partisan. However, in oral arguments, Chief Justice John Roberts brushed the math aside, dismissing it as “sociological gobbledygook” without providing further explanation.
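For concreteness: the measure the Gill v. Whitford plaintiffs proposed is the “efficiency gap,” which compares the two parties’ “wasted” votes (votes for a losing candidate, plus votes for a winning candidate beyond the bare majority needed to win). Below is a minimal Python sketch of the calculation; the district vote counts are hypothetical, and a real analysis must also handle complications such as uncontested races and turnout differences.

```python
# Minimal sketch of the "efficiency gap" measure at issue in
# Gill v. Whitford. The district vote counts below are hypothetical.

def efficiency_gap(districts):
    """districts: list of (votes_a, votes_b) pairs, one per district.

    A vote is "wasted" if cast for the losing candidate, or cast for
    the winning candidate beyond the bare majority needed to win (one
    common convention). The efficiency gap is the difference in the
    parties' wasted votes as a share of all votes cast.
    """
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        needed = (a + b) // 2 + 1        # bare majority in this district
        if a > b:
            wasted_a += a - needed       # winner's surplus votes
            wasted_b += b                # all of the loser's votes
        else:
            wasted_b += b - needed
            wasted_a += a
        total += a + b
    return (wasted_a - wasted_b) / total

# Hypothetical plan: party A narrowly wins three districts while
# party B's voters are packed into one, wasting many B votes.
plan = [(55, 45), (55, 45), (55, 45), (20, 80)]
print(f"efficiency gap: {efficiency_gap(plan):+.1%}")  # about -33%
```

A gap far from zero in either direction indicates that the plan wastes one party’s votes at a much higher rate than the other’s, which is the sense in which the measure is simple and straightforward.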

Unfortunately, this is not the first time the Supreme Court or even Roberts has categorically dismissed relevant quantitative evidence. In a recent article in the Journal of Empirical Legal Studies, Ryan Enos, Christopher Havasy, and I document and analyze many cases where quantitative evidence has been dismissed on bogus grounds. In particular, we focus on a common mistake that we call the negative effect fallacy. Judges often dismiss quantitative evidence by claiming that it is never easy to prove a negative. Here’s where they go wrong.

The word ‘negative’ has multiple meanings.

Most people are familiar with the adage “you can’t prove a negative” or some close variant. This makes sense when the word negative is used philosophically, referring to the absence of something. For example, proving that President Trump exists is easy, but proving that Santa Claus does not exist is hard. The adage can be misleading — particularly because positive statements can always be rewritten as negative ones, and vice versa — but it reminds us that induction doesn’t produce certain conclusions. For example, just because the sun has risen every day thus far, we can’t be 100 percent certain it will rise tomorrow.

To the extent that the prove-a-negative adage is useful, it clearly doesn’t apply to the arithmetical definition of the word negative, which more or less refers to a number that has a minus sign in front of it. Many social scientific studies try to figure out whether something affects something else — for example, whether divided government influences legislative productivity. A study might find that divided government increases legislative productivity, and we could call this a positive effect. Alternatively, a study could find that divided government decreases legislative productivity, and we could call this a negative effect.

For the purposes of this discussion, the philosophical and arithmetical versions of the word negative have nothing to do with each other. When someone says that you cannot prove a negative, they are making a philosophical claim that you cannot easily prove that something (such as Santa Claus) does not exist. When social scientists (or others) estimate a negative effect, they are making an arithmetic claim that an effect of interest is less than zero. There is no reason to think that proving a negative in the latter sense is any harder than proving a positive. One would use the same statistical techniques to reach either conclusion, and if the test is a good one, the result could go in either direction depending on the true sign of the effect being studied.
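To see the symmetry concretely, here is a minimal simulation sketch in Python. The data are invented for illustration, not drawn from any study discussed here; the point is that the same difference-in-means estimator and t-statistic detect a true effect of -0.3 exactly as readily as a true effect of +0.3.

```python
# Hypothetical simulation: the same statistical test detects negative
# effects just as easily as positive ones; only the sign differs.
import numpy as np

rng = np.random.default_rng(0)

def estimate_effect(true_effect, n=1000):
    """Difference-in-means estimate of a binary treatment's effect."""
    treated = rng.integers(0, 2, n)                 # random 0/1 treatment
    outcome = true_effect * treated + rng.normal(0, 1, n)
    t1, t0 = outcome[treated == 1], outcome[treated == 0]
    diff = t1.mean() - t0.mean()                    # estimated effect
    se = np.sqrt(t1.var() / t1.size + t0.var() / t0.size)
    return diff, diff / se                          # estimate, t-statistic

# Identical procedure, opposite true signs of the effect.
for effect in (+0.3, -0.3):
    est, t = estimate_effect(effect)
    print(f"true effect {effect:+.1f}: estimate {est:+.2f}, t = {t:+.1f}")
```

In both runs the estimate lands near the true value and the t-statistic is large in magnitude; nothing about the arithmetic makes the negative case harder to prove.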

Many judges, including Chief Justice Roberts, don’t understand this.

Sadly, judges often conflate the philosophical and arithmetical definitions of the word negative. Consider another recent election law case, Arizona Free Enterprise v. Bennett. The state provided money to publicly funded candidates to match private contributions. The plaintiffs wanted to strike this policy down, claiming that it had negative effects on private political speech. A team of political scientists used quantitative techniques to assess this claim (see here and here), and they found no evidence to support it. Roberts, writing the majority opinion, briefly mentioned these findings but dismissed them, declaring “it is never easy to prove a negative.” His argument appears to be that it was inherently hard for the plaintiffs to show that the state’s matching scheme had negative consequences for free speech and that we should, therefore, ignore the statistical evidence that found no relationship. However, there’s no reason that negative effects are harder to quantitatively detect than positive ones, and Roberts’s statement is based on a blatant logical fallacy.

This is a common mistake.

The negative effect fallacy is not unique to Arizona Free Enterprise or election law. The mistake appears to have originated with the Supreme Court’s 1960 decision in Elkins v. United States, and it has been repeated many times by the Supreme Court and lower courts across many legal domains, including criminal law, free speech, voting rights and campaign finance. Our investigation explains the negative effect fallacy in more detail, documents its use in federal courts, and provides recommendations regarding the use of quantitative evidence in court decisions.

The troubling conclusion of this analysis, coupled with additional anecdotes like the “gobbledygook” assertion, is that the Supreme Court regularly dismisses relevant quantitative evidence based on vacuous statements, gut reactions, and logical fallacies. This is worrying for several reasons. Evidence that might have led to better decisions is regularly ignored. Even when these fallacies and gut reactions have little effect on the Court’s decisions, they hide the true rationale behind judicial decisions, undermining transparency and hindering the ability of future citizens, litigators and judges to navigate and interpret the law.

Supreme Court justices should be among the nation’s leaders in evaluating evidence and thinking logically. They should be setting good examples and pushing social scientific discourse forward, but in some cases, they do the exact opposite.

It is hard to know why the greatest legal minds in our nation fall prey to a logical fallacy like the negative effect fallacy and insist that relatively simple quantitative methods are gobbledygook. Perhaps they don’t understand the evidence, in which case we could eventually solve this problem with increased quantitative training in law schools or with independent quantitative experts tasked with assisting the justices. Another explanation is that they don’t like the quantitative results, don’t have any other good arguments against them, and just want a convenient way to dismiss the evidence. Political scientists studying courts often conclude that judicial decisions are driven more by judges’ ideologies than by the evidence or legal principles (e.g., here), and if this is right, fallacious but scientific-sounding arguments give judges the leeway they need.

A more charitable interpretation of the negative effect fallacy and similar tactics is that judges have good reasons for dismissing evidence but they fail to articulate those reasons. A judge might reasonably argue that empirical research on the effects of laws is difficult. It requires estimating what would have happened in some counterfactual scenario that we can never observe. Any quantitative analysis rests on assumptions, and researchers within a particular field often disagree amongst themselves about what assumptions, methods and results are defensible. Therefore, if a judge is not an expert on quantitative analysis, she may have little choice but to largely ignore the evidence and revert to her intuition, ideology and principles.

Whether judges like it or not, quantitative evidence is not going away. Despite its limitations and imperfections, quantitative data analysis is often the best way to answer difficult empirical questions. Does a new voting technology disproportionately affect a particular racial group? Does a campaign finance regulation restrict free speech? Does a state’s redistricting plan deviate from a reasonable standard of fairness? Courts will have to consider many questions like these going forward, and social scientists will continue to collect data and develop methods to help them. If judges want to make informed decisions in these cases, they will have no choice but to engage with quantitative evidence and evaluate it on its merits.

Anthony Fowler (anthony.fowler@uchicago.edu) is Associate Professor in the Harris School of Public Policy at the University of Chicago. He applies econometric methods for causal inference to questions in political science, with particular emphasis on elections and political representation.

This article is one in a series supported by the MacArthur Foundation Research Network on Opening Governance that seeks to work collaboratively to increase our understanding of how to design more effective and legitimate democratic institutions using new technologies and new methods. Neither the MacArthur Foundation nor the Network is responsible for the article’s specific content. Other posts in the series can be found here.