
How to make field experiments more ethical

November 2, 2014

Social science researchers are increasingly using field experimental methods to try to answer all kinds of questions about political processes and public policies: how to combat corruption, how to reduce prejudice and discrimination, how to render government more accountable, how to reduce violence against women, and how to ensure that development aid is well spent. Traditional “observational” methods observe the world as it comes to you; the idea at the heart of the experimental approach, by contrast, is that you learn about the world by seeing how it reacts to interventions.
Recent concerns over the ethics of a political science field experiment in Montana have led to a lot of questioning inside and outside the discipline about the ethics of field experimentation. In thinking about the issues involved in this and other experiments, there are two key points.
First, social science researchers rely on principles developed by health researchers that do not always do the work asked of them. These principles of respect for persons, justice and beneficence have provided a touchstone for social scientists. But because of differences in the nature of what is studied, and the nature of relations between researcher and subject, the standards developed by health researchers do not always seem well suited for social scientists. Unlike many health scientists, social scientists are commonly working on problems in which:

  1. Those most likely to be harmed by an intervention are not the subjects (for example, when researchers are interested in the behavior of bureaucrats whose decisions affect citizens, or in the behavior of pivotal voters, which in turn can affect the outcome of elections).
  2. Researchers are interested in the behavior of institutions or groups, whether governmental, private sector or nongovernmental, and do not require information about individuals (for example, if you want to figure out if a government licensing agency processes applications faster from high-caste applicants than from low-caste applicants).
  3. Subjects are not potential beneficiaries of the research and may even oppose it (for example, for studies of interventions seeking to reduce corruption in which the corrupt bureaucrats are the subjects).
  4. Consent processes can compromise the research (for example, for studies that seek to measure gender- or race-based discrimination of landlords or employers).
  5. There is disagreement over whether the outcomes are valuable (compare finding a cure for a disease to finding out that patronage politics is an effective electoral strategy).

These five features can sometimes make the standard procedures used by Institutional Review Boards for approving social science research irrelevant or unworkable.
The first two differences mean that formal reviews, as currently set up, can ignore the full range of benefits and harms of research, or fail to cover the research at all. Formal reviews focus on human subjects: living individuals about whom investigators obtain data through intervention or interaction, or about whom they obtain identifiable private information.
The third and fourth, which again focus on subjects rather than broader populations, can quickly put the principles of justice and respect for persons — two of the core principles elaborated in the Belmont report (upon which standard review processes are based) — at odds with research that otherwise may seem justifiable on other grounds.
The fifth difference can make the third Belmont principle, beneficence, unworkable, at least in the absence of some formula for comparing the benefits to some against the costs for others. This leaves researchers in a difficult situation, at least if they care about the ethical implications of their research designs above and beyond whether they receive formal research approval.
So how to determine in practice whether ethical lines are crossed (whether or not research gets formal approval from Institutional Review Boards)? By this question I don’t mean what is in some sense the objectively right or wrong way to behave, but simply what behavior meets the standards that we would like the public to expect of us as researchers, and that we, as researchers, would like to be able to expect of each other. These expectations should reflect a combination of what behavior we feel morally comfortable with and expectations that will make it possible to do our work well. This is the idea of professional ethics.
At the moment, those expectations are not clear. In health research the expectations were formed through a joint deliberation with health practitioners and representatives of the general public. Social scientists have had no such process.
But we can make progress. This leads to the second point: Although there are no hard and fast rules, there are principles that can guide the ethical implementation of field experiments. If we take it as given that research should not break the law and that conflicts of interest should be appropriately addressed, I think it is useful to think through the following four questions in this order:

  • Question 1 [Agency]: Is the researcher responsible for interventions that manipulate people?
  • Question 2 [Consent]: Is it done without their consent?
  • Question 3 [No Harm]: Is it likely to bring harm to some people?
  • Question 4 [Net Benefits]: Do possible harms outweigh the benefits?

Ideally you would want to answer no to all four of these; if you answer yes to all of these, you’re in trouble. The tricky cases are when you can answer yes to some but no to others. What then? What combinations of answers to these questions can be used to justify a research design?
Let’s take the questions in order.
Agency. In classic health and agricultural experiments, the researcher sets up and implements the intervention, often as a trial. But in many social science field experiments (or randomized interventions, or policy experiments), the intervention is initiated and implemented by someone else, such as a government, an NGO or a political party.
In these cases, researchers might advise on ways that an intervention is implemented to maximize learning, but the responsibility for the intervention is borne by some other actor. In some cases, these partnerships are essential — for example, if an intervention can be implemented only by a government or a large organization. Working with partners can have the advantage of increasing the realism and relevance of the intervention, ensuring that you are not just studying yourself (though, conversely, it also can limit the types of questions that can be answered).
But even if they are not critical for implementation, partnerships can simplify the ethics. The decision to implement is taken not by the researcher but by an actor better equipped to assess risks and to respond to adverse outcomes. Real risks may be reduced, but so might risks to the profession arising from public fears that researchers are using people as guinea pigs. In these cases, while there is learning, there are no guinea pigs, since the research follows the intervention rather than the other way round.
Obviously, the partnership approach doesn’t help much if the partner organization itself acts unethically, or if the partnership is just a front for a researcher who swaps back and forth between the hats of practitioner and researcher. Moreover, partnering can raise new ethical issues if, for example, researchers thereby lend legitimacy to organizations or governments that are themselves involved in corrupt or abusive practices. The general take-away is that if, through partnerships, you can answer no to Question 1, then things can get a lot easier. The specific question that needs clarification is when it is appropriate to partner with a given third party.
Consent. The emphasis on consent comes from a concern with respect for persons. The goal is to minimize interference with the autonomy of others. But consent, like partnerships, also provides a way to share responsibility with others affected by the research.
If everyone consents, then it is hard to mount an ethical challenge. Moreover, consent provides information regarding harm from the subject’s perspective. Conversely, when deception is used, it may damage relations of trust between a research community and the public, which can both weaken the quality of the research and make it harder for others to do good research in the future. Gains from consent can be compromised, however, if consent is less than fully informed, is coerced, or is extracted from vulnerable populations, or if deception is used.
The tricky issue is that in some social science research it is ambiguous whose consent is needed. If an intervention seeks to reduce violence against women, consent may be sought from the women but not from the (directly affected) aggressors. If consent is obtained from voters to receive truthful information about corruption by political candidates, is the consent of those (indirectly affected) political candidates also required before this information is handed out? Should they have a veto?
Again, the main point here is that you want a no to Question 2 whenever possible. The more specific question that needs clarification, though, is when, if ever, it is appropriate to bypass consent procedures for some of the people affected by social science research.
No harm and net benefits. If you have already run into problems with the agency and consent principles, then everything comes down to the no harm and net benefits principles. But these are hard ones.
Unlike much health research, politics is about winners and losers, and so a lot of political interventions create losers, whether they are subjects or third parties. For instance, an intervention that succeeds in increasing the participation of the poor in politics may weaken the position of the rich. If the intervention does no harm to anyone, then perhaps there is not much at stake — even if it is done without consent.
Of course, weak interventions are less likely to do harm, but the point of the interventions is to try to observe substantively significant effects (see here also). So often there is at least some probability that a strong political intervention will do harm to someone somewhere. The issue here is whether you are willing to do it for research reasons alone.
Perhaps you might if you can answer Question 4 and satisfy the net benefits principle. But for social and political applications you might have a very hard time answering this question, a harder time justifying any answer you give, and a harder time still explaining on what grounds an answer could even be constructed (see, for example, the huge body of work on this following Arrow’s contributions). Figuring out principles for determining net benefits in the face of value disagreements will be a hard challenge for the discipline.
Taken together, these four principles are more nuanced than simply telling researchers to “leave no trace” on political outcomes or to ensure only that they break no laws. Experiments must leave traces. The whole point of field experiments is to change the world, even if it is often in small ways.
If one ruled out all research that changed the world one would be ruling out trials that assess whether drugs work, how best to help kids learn and how to use aid effectively. These all use interventions. The real ethical challenge is to figure out when it is defensible to change the world. An especially difficult issue for political scientists is how to think about research that can affect electoral outcomes (though the problem is much more general than this).
Some wonder why researchers should worry about affecting elections at all, since so many other groups try to do just that all the time; aren’t researchers citizens, too? But this position misses the point of research ethics — which is to specify what behavior researchers can expect of each other, and the public can expect of researchers, beyond what the law already prevents them from doing.
Others argue that studies should never have effects on real elections, since these are core to democratic practice. Still others argue that research should engage only in elections in which outcomes are not likely to be affected – or, more precisely, in which the electoral outcome is unlikely to change even if the votes of some individuals do. These positions focus on the challenges of determining harm and net benefits rather than on the agency and consent principles. But thinking about agency and consent might sometimes justify less conservative positions.
For example, it is often possible to use a design to study someone else’s meddling in elections instead of meddling yourself. There are a lot of people meddling in elections – or simply participating – often at a much larger scale than the meddling of researchers, and as a researcher you might be more interested in the effects of their actions than the effects of yours. Partnerships between researchers and political parties or campaigns can, in these cases, be of mutual benefit. In these cases, election outcomes might be affected, but it is the political actors that do the meddling while the researchers figure out how the meddling matters.
Researchers could also design interventions to be consistent with the consent principle: Interventions that seek to inject expert information into public debate around elections can sometimes be implemented using an entirely transparent design, with consent and no deception. Voters could be told that researchers are interested in understanding how access to the information will affect their decision to vote (or whatever the exact hypothesis is). See Ben Lauderdale on this here. Again the outcomes might change, but any change comes about only through the decisions of informed voters.
As field experiments become more common, instances of excellent research and of ethically troubling research will come and go. The conversation the social sciences need now is not about these individual studies but about the broader lack of clarity around ethical issues and the disconnect between formal university approval processes and the ethical issues that researchers and citizens care about.
If the current system allows for research that breaks the trust of the public, or if it inappropriately hinders valuable research, then it needs fixing. In its place there could be a process that is led by university administrators worried about lawsuits, or one led by a federal government worried about abuses of liberties. But it would be better if it were an open process led by researchers seeking basic standards for research that we can expect of one another and ask the public to expect of us.
Until such agreement is reached, here’s my advice for researchers. Work through partnerships when you can. If you cannot and if instead you implement an intervention that can do harm to someone, then get consent from affected parties — not just human subjects — instead of trying to make an argument about net benefits. If getting consent compromises the research, then change topic.
Macartan Humphreys is a professor of political science at Columbia University.
