
Who’s afraid of killer robots? (and why)

- May 30, 2014

Earlier this month nations gathered in Geneva to commence talks on the ethical issues associated with the development and deployment of autonomous weapons – weapons with the ability to select, acquire and destroy targets without the input of a human operator. Proponents of such systems made arguments on strategic and ethical grounds and claimed existing law was sufficient to govern such weapons. Opponents questioned the ability of such systems to comply with the laws of war. They also invoked deeper ethical considerations such as the human right not to be killed by a machine.
One key point of the debate was the relevance of the “Martens Clause” in the Hague Conventions, which enjoins states to consider the “principles of humanity” and the “dictates of the public conscience” when judging the lawfulness of means and methods of combat not explicitly prohibited, including new weapons. The global NGO coalition in favor of banning these weapons argues that the Martens Clause militates against such weapons, given widespread public concern. Opponents of a ban say the “public conscience” is an inappropriate barometer because public opinion can be fickle, divided and easily manipulated, and, most importantly, because it might rest less on moral principles than on self-interest or irrational fears born of “hype.”
I attended the conference as a consultant to Article36, a humanitarian disarmament NGO associated with the global coalition, to present remarks at a side event. As a social scientist, my goal was not to take a position in the debate about the value of the Martens Clause. Rather, I argued that, to the extent it was said to be relevant, the “public conscience” could and should be operationalized and measured empirically – and that the extent to which the public conscience is driven by the “principles of humanity” could and should be treated as a testable hypothesis. To that end, I presented polling data collected last year as part of YouGov’s Omnibus survey. One thousand Americans were asked their opinions on the potential deployment of autonomous weapons. They were asked whether they would support such a policy on a five-point scale. They were also asked to explain their answers in open-ended comments.
As I described at the Experts’ Conference and have previously noted at Open Democracy, a majority of Americans across the political spectrum oppose such weapons, with “strong opposition” the largest single category. Many are unsure, but those who are unsure favor a precautionary principle against such technology. Both women and men are likely to oppose autonomous weapons, though women are likelier than men to say they don’t know what they think. Opposition to autonomous weapons is predicted by age, education and interest in news and public affairs. Members of the military and veterans, as well as their families, oppose autonomous weapons to an even greater extent than the US civilian population (though families of active duty service-persons are more likely to support autonomous weapons than are the service-personnel themselves).
However, the most interesting part of the survey results for the Martens Clause debate is not the descriptive statistics but the open-ended comments. On what basis did respondents support or oppose autonomous weapons? What moral principles, if any, underlay arguments for and against such weapons? In short, to what extent did the “dictates of the public conscience” rest on the “principles of humanity” rather than on self-interest, hyped-up fears of a robot uprising, or other pedestrian concerns? And is principled reasoning equally distributed between supporters and opponents of such technology?
The first item of note is that while both camps prioritize saving lives, humanitarian thinking per se is largely absent from explanations for opinions in favor of autonomous weapons. Rather, proponents of such weapons overwhelmingly invoke national self-interest: the need to protect “our troops” from harm or “our national security” from robot arms races – arguments invoked as well by analysts and lawyers advocating such weapons. Only a small proportion of surveyed proponents of autonomous weapons systems (AWS) qualify these arguments with concern for foreign civilians. And there is almost no sense among the U.S. public that autonomous weapons might actually be a viable means of reducing war crimes against foreign civilians – though this is a moral argument made by some proponents of AWS and, according to Zack Beauchamp, perhaps the most important question in the debate. Most arguments in favor of AWS by American voters are interest-based arguments resting on the hope of saving American lives (though notably active-duty personnel in the survey did not share this thinking).
Arguments for supporting autonomous weapons systems based on a survey of 1,000 Americans. Tag clouds are frequency distributions of coded text (as is the table). The boxes are examples of passages of text that received the different codes.
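For readers curious about the mechanics: the tag clouds and tables here are, in effect, frequency distributions over hand-applied codes. A minimal sketch in Python – with entirely hypothetical comments and tag names, not the actual coding scheme or pipeline used for this study – shows how coded text becomes such a distribution:

```python
from collections import Counter

# Hypothetical hand-coded responses: each open-ended comment is mapped to the
# tag(s) a human coder assigned to it. Comments and tag names are illustrative.
coded_comments = {
    "We need to protect our troops.": ["self-interest"],
    "Robots can't understand what they're doing.": ["humanity", "moral conscience"],
    "It could be hacked or malfunction.": ["practical risk"],
    "Machines can't tell soldiers from civilians.": ["civilians", "humanity"],
}

# A tag cloud is essentially this frequency distribution of codes.
tag_counts = Counter(tag for tags in coded_comments.values() for tag in tags)

for tag, count in tag_counts.most_common():
    print(f"{tag}: {count}")
```

The coding itself was done by human analysts; the point of the sketch is simply that once codes exist, the “public conscience” becomes something that can be counted and compared across groups.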
Some opponents of autonomous weapons also make practical arguments based on self-interest or national security. For example, they worry about terrorists hacking into such weapons, the possibility that the technology would malfunction or backfire, or the potential for such weapons to be used against Americans by a future tyrannical government. A very small number also cite dystopian fears of a robot uprising. But the vast majority cite a range of complex and nuanced moral and humanitarian concerns as reasons to avoid outsourcing kill decisions to machines even if doing so were in states’ short-term self-interest. Three of the four most common tags for open-ended comments opposing autonomous weapons were “humanity,” “moral conscience,” and “civilians.”

Arguments against autonomous weapons systems based on a survey of 1,000 Americans. Tag clouds are frequency distributions of coded text (as is the table). The boxes are examples of passages of text that received the different codes.
Respondents opposing AWS repeatedly stated the principle that “a human must remain in the loop.” For example: “Humans can make decisions and think critically. Robots can’t understand what they’re doing.” “When human lives are in the cross-fire, people should never be taken out of the decision-making loop.” “The problems that arise require human reasoning.” The kind of “human nationalism” evident in these comments echoes campaigners’ narrative that human judgment is uniquely suited to making ethical decisions and that on principle these decisions should not be outsourced to machines. It also resonates with the global coalition’s emphasis on the concept of “meaningful human control.”
According to respondents, the key human quality machines would presumably lack is a moral conscience. Respondents repeatedly characterized judgment, empathy and moral reasoning as uniquely human traits. For example: “Killing should be subject to conscience which is an attribute machines lack.” “I do not believe in removing empathy or moral action in conflicts. A person knows they are hurting others.” “Robots cannot make moral decisions. Once in combat, robots will only make decisions based on what will promote victory. Sometimes in war it is better to lose ground but save one’s soul.”
Like AWS proponents, critics of such technologies were concerned about the loss of life, but rather than focusing on protecting U.S. soldiers they focused on the potential harm to foreign non-combatants. This echoes the fear of NGOs that autonomous weapons could not comply with the principles of discrimination and proportionality and could pose a risk to civilians. There is no evidence that Americans believe the reverse argument: that autonomous weapons might reduce war crimes by eliminating negative human emotions from the battlefield. Another key concern is moral accountability. One respondent wrote: “I feel that removing the human element is wrong. If you can’t have a person on the other end watching the damage and destruction you are basically washing your hands of the pain that you are causing real people.”
Finally, a portion of responses evidenced what Professor Peter Asaro called in his remarks at the United Nations the “ugh factor”: a visceral sense that such a policy would be “just wrong.” These comments would seem to indicate what treaty drafters meant by the sense that the public conscience might be “shocked” at an idea. Many people stated that you “can’t trust machines.” Others expressed fear, terror, disgust or alarm at the idea. One wrote: “The whole concept is terrifying.” Another said: “It’s creepy and inhumane.” A third: “It’s sick.”
These data are limited to Americans’ views due to the limited scope of this preliminary survey. Of course, Americans do not speak for the rest of the world, nor should their views be uniquely weighted in this important ethical debate. But these initial data show that the “public conscience” can indeed be measured and coded, whether in the U.S. or cross-nationally. In America, at least, those thinking in terms of morality and conscience, including uniformed personnel, are unenthusiastic to say the least – morally “shocked” might be more accurate – about the idea of armed autonomous robots. And far from being based on “hype,” these positions rest on deeply moral arguments in which the principle of “humanity” and human control of lethal technology is paramount.
Charli Carpenter is Professor of Political Science at University of Massachusetts-Amherst and blogs at Duck of Minerva. She is the author of ‘Lost’ Causes: Agenda Vetting in Global Policy Networks and the Shaping of Human Security.
