
On the ethics of Facebook experiments

July 3, 2014

This Jan. 30, 2014, file photo taken in Washington, D.C., shows the splash page for the social media Internet site Facebook. (Karen Bleier/AFP/Getty Images)
Joshua Tucker: The following is a guest post from University of North Carolina political scientist Timothy J. Ryan.
*****
Facebook found itself in the hot seat once again this week following the publication of a study that experimentally manipulated the content of more than 600,000 users’ newsfeeds. The study finds that increasing positive content in users’ newsfeeds makes them post more positive content themselves. Likewise, increasing the amount of negative content a user sees increases the number of negative posts.
The new study raised ethical concerns. (See here, here, here, and here.) In particular, commentators have objected that:

  1. Facebook did not, except perhaps via an oblique reference in its terms of service, obtain informed consent from the users who were in the study.
  2. Its experiment might have caused users to feel emotional distress.

Some of the commentary has been alarmist. Writing in the New York Times, for instance, Jaron Lanier suggests that the Facebook study might have pushed someone with suicidal leanings over the brink. He also likens the research to a pharmaceutical firm sneaking a drug into consumers’ drinks, an analogy that is misleading for reasons I explain below.
These are important concerns. With more and more tools to conduct experiments on a mass scale, it is worth having a conversation about how to protect subjects’ welfare. However, because the backlash has the potential to raise new obstacles to human subjects research, I wish to highlight three considerations that have been overlooked in the conversation about what Facebook did.
First, businesses conduct randomized tests similar to Facebook’s all the time. In the industry, it’s called A/B testing, and it is done with an eye toward increasing traffic, customer satisfaction or some other outcome. Google makes a popular application that lets almost anyone with a Web site do it. Facebook’s test is receiving scrutiny because of a seemingly noble additional step: It made the results of its proprietary research public. As University of Michigan economist Justin Wolfers tweeted Wednesday, this heightened scrutiny for academic research, compared with purely commercial research, creates a perverse incentive that pushes against publishing results for public benefit.
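For readers unfamiliar with the practice, here is a minimal sketch of what an A/B test involves, written in Python purely for illustration (the variant names, sample size and click-through outcome are hypothetical, not drawn from Facebook’s or Google’s tools): visitors are randomly assigned to one of two versions of a page, and the outcome of interest is then compared across the two groups.

```python
import random
from statistics import mean

# Illustrative A/B test sketch. The variants and the click-through outcome
# are invented for this example; a real test logs actual user behavior.

def assign_variant() -> str:
    """Randomly assign a visitor to variant A or B."""
    return random.choice(["A", "B"])

outcomes = {"A": [], "B": []}  # 1 if the visitor clicked, 0 otherwise
for _ in range(10_000):
    variant = assign_variant()
    # Simulate a small difference in click-through rates between variants.
    click_rate = 0.10 if variant == "A" else 0.11
    outcomes[variant].append(1 if random.random() < click_rate else 0)

print("Variant A click-through rate:", round(mean(outcomes["A"]), 3))
print("Variant B click-through rate:", round(mean(outcomes["B"]), 3))
```

Because assignment is random, any reliable difference between the two groups can be attributed to the change being tested, which is the same logic the Facebook experiment relies on.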
Second, there is a perception that all human subjects research requires informed consent, and that the university review boards that approved the study were remiss not to require it. (See James Grimmelmann’s remarks here.) But this is inaccurate. In fact, consent requirements are calibrated to the potential risks and benefits of the study. (Michelle Meyer has an excellent post on the regulatory details here.) This flexibility is a good thing. Some important studies could never be carried out with a consent form that alerted subjects that they were being studied. One example is Robert Cialdini’s research, which tests how various nudges (e.g. a smiley face on an electric bill as a reward for lowering energy usage) encourage energy conservation, national park protection, and other pro-social behaviors. Within political science, Donald Green, Alan Gerber, and many others have conducted hundreds of field experiments on how to increase voter turnout, many of which could never have been run with an informed consent requirement.
What of balancing risks and benefits in the Facebook study? Judging by some reactions to the study, one might think that the “negativity” treatment was jarringly dark or disturbing. In fact, Facebook raised or lowered the probability that potential newsfeed items would be displayed to a user, based on whether the item contained positive words (e.g. love, nice, sweet) or negative words (e.g. hurt, ugly, nasty). (The exact list of words used is proprietary, but there are some details here.) I think this approach would place the Facebook study comfortably in the lowest risk category: studies where “the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life.” Can we rule out the possibility that the study tipped someone, somewhere, to commit suicide, as Jaron Lanier worries? We cannot, but the standard is not zero risk (there is no such thing) but rather mundane, everyday risk.
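As a rough illustration of the mechanics, and only that (the study’s actual word lists are proprietary, and the omission probability below is invented), the manipulation amounts to probabilistically withholding candidate posts that contain words from the targeted list:

```python
import random

# Stand-in word lists; the lists actually used in the study are proprietary.
POSITIVE_WORDS = {"love", "nice", "sweet"}
NEGATIVE_WORDS = {"hurt", "ugly", "nasty"}

OMIT_PROBABILITY = 0.5  # hypothetical chance of withholding a targeted post


def contains_any(post, words):
    """Return True if the post contains any word from the given set."""
    tokens = post.lower().split()
    return any(word in tokens for word in words)


def filter_feed(candidate_posts, reduce_category):
    """Probabilistically omit posts containing words from the targeted
    category ("positive" or "negative"); other posts are shown as usual."""
    target = POSITIVE_WORDS if reduce_category == "positive" else NEGATIVE_WORDS
    displayed = []
    for post in candidate_posts:
        if contains_any(post, target) and random.random() < OMIT_PROBABILITY:
            continue  # this post is withheld from the displayed feed
        displayed.append(post)
    return displayed


posts = ["What a nice day", "Traffic was nasty this morning", "Lunch was fine"]
print(filter_feed(posts, reduce_category="negative"))
```

Nothing in a manipulation of this kind injects new content into a user’s feed; it only adjusts which of the posts a user would already see are displayed, which is why the treatment plausibly stays within the range of ordinary day-to-day variation.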
There is a good reason to allow at least some risk in human subjects research: The research generates benefits. In the case of the new Facebook study, it helps researchers understand how social comparisons influence people’s happiness. Where previous studies had suggested that seeing happy friends makes people sad (because they feel envious), the Facebook study adds an important counterpoint. Future work will have to untangle this apparent inconsistency, which we would never know about if the results had not been made public.
Third, it is important to remember that Institutional Review Boards are not the only constraint that applies to research. There are also legal constraints. This is what is inapt about Lanier’s “drug-in-drink” analogy. Putting aside the complex rules that regulate human subjects research, a person who ran a “drug-in-drink” study could be prosecuted for breaking the law. To my knowledge, nobody is arguing that Facebook broke any laws.
I am not an impartial participant in this discussion. I ran my own emotion-manipulating Facebook studies, related to political advertising, years ago. (They generated no outcry that I heard about.) I purchased advertisements on the Web site and examined how the emotional content of the ads influenced click-through rates, the sort of research that nobody would bat an eye at if it were being done purely for business purposes. The work did not solicit informed consent from the subjects. (There were more than 4 million of them, so that would have been difficult, to put it mildly.) Nevertheless, it was approved by an Institutional Review Board. In addition to some of the factors above, my application for Institutional Review Board approval noted that Facebook users have an expectation that information about whether or not they click an ad goes to the ad purchaser. Other social scientists have fielded manipulations on Facebook without informed consent, too (see here and here), and we have learned from this research.
Subjects’ well-being should be at the forefront of any researcher’s mind, whether the research is academic, commercial or a hybrid. But we should pay attention to both risks and benefits, and we should think twice when academic researchers, who generate research for public consumption, are held to a higher standard than companies conducting research to help their bottom line.