
Was the Facebook emotion experiment unethical?

- July 1, 2014

There has been extensive news coverage of Facebook’s manipulation of news feed content to alter the balance of positive and negative words, and of the accompanying academic paper published in the Proceedings of the National Academy of Sciences. As two researchers who study emotions, we have watched the fallout from this study with great interest.
While many critics have wondered whether the study’s researchers had approval from a university Institutional Review Board (IRB), the watchdog body that oversees faculty research involving human subjects (the short answer is: sort of), this focus is misguided. IRBs are not guardians of ethics; they are protectors of institutional legal liability. Many academics can attest that IRB standards vary by institution: a proposal that sails through at one institution can take months at another. One should not confuse the IRB process with ethical deliberation.
If we can agree that evaluating the Facebook research isn’t as simple as determining whether the researchers had IRB approval, then what else might we need to consider?
Was there consent?
Facebook argues that users offer consent when they sign up and agree to the data use policy (although that policy underwent changes after data collection). But Facebook could have offered users a clearer opt-in message, asking them to volunteer for a study. Its enormous user base gives Facebook an enviable pool from which to draw, and just imagine the incentives it could offer (two weeks with no ads upon completion? Never, ever having to see the office yoga pants ad again?). Sure, the people who agree to participate might differ from those who don’t in important ways, but the researchers don’t claim a representative sample anyway; they are interested in the differences between treatment groups and a control group. Facebook could have recruited a large sample of users who offered more explicit consent to participate in studies, and that would have alleviated much of the grief it now faces.
Was there deception?
Facebook assumes that we as users know that it continually tinkers with our feeds, so that manipulating content to feature a steady stream of positive or negative news is normal business. The response to the study suggests that people at least felt deceived. We have deceived subjects in several of our own studies: We tinker with the content of political ads to make them more or less threatening (with music and visuals), and we alter the content of mock news stories. Is lying to subjects unethical? Not necessarily. But we debrief, meaning that we tell subjects what the deception was and explain the purpose of the study. Facebook study participants were never debriefed; their feeds just went back to normal, whatever that is. One fairly simple action on Facebook’s part would have been to send participants an e-mail explaining their involvement in the study and the purpose behind it. Again, this would have been an easy way to reduce some of the public backlash.
Did the Facebook study cause harm?
We don’t know. The authors claim that they increased negative or positive emotions, and that reducing the emotional content of users’ feeds made people post less. The discussion of harm points to a broader critique of the research.
Sometimes studies turn out to have ethical issues researchers don’t anticipate. In one of our early studies (flawed in several respects), we ran a manipulation in which we asked subjects either to describe a day that made them anxious (treatment condition) or to describe a typical day (control condition). In the treatment condition we anticipated fairly innocuous undergraduate worries: missing a bus, reading the wrong chapter for class. But one of our subjects wrote a fairly vivid account of his or her most harrowing day while serving in Iraq. In retrospect, this was shortsighted of us. We could have encountered stories of violence, self-harm or sexual assault, and we did not think through these possibilities.
Some critics of the Facebook experiment invoked examples of people who, within days of the experiment, committed suicide as a result of cyber-bullying. Was Facebook responsible? That seems unlikely, but the study’s researchers should have considered what emotions and actions their manipulations might trigger in participants. Here, too, debriefing can help people understand what was real and put their reactions in context.
Did the study actually produce the results the authors said it did?
People whose feeds had more positive words posted more positive words, and people whose feeds had more negative words posted more negative words. Did the researchers really shift people’s emotional experiences, as the authors claim? It’s unclear. They also claim that “emotional contagion” led “people to experience the same emotions without their awareness.”
This strikes us as a particularly overstated claim. People didn’t notice that their feeds became more positive or negative? There is no evidence on this point. The claims go far beyond the data, so in addition to being ethically questionable, the study overstates its findings.
Political scientists Bethany Albertson (University of Texas at Austin) and Shana Gadarian (Syracuse University) are authors of a forthcoming book, “Anxious Politics: Democratic Citizenship in a Threatening World.” Their book includes 14 studies, many of which include experimental manipulations of anxiety and cover emotionally laden topics such as immigration and terrorism.