
Why Facebook really, really doesn’t want to discourage extremism

Our research finds outrage is what goes viral — and that makes money

July 13, 2021

Last year, the Wall Street Journal reported that Facebook executives had shut down internal research showing the platform increased political polarization and had declined to make changes that might have made the platform less divisive.

Why might Facebook be reluctant to reduce polarization on its platform? Our study, recently published in the Proceedings of the National Academy of Sciences, might offer an answer.

Polarizing posts are more likely to go viral

We analyzed nearly 3 million U.S.-based tweets and Facebook posts to examine what social media posts that go “viral” have in common. We specifically looked at political posts, including those by members of Congress or left- and right-leaning media outlets. The results were stark.

The most viral posts tended to be about the opposing political party. Facebook posts and tweets about one’s political out-group (that is, the party opposed to one’s own) were shared about twice as often as those about one’s own political group. Posts about the opposition were almost exclusively negative.

Furthermore, each additional word about the opposing group (for instance, in a Republican’s post, “Democrat,” “leftist” or “Biden”) was associated with a 67 percent increase in shares. This effect was much larger than that of other factors known to increase sharing, such as negative words (“sad” or “mad”) or moral-emotional words (such as “evil” or “hate”). The increase occurred whether posts came from Republicans or Democrats, and whether they were shared on Facebook or Twitter.


These posts inspired mockery and condemnation

For instance, two of the most viral posts in our data set were “Every American needs to see Joe Biden’s latest brain freeze” and “Vice President Mike Pence blatantly lied to reporters about the trajectory of COVID-19 cases in Oklahoma.”

Such posts about the out-group were very likely to receive “angry” reactions on Facebook, as well as mockery (“haha”), comments and shares. Posts about the political in-group did not receive this viral boost, although they were slightly more likely to receive “love” and “like” reactions.

In other words, antagonism toward the opposition was much more likely to go viral than praise or pride for one’s own political team. Indeed, of Facebook’s six possible reactions to a post, “angry” was the most commonly used in our data. This suggests that a regularly stoked sense of outrage may be fueling political polarization and animosity.


U.S. politics is increasingly driven by hate, not solidarity

Our findings may reflect the fact that political identities are increasingly driven by hating the opposition more than by loving one’s own party. Researchers find that out-party hate has risen steadily over the past few decades and is now at its highest level in 40 years. Out-party hate also predicts whom we vote for more strongly than in-party love does. In much the same way, whom we hate captures more attention online than whom we love.


Social media is creating perverse incentives for polarizing content

In a recent detailed article, technology writer Karen Hao described how Facebook’s “content-recommendation models” promote “posts, news, and groups to users in an effort to maximize engagement, rewarding extremist content and contributing to increasingly fractured political discourse.” Since out-group animosity is very likely to go viral, a social media business model that tries to keep us engaged so it can sell advertising ends up rewarding politicians, brands and hyperpartisan media companies for attacking their enemies.

More important, with this business model in place, social media companies are unlikely to find ways to reduce animosity on their platforms.

For example, in November, the New York Times reported that Facebook declined to make permanent a tweak to the news feed algorithm that showed less harmful content in the feed. Why? It reduced the number of times people opened the app.

Such decisions might be helping Facebook’s bottom line: Despite years of controversy, Facebook recently reached a $1 trillion market value.

Facebook has also recently denied that a problem even exists, and has given employees a “playbook” for responding to accusations that it polarizes discussions online. The talking points include the claim that there is little evidence that Facebook causes polarization, even though a recent controlled experiment found a significant reduction in political polarization when Americans logged off Facebook for a month.

Facebook has further argued that polarization is not always bad and sometimes leads to positive outcomes, such as the civil rights movement’s success in expanding voting rights in the 1960s.

But despite that argument, algorithmically amplifying out-group hate could damage society and democracy. Such content may increase engagement and boost profit in the short term, but research finds that people say they do not want politicians to express out-party animus. What’s more, the storming of the U.S. Capitol on Jan. 6 suggests polarizing rhetoric and misinformation on social media can inspire real-world political violence.

However, so long as polarizing content is a fundamental part of the social media business model, it’s unlikely social media platforms will see it as a problem, much less solve it.


Steve Rathje (@steverathje2) is a PhD candidate at the University of Cambridge.

Jay Van Bavel (@jayvanbavel) is an associate professor of psychology and neural science at New York University.

Sander van der Linden (@Sander_vdLinden) is professor of social psychology in society at the University of Cambridge.