
Why do Facebook and Twitter’s anti-extremist guidelines allow right-wingers more freedom than Islamists?

Relying on governments to designate organizations as terrorists means “de-platforming” is always political.

July 31, 2019

When the Islamic State started to use social media heavily a few years ago, big platform companies such as Facebook and Twitter responded with efforts to track and remove its content. Now politicians are calling on social media companies to use those tools to regulate all kinds of terrorist content. Social media companies’ responses have been uneven. Many observers question why these companies appear to tolerate content from violent far-right organizations. At the TED 2019 conference, moderator Chris Anderson asked Twitter CEO Jack Dorsey, “A lot of people [are] puzzled why, like, how hard is it to get rid of Nazis from Twitter?”

The answer may have more to do with government counterterrorism priorities than with social media companies’ own policies. Social media companies often default to governments’ understandings of who should and should not be considered a terrorist threat.


Here’s how Facebook and Twitter define terrorist content that they will monitor and regulate.

Over the past several years, both Facebook and Twitter have aggressively stepped up efforts to monitor and ban users who promote or engage in hate-fueled violence — that is, who post content often referred to as “terrorist.”

Facebook defines a terrorist organization as “any non-governmental organization that engages in premeditated acts of violence against persons or property to intimidate a civilian population, government, or international organization in order to achieve a political, religious or ideological aim.” Twitter says it monitors and removes posts from sources that fall under “national and international terrorism designations,” as well as from “violent extremist groups” that self-identify as extremist, engage in violence, and target civilians.

These rules would seem to suggest that Facebook and Twitter could remove a considerable amount of content from far-right organizations. But their enforcement of those guidelines is complicated. Although Facebook reports that it is getting much better at identifying content from Islamist extremists, its tools for identifying hate speech — the category into which far-right content often falls — still lag behind. Twitter, meanwhile, has been criticized by journalists and politicians for not devoting as many resources to removing, or “de-platforming,” white supremacist accounts as it has to Islamic State accounts.


The definitions are hard — and politically contested

Although Facebook and Twitter’s definitions of terrorism are very broad, the companies have employed those definitions selectively. For international groups, Facebook has relied on the United States’ Foreign Terrorist Organization (FTO) list. After the Trump administration designated the Islamic Revolutionary Guard Corps (IRGC) a terrorist organization in April, Facebook and Instagram removed several IRGC members’ profiles, raising concerns about the platforms’ quick acquiescence to the U.S. government’s unprecedented and controversial decision to designate part of a foreign military as a terrorist group.

Defaulting to government designations is common. After Canada designated the neo-Nazi group Blood & Honour as a terrorist organization, Facebook removed the page of the group’s Canadian branch. In April, Facebook similarly banned several far-right organizations in the United Kingdom after the government proposed fining tech companies that did not remove “unlawful” content.

This deference to government designations creates two problems. First, social media companies may be advancing the political interests of governments rather than fighting violence on their platforms. Research shows that the U.S. often places organizations on the FTO list — or leaves them off — for political reasons. This means some violent organizations that are very similar to designated terrorist organizations remain unlisted. For instance, the Haqqani Network, an insurgent organization operating in Pakistan and Afghanistan, remained unlisted until 2012 despite its close cooperation with al-Qaeda. Given the Haqqani Network’s links to Pakistan’s state intelligence agency, listing the network would have required naming Pakistan a state sponsor of terrorism. That would have been politically awkward, as the United States cooperates with the Pakistani government on counterterrorism and counterinsurgency operations.

Politically motivated designations aren’t limited to the United States. Russia added journalist Svetlana Prokopieva to its official list of terrorists and extremists without bringing any charges against her. If social media companies continue following government designations, individuals and organizations whom governments simply do not like might be banned as terrorists.

Second, relying on government designations can create enforcement gaps. Unlike Canada and the United Kingdom, the United States has no domestic equivalent of the FTO list. While Twitter and Facebook de-platform white supremacists in other countries, they cannot rely on U.S. designations to do the same at home, even when such groups call for or engage in violence of the kind that violates the social media companies’ guidelines. How should social media companies regulate content that is legally proscribed in one country but not another?

Germany provides one example. When authorities in the state of Lower Saxony banned Besseres Hannover (Better Hanover) in 2012, Twitter blocked the group’s accounts in Germany only. Numerous neo-Nazi and far-right groups are illegal under German law and also qualify as “terrorist” under social media platforms’ definitions. But when I interviewed German bureaucrats and security professionals in Berlin this summer, they told me that the rise of the far-right Alternative for Germany (AfD) party means that far-right views are becoming more acceptable in public discourse. As one person put it, “Things that were not sayable two years ago are sayable now.”

If social media companies continue to take cues from governments about what organizations are or are not terrorist threats, changing political priorities could further complicate efforts to ban violent far-right content.

The controversy isn’t going away

Private companies have long been involved in national security. But here’s what’s new: Social media companies now have the opportunity to set international standards about what does and does not count as “terrorism” online. The United Nations Special Rapporteur for human rights and counterterrorism has already expressed concerns about Facebook’s potential to set global standards — a task typically reserved for governments working through the U.N. or other international organizations.

But the definitions may not be the main problem. Enforcement continues to reflect U.S. and European ideas that terrorism is a foreign problem rooted in Islamist extremism, rather than a type of violence stemming from any number of political ideologies. Facebook and Twitter are reinforcing common understandings of what does and does not count as terrorism, with consequences for which security threats we take seriously.


Anna Meier (@annameierPS) is a PhD candidate in political science at the University of Wisconsin-Madison.