
Facebook has an invisible system that shelters powerful rule-breakers. So do other online platforms.

Is it fair to be unfair, as long as you’re open about it?

September 17, 2021

Last week, the Wall Street Journal published Jeff Horwitz’s investigation into the inner workings of Facebook — with some troubling findings. Internal documents suggest that Facebook’s top management dismissed or downplayed an array of problems brought to their attention by product teams, internal researchers and the company’s own Oversight Board. These include a report on what is known as the XCheck program, which reportedly allowed nearly any Facebook employee, at their own discretion, to whitelist users who were “newsworthy,” “influential or popular” or “PR risky.” The apparent result was that more than 5.8 million users were moderated under different rules than ordinary Facebook users, or were hardly moderated at all.

This system of “invisible elite tiers,” as the Journal describes it, meant that the speech of powerful and influential actors was protected while ordinary people’s speech was moderated by automated algorithms and overworked humans. As our research shows, that’s not surprising. Other platforms besides Facebook enforce different standards for different users, creating special classes of users as part of their business models.

Unequal and opaque standards can breed suspicion among users

In a recent research article, we explain how another important platform, YouTube, takes what we call a “tiered governance” approach, separating users into categories and applying different rules to each category’s videos. YouTube distinguishes among such categories as media partners, nonprofits and governments. Most important, it distinguishes between “creators” who get a slice of its ad revenue and ordinary users. Even among those paid creators, YouTube has a more subtle array of tiers according to popularity.

Facebook’s program began as a stopgap measure to avoid the public relations disasters that might happen if the platform hastily deleted content by someone powerful enough to fight back, such as a sitting president. YouTube’s program began when it created a special category of paid creators, the YouTube Partner Program, to give popular YouTubers incentives to stay on the site and make more content.

YouTube then began to create more intricate tiers, providing the most influential creators with special perks such as access to studios and camera equipment. An elite few had direct contact with handlers within the company who could help them deal with content moderation issues quickly, so that they didn’t lose money. But things changed when advertisers — YouTube’s main source of revenue — began to worry about their ads being shown together with offensive content. This drove YouTube to adjust its policies — over and over again — about which creators belonged to which tiers and what their benefits and responsibilities were, even if the creators didn’t like it.

Creators were understandably frustrated as these arrangements seemed to keep shifting under their feet. They didn’t object to different rules and sets of perks for different tiers of creators, but they did care that the whole system was opaque. Users like to know what to expect from a platform: whether it will enforce its guidelines, and how much compensation it will provide. They didn’t like the unpredictability of YouTube’s decisions, especially since those decisions had real social, financial and reputational impact.

Some grew frustrated and suspicious about the platform’s real motives. Opacity and perceived unfairness fueled conspiracy theories about why YouTube was doing what it was doing. Creators who couldn’t tell whether YouTube’s algorithms had demonetized or demoted their videos began to worry that they were being penalized for their political leanings. This led to anger and despair, worsened by YouTube’s clumsy appeals system. And it gave fodder to those eager to accuse YouTube of censorship, whether or not the accusation was true.

It’s fair to be unfair, as long as you’re open about it

Social media companies such as YouTube and Facebook have suggested that their platforms are open, meritocratic, impartial and evenhanded. This makes it hard for them to explain why they treat different people differently. However, other systems for adjudication make distinctions, too. For example, criminal law takes into account whether the accused is a child, impaired, a repeat offender, under the influence, responding in self-defense or under justifiable duress.

Similarly, there are plausible reasons platform companies might want to treat different tiers of users in different ways. For example, for postings about the coronavirus, it made sense to set different rules for sources that had established themselves as trustworthy. To decrease the spread of misinformation or harassment, platforms might reasonably want to impose higher standards, rather than lower ones, on users who had many followers, who held political office and thus had special obligations to the public, or who paid or received money to post.

But YouTube’s experience suggests that clarity about why different users are treated differently matters for public perception. When a company such as Facebook discriminates between different tiers of users just to avoid offending powerful people and mitigate possible PR disasters, observers will treat that reasoning as less legitimate than if the company were trying to hold the powerful to account. This is especially so if the differences are kept hidden from users, the public and even Facebook’s own Oversight Board.

These allegations are likely to breed distrust, accusations of bias and suspicions about Facebook’s intentions.

Robyn Caplan is a researcher at the Data & Society Research Institute. Follow her @RobynCaplan.

Tarleton Gillespie is a senior principal researcher at Microsoft Research, and the author of “Custodians of the Internet” (Yale University Press, 2018). Follow him @TarletonG.