
Conservatives say Google and Facebook are censoring them. Here’s the real background.

The social media giants say they don’t want to regulate political speech. But they already are.

July 31, 2019

The White House hosted a July 2019 Social Media Summit, inviting about 200 conservatives and right-wing activists to discuss their allegations that Facebook and Twitter censor their messaging. The summit capped a year of charges by the right that Silicon Valley tech firms have a liberal bias.

But these charges come in the face of considerable evidence that conservative news outlets outperform others on social media. Last week, the charges turned bipartisan. Rep. Tulsi Gabbard (Hawaii), a Democratic presidential candidate, filed a $50 million suit against Google, alleging that a temporary account suspension infringed on her free speech.

Why do these censorship charges persist?

Our research reveals one likely reason: Facebook and Google don’t make clear their guidelines for accepting or rejecting paid political content, the reasoning behind those decisions, or the process by which they make them. This lack of transparency may lead outsiders to believe the worst, especially when tech chief executives make political statements that don’t accord with their prospective customers’ beliefs.

Facebook and Google’s rules are vague but important

Over the past two years, we interviewed former employees of Facebook and Google and political practitioners from campaigns, political organizations and digital political consultancies. We also analyzed email exchanges between Facebook and campaigns to investigate how these firms moderate paid political speech such as campaign advertising. We focused on advertising, or paid content — the domain where these companies are likely to have the most formal policies and transparency around their decisions.

But Facebook and Google seldom disclose much about how they make decisions on moderating political content. Both firms require campaigns to adhere to a broad set of advertising standards that can be interpreted very flexibly. For example, Google bans “inappropriate content” such as “intimidation” and “discrimination,” but it says nothing about what these things mean in practice.

Here’s one example. Political practitioners told us that advertising that focuses on the politics of assault rifles, for or against, might run afoul of Google’s rules. At Google, algorithms vet most advertising for “inappropriate content.” When an algorithm flags an ad, it then goes to human reviewers. If reviewers reject the ad, they give very little explanation — failing to clarify, for instance, why an ad about the politics of assault rifles counts as “inappropriate content.” As a result, campaigns don’t know how to design ads that meet the standards; that limits the range of political topics on which politicians can campaign.

Keeping the rules vague allows these firms maximum flexibility to interpret their own rules. The campaign staffers we interviewed reported that company representatives generally do not explain or justify these decisions. Journalists, researchers and individuals who have an interest in how platforms moderate political speech remain largely in the dark.

The lack of transparency makes it hard for campaigns to contest any decision to turn down an ad. When we visited the offices of a prominent conservative organization, senior staffers showed us ads rejected by platform companies with little more than a one-word response to explain the rejection. Since the companies offer so little justification or opportunities to appeal, some campaigns turn to the press to air their grievances.

Big campaigns and consultants have a special inside track

Larger campaigns and consultancies may have an advantage over their smaller counterparts because they are assigned Facebook and Google account representatives who work with campaigns along partisan lines: Democratic staffers work with Democratic campaigns, and Republican staffers with Republican campaigns. These representatives have often been digital political practitioners themselves, having worked on campaigns before joining the tech firms, and can advise on which sorts of ads may or may not get approved. They cannot approve or deny particular ads, but they can escalate an appeal and argue a client’s case within the broad framework of existing rules.

Former Google and Facebook staffers told us about times when their clients’ ads had been rejected and they had appealed internally for reconsideration, asking why certain ads were denied or what the policies meant. Many of these discussions are hidden from public view. We analyzed emails between Facebook staffers and political staffers working on a 2017 gubernatorial campaign, given to us privately by one of the people in the exchange, to examine how the company advised the campaign to deal with speech from an opponent that appeared to violate the platform’s ad policies. Facebook often suggested that the campaign run ads of its own to counter the opponent’s claims. The documents we reviewed suggest that when Facebook did act to take down deliberate misinformation and misleading content, it gave the campaign involved and the public different explanations that conflicted with one another and changed over time.

Certainly, deciding when a political ad has crossed the line from provocative to irresponsible is a nuanced and difficult decision. Both Google and Facebook reportedly have extensive internal debates about what constitutes inappropriate content. But the current approach does not provide either transparency to campaigns or disclosure to the public.

Things may be changing — somewhat

Are there shifts toward greater transparency? In November, Mark Zuckerberg proposed that Facebook create an independent oversight board for content-moderation decisions, and the company undertook a worldwide feedback process. In June, Facebook released its report on that feedback and outlined next steps. Twitter announced that it may place a warning label on some tweets, effectively flagging politicians’ messages that violate the company’s rules against abuse or harassment.

But serious problems appear to remain unaddressed. Earlier this year, Facebook removed ads placed by Sen. Elizabeth Warren’s campaign that called for the company to be broken up, claiming the ads violated rules against using the Facebook logo. After an outcry, Facebook restored the ads.

What’s next as we head into the 2020 presidential election? As candidates pour millions of dollars into political ads on Facebook and Google, those companies’ unclear and inconsistently applied advertising standards suggest more controversy to come.

Shannon C. McGregor researches political communication, social media and public opinion as an assistant professor in the department of communication at the University of Utah (@shannimcg).

Daniel Kreiss researches technology and electoral politics as an associate professor in the School of Media and Journalism at the University of North Carolina at Chapel Hill (@kreissdaniel).