
‘Deepfakes’ are here. These deceptive videos erode trust in all news media.

Tricksters and trolls may not persuade people these videos are real. But they do damage nevertheless.

May 27, 2020

In January 2020, Facebook and Twitter published new policies for dealing with “deepfakes,” or what the industry terms “synthetic media” — audiovisual clips created with the support of artificial intelligence (AI). These techniques are so effective that it’s extremely difficult — often impossible — to tell the content is fake. Deepfakes are so named because they rely on “deep learning,” a branch of AI.

In other words, dirty tricksters now have the technology to create videos in which it really does look like a prominent politician is violently cursing at a baby — or worse.

That’s worrisome for the 2020 U.S. presidential and congressional election campaigns. Facebook says it will remove any “misleading manipulated media.” Twitter says it is “very likely to remove” any material that is “significantly and deceptively altered or fabricated, shared in a deceptive manner, and likely to impact public safety or cause serious harm.”

But AI researchers say it might not be technically possible to spot deepfakes before they spread virally on social media. And while there's plenty of reason to fear that such false videos may mislead voters, our research finds the real problem is somewhat different: deepfakes are likely to spread distrust of all news on social media, further eroding public debate.

How we did our research

One study found that people correctly identify deepfakes in only about 50 percent of cases, statistically no better than a coin toss. We built on this in our recent study, which reports findings from a large-scale experiment on a representative sample of the U.K. public. We recruited 2,005 respondents from a large panel maintained by Opinium Research and interviewed them online. The sample resembles the voting-age U.K. population in gender, age and educational attainment.

We first measured our respondents’ levels of trust in news that they find on social media. We then divided them into three groups, each of which saw a different cut we made of a well-known 2018 BuzzFeed educational deepfake, created to alert the public to the problem. The deepfake went viral at the time, with millions of shares across the main social media platforms. In it, an AI-generated Barack Obama says things the real Obama would never say in public — including profanity about President Trump.

Halfway through the video, the screen splits to show the synthetic Obama and Hollywood actor Jordan Peele speaking simultaneously, revealing that we have actually been listening to Peele doing a voice impersonation of Obama. By cutting the video into three different segments and randomly assigning individuals to view one of them, we could estimate the effects of watching deceptive and educational deepfakes on participants' beliefs and on their levels of trust in news on social media.

The first group saw a four-second clip showing Obama calling Trump a mildly foul word. This is the kind of short-form video tricksters and trolls are most likely to create and share on social media. The second group saw the first 26 seconds of the video, including the fake Obama's full statement, but without the on-screen revelation that Peele, not Obama, was speaking. The third group saw the whole BuzzFeed video, which starts with the fake Obama but then reveals how deepfakes work, alerting viewers to beware.

We then asked participants, first, whether they believed Obama had ever called Trump the offensive word and, second, how much they trusted news on social media.

Deepfakes confuse people

When we analyzed the responses to these three different viewings, we found the first two, the deceptively edited versions without the revelation that the video was fake, were no more likely to mislead participants than the educational video. In other words, they were no more likely to convince viewers that Obama had actually insulted Trump.

However, people who watched the two deceptive deepfakes were much more likely to feel uncertain about whether what they saw was real — in other words, whether Obama really was speaking. In the group that saw the full video, 27.5 percent said they were uncertain about its veracity. But of those who saw the two deceptive videos, on average 36 percent said they could not tell if they were true or not. That difference is statistically significant and not due to random noise in our data.
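For readers who want to check the arithmetic, a standard two-proportion z-test illustrates why a gap of this size is unlikely to be noise. The sketch below is ours for illustration only, not the study's actual analysis; it assumes the 2,005 respondents were split roughly evenly across the three conditions (about 668 each), with the two deceptive-video groups pooled.

```python
# Illustrative two-proportion z-test (not the study's actual analysis).
# Assumed group sizes: ~668 per condition from the 2,005-person sample,
# with the two deceptive-video conditions pooled together.
from statsmodels.stats.proportion import proportions_ztest

n_deceptive = 2 * 668            # assumed: two deceptive-video groups pooled
n_educational = 668              # assumed: one educational-video group
uncertain_deceptive = round(0.36 * n_deceptive)       # 36% felt uncertain
uncertain_educational = round(0.275 * n_educational)  # 27.5% felt uncertain

z_stat, p_value = proportions_ztest(
    count=[uncertain_deceptive, uncertain_educational],
    nobs=[n_deceptive, n_educational],
)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```

Under these assumed group sizes, the test returns a p-value well below 0.05, consistent with the difference not being due to chance.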

To be sure, British participants may have responded differently to these videos than Americans would have. However, 99 percent of the British public are familiar with both Obama and Trump, so unfamiliarity with the two figures is unlikely to explain our results.

Deepfakes reduce trust in news on social media. That matters.

Next, we compared people's levels of trust in news on social media before and after they had watched the videos. Crucially, we found that when people were uncertain whether the deceptive deepfake was real, they also had less trust in news on social media than those who were not uncertain, even after controlling for participants' levels of trust as measured before the experiment.
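To make "controlling for" concrete, the kind of model involved can be sketched as a simple regression with pre-experiment trust as a covariate. The variable names and data file below are hypothetical; this is not the code used in our study.

```python
# Illustrative regression sketch (hypothetical variable names, not the
# study's actual code): post-treatment trust modeled as a function of
# uncertainty about the video, controlling for pre-treatment trust.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed: one row per respondent, with
#   trust_post - trust in news on social media, measured after viewing
#   trust_pre  - the same measure, taken before the experiment
#   uncertain  - 1 if the respondent could not tell whether the video was real
df = pd.read_csv("survey_responses.csv")  # hypothetical file

model = smf.ols("trust_post ~ uncertain + trust_pre", data=df).fit()
print(model.summary())
```

A negative, significant coefficient on `uncertain` in such a model would correspond to the pattern we describe: uncertainty about a video's veracity going together with lower trust, net of baseline trust.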

Why does this matter? Declining trust may be a rational response to the wave of online disinformation scandals in the past few years. But most Americans now get their news online, and almost half of Americans get their news on social media.

In other words, deepfakes’ biggest threat to democracy may not be direct but indirect. Deepfakes might not always fool viewers into believing in something false, but they might contribute to skepticism and distrust of news sources, further eroding our ability to meaningfully discuss public affairs.

If left unchecked, deepfakes are likely to contribute to a damaging attitudinal spiral: Fabricated content shared on social media breeds uncertainty; uncertainty breeds distrust; distrust breeds cynicism, and cynicism makes people less careful about the quality of the content they share on social media.

Our research suggests politicians, social media platforms and citizens might want to prepare for the spread of deepfakes. In addition to limiting their spread online, policymakers may wish to take active measures to boost news consumers’ trust in good quality information.

Cristian Vaccari (@prof_vaccari) is professor of political communication and co-director of the Centre for Research in Communication and Culture at Loughborough University. His books include “Digital Politics in Western Democracies: A Comparative Study” (Johns Hopkins University Press, 2013).

Andrew Chadwick is professor of political communication and director of the Online Civic Culture Centre at Loughborough University. His books include “The Hybrid Media System: Politics and Power” (Oxford University Press, 2017).