
Fake news is about to get a lot worse. That will make it easier to violate human rights — and get away with it.

April 3, 2018
Dutch investigators examine pieces of the crashed Malaysia Airlines Flight 17 in the village of Rassipne, in Ukraine’s Donetsk region, in 2014. (Dmitry Lovetsky/AP)

In the past few years, the brave new online world has made it both much easier, and vastly harder, to establish the facts about human rights violations. Human rights investigators are racing to take advantage of, and struggling to keep up with, the latest technological developments, from satellite imagery to artificial intelligence, from fake news to increasingly sophisticated computer-generated visuals.

And it’s all about to get worse. Hollywood and entertainment technologies that thrill moviegoers and gamers are becoming more available to those who create fake news. Soon it will be easy for anyone to fake highly credible photos and videos — and hard for any of us to believe our eyes.

Human rights investigations are about to become exponentially more complicated. Let us explain.

Human rights investigators have been collecting visual evidence of abuse. The abusers figured that out. 

First, the good news. Over the past few years, human rights investigators have started to systematically collect and analyze visual information, including satellite imagery that can expose war crimes and crimes against humanity, and video and photographs shared on social media and in messaging networks that document anything a witness might see, from abuses by police to the aftermath of chemical weapons attacks. In cases like this, seeing is believing.

Or is it? Human rights abusers have learned human rights investigators’ methods and twisted them to their own purposes. Consider, for instance, the Russian government’s recent effort to use satellite imagery as “evidence” of falsehoods. When a Russian-developed Buk missile shot down Malaysia Airlines Flight 17 over a part of Ukraine controlled by pro-Russian rebels in July 2014, killing all 298 people on board, the Russian government tried to fabricate satellite imagery to link the missile to Ukrainian air defenses. Investigators determined that the imagery had been altered in Photoshop.
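By way of illustration, one simple first step in checking an image is to look for traces left by editing software in its embedded metadata. The sketch below, using the Pillow library, shows that basic idea only; the file name is hypothetical, and real forensic work in cases like this goes far beyond a metadata glance.

```python
# A minimal, illustrative metadata check: look for fields that often reveal
# post-processing by editing software. This is only a sketch of the idea, not
# the full forensic method used in actual investigations. Requires Pillow.
from PIL import Image
from PIL.ExifTags import TAGS


def editing_software_traces(path):
    """Return EXIF fields that commonly hint at editing (e.g., 'Software')."""
    exif = Image.open(path).getexif()
    readable = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    # Editing tools frequently stamp the 'Software' field with their own name.
    return {k: v for k, v in readable.items() if k in ("Software", "DateTime", "Make", "Model")}


if __name__ == "__main__":
    # "satellite_image.jpg" is a placeholder file name for illustration.
    print(editing_software_traces("satellite_image.jpg"))
```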


Similarly, after human rights groups condemned Russia for targeting the Al-Sakhour hospital in eastern Aleppo, Syria, in the fall of 2016, the Russian Defense Ministry claimed the reports were fake. The ministry instead peddled its own purported satellite imagery in a clumsily fabricated denial. And in November 2017, the Russian Defense Ministry released “startling visual proof” that the U.S. military was “assisting” the Islamic State in Syria. But that “evidence” was lifted directly from a video game called AC-130 Gunship Simulator. Although investigators rigorously exposed the disinformation, some consumers of these conflicting narratives will inevitably remain unsure about who did what.

With the aid of the bots and troll factories we’ve heard so much about, a perpetrator can spread disinformation and sow doubt about even the clearest of crimes and abuses. As sociologist Stanley Cohen noted in 1996, denials by human rights abusers tend to take standard forms, including literal denial — “nothing happened” — and interpretive denial — “what happened is really something else.” Though the techniques may be new, creating doubt about human rights reporting is a pre-digital tactic.

Human rights investigators have developed methods for assessing the truthfulness of digital evidence, methods widely and openly shared within the human rights community. But of course, disinformation peddlers know these methods too — thus inviting a methodological arms race.


But wait — it’s about to get worse

Nearly anyone with a laptop and an Internet connection can now distort visual reality to offer exceptionally realistic — but utterly fake — photos and videos of events that did not occur, apparently perpetrated by people who were never there.

Using a technology called “DeepFakes” (a portmanteau of the “deep learning” performed by artificial neural networks and “fake”), hobbyists can now transfer video images of one person’s face onto video of another person’s body. The technique has spawned a thriving, competitive community of amateurs creating clever mash-ups of people and faces — and has allowed malicious users to paste celebrity faces onto the bodies of pornographic actors. A related technology called Face2Face, so far used only by computer scientists, allows users to animate the facial gestures of a selected “target” with the facial gestures of another person, in real time.
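To make the face-transfer idea concrete, here is a minimal, hypothetical sketch in Python (using PyTorch) of the shared-encoder, per-person-decoder design commonly described for early DeepFakes tools. The random tensors standing in for face crops, the layer sizes and all variable names are our own illustrative assumptions, not any particular tool.

```python
# A minimal sketch of the face-swap idea behind early "deepfakes" tools (assumes PyTorch).
# One encoder is shared across both people; each person gets their own decoder.
# Random tensors stand in for aligned face crops; sizes and names are illustrative only.
import torch
import torch.nn as nn

FACE_DIM = 64 * 64 * 3   # flattened RGB face crop
CODE_DIM = 128           # shared representation of pose and expression

encoder = nn.Sequential(nn.Linear(FACE_DIM, 512), nn.ReLU(), nn.Linear(512, CODE_DIM))
decoder_a = nn.Sequential(nn.Linear(CODE_DIM, 512), nn.ReLU(), nn.Linear(512, FACE_DIM), nn.Sigmoid())
decoder_b = nn.Sequential(nn.Linear(CODE_DIM, 512), nn.ReLU(), nn.Linear(512, FACE_DIM), nn.Sigmoid())

params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

for step in range(100):  # real tools train far longer, on thousands of real face crops
    faces_a = torch.rand(16, FACE_DIM)   # placeholder for person A's face crops
    faces_b = torch.rand(16, FACE_DIM)   # placeholder for person B's face crops

    optimizer.zero_grad()
    # Each decoder learns to reconstruct its own person from the shared encoding.
    loss_a = loss_fn(decoder_a(encoder(faces_a)), faces_a)
    loss_b = loss_fn(decoder_b(encoder(faces_b)), faces_b)
    (loss_a + loss_b).backward()
    optimizer.step()

# The swap: encode person B's expression, then decode it with person A's decoder,
# producing person A's face wearing person B's expression.
with torch.no_grad():
    swapped = decoder_a(encoder(torch.rand(1, FACE_DIM)))
```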

Another artificial intelligence technology, called generative adversarial networks, or GANs, makes it easy to fabricate faces, intricate cityscapes and anything else for which enough photographs exist to “train” artificial neural networks — imagery that is visually believable and utterly false.
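For readers curious about the mechanics, the sketch below illustrates the adversarial idea: a “generator” network learns to fabricate images while a “discriminator” network learns to spot them, each improving against the other. It is a minimal, hypothetical Python example; the random tensors standing in for training photographs, the network sizes and every variable name are assumptions made for illustration, not a description of any deployed system.

```python
# A minimal sketch of adversarial training (a GAN), assuming PyTorch is installed.
# Random tensors stand in for a real photo collection; sizes and names are illustrative.
import torch
import torch.nn as nn

IMG_DIM = 64 * 64      # flattened grayscale "photo"
NOISE_DIM = 100        # random input the generator turns into an image

# Generator: maps random noise to a synthetic image.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: guesses whether an image is real (1) or generated (0).
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):  # real systems train far longer, on real photographs
    real_images = torch.rand(32, IMG_DIM) * 2 - 1   # placeholder for real training photos
    noise = torch.randn(32, NOISE_DIM)
    fake_images = generator(noise)

    # Train the discriminator to tell real photos from generated ones.
    d_opt.zero_grad()
    real_loss = loss_fn(discriminator(real_images), torch.ones(32, 1))
    fake_loss = loss_fn(discriminator(fake_images.detach()), torch.zeros(32, 1))
    (real_loss + fake_loss).backward()
    d_opt.step()

    # Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake_images), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```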

All this will strain the concept that “seeing is believing.”

In the human rights realm, that means those hoping to deceive, conceal or confuse now have important new tools that go well beyond Photoshop. We’ve learned how quickly false claims can spread, given what legal scholars Robert Chesney and Danielle Citron describe as “a combination of social media ubiquity and virality, cognitive biases, filter bubbles, and group polarization.” As computer scientist Emilio Ferrara and his colleagues note, “The novel challenge brought by bots is the fact they can give the false impression that some piece of information, regardless of its accuracy, is highly popular and endorsed by many.”

DeepFakes will add visual “evidence” that will make this situation much worse. More sophisticated bots will push relatively easy-to-create AI-fabricated videos and images that are entirely believable, yet completely unreal.

Fake news that you can “see” will be politically explosive in many ways. Human rights groups will have to pour limited resources into sorting real evidence from bot-driven simulacra. How, and at what cost, will investigators demonstrate that satellite images of mass graves or chemical weapons attacks are real rather than fabricated — or vice versa? Denials and counternarrative campaigns will become increasingly sophisticated. Investigators and nongovernmental organizations will struggle to keep up.


In our new world, power has become ever more dependent on controlling the narrative. The ability of independent fact-finders to inoculate the public against ever more adaptive disinformation will determine much about the integrity of the historical record — and our collective ability to hold power to account.

Scott Edwards (@sxedwards) is a senior crisis adviser at Amnesty International and professorial lecturer at George Washington University’s Elliott School of International Affairs.

Steven Livingston (@ICTLivingston) is a professor at George Washington University with appointments in the School of Media and Public Affairs and the Elliott School, and a senior fellow at the Carr Center for Human Rights Policy at the Harvard Kennedy School.

This article is one in a series supported by the MacArthur Foundation Research Network on Opening Governance that seeks to work collaboratively to increase our understanding of how to design more effective and legitimate democratic institutions using new technologies and new methods. Neither the MacArthur Foundation nor the network is responsible for the article’s specific content. Other posts can be found here.