In September 2018, the Syrian government announced plans to launch an assault on the rebel-held area of Idlib. The prospect of another chemical-weapon attack has already led to a U.S. threat of missile strikes in retaliation, with President Trump tweeting a warning to Syrian President Bashar al-Assad.
But there’s another weapon being deployed. Russia has begun laying the groundwork for an online influence operation aimed at turning U.S. public opinion against such a strike. How would this Twitter campaign unfold? Here’s what we learned from our recent analysis of a database of over 850,000 tweets collected following the April 2018 chemical attack in Douma, Syria.
We uncovered a major increase — approximately 300 percent — in the creation of new Twitter accounts shortly after the Douma attack. We scrutinized these new accounts — and concluded that at least 44 percent of them were disinformation accounts, almost certainly controlled by Russia.
Some 11 percent of all accounts from this time frame were suspended, likely during Twitter’s sweeping ban of 70 million accounts this year. As a result, our figures may be conservative; fake accounts, such as the ones in our study, were specifically targeted for removal.
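For readers curious how a jump like the roughly 300 percent figure above can be spotted, here is a minimal sketch of the idea, assuming each tweet record carries the posting account's creation date. The file name, column names and comparison windows are illustrative assumptions, not our actual pipeline.

```python
# Hypothetical sketch, not our actual pipeline: measuring the jump in
# newly created accounts around the Douma attack. The file name, column
# names and comparison windows are assumptions for illustration.
import pandas as pd

ATTACK_DATE = pd.Timestamp("2018-04-07")  # date of the Douma chemical attack

tweets = pd.read_csv("douma_tweets.csv", parse_dates=["account_created_at"])

# Keep one row per account, indexed by when the account was created
accounts = (
    tweets.drop_duplicates("user_id")
    .set_index("account_created_at")
    .sort_index()
)

# Average accounts created per day: 30 days before vs. 7 days after the attack
before = accounts.loc[ATTACK_DATE - pd.Timedelta(days=30): ATTACK_DATE]
after = accounts.loc[ATTACK_DATE: ATTACK_DATE + pd.Timedelta(days=7)]

baseline_per_day = len(before) / 30
post_attack_per_day = len(after) / 7
print(f"New accounts per day rose by {post_attack_per_day / baseline_per_day - 1:.0%}")
```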
What flags an account as “fake”?
We identified a number of indicators (a rough sketch of how they might be combined into a score follows the list):
1. The account was created very recently and is unusually focused on a single topic. A brand-new account that tweets exclusively about chemical warfare is a red flag that the account may have been opened to spread disinformation.
2. The account is inactive for long periods. Among longer-established accounts, we often found that an account would remain dormant between major political events, resume tweeting after an event in which Russia had a stake, and then go silent once it was no longer useful.
3. Tweets have grammatical errors or adopt unusual idioms. Not everyone is great at spelling. But if the user consistently stumbles over basics such as the articles “the” and “a,” the account might be inauthentic.
4. The profile picture is fake. In an effort to seem legitimate, trolls often copy Internet images of ordinary people and pass them off as their own. Luckily, there’s an easy way to identify this tactic: a reverse image search of the user’s profile picture will often reveal its true origin.
5. Or there is no profile picture at all. Most of us will take the time to upload a personal photo if we’re setting up a new social media account. Perhaps to avoid exposure by reverse image search (or out of sheer laziness), Russian trolls often won’t upload a picture at all. Of course, this is hardly a conclusive indicator, but it can help red-flag suspect accounts.
6. All tweets are hyperpartisan and deal with politics. Online troll accounts are unlikely to include daily updates or pictures from the weekend. The goal is to polarize the public, so the messaging almost always reflects a one-sided view of a specific political issue.
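To make the checklist concrete, here is one rough, illustrative way these indicators could be combined into a single suspicion score. The field names, thresholds and weighting are assumptions made for the sake of example; they are not the rules we applied in our analysis.

```python
# Illustrative only: one rough way to combine the indicators above into a
# suspicion score. Field names and thresholds are assumptions, not the
# actual rules used in our analysis.
from datetime import datetime, timezone
from typing import Optional

def suspicion_score(account: dict, now: Optional[datetime] = None) -> int:
    """Return 0-5; a higher score means more fake-account indicators."""
    now = now or datetime.now(timezone.utc)
    score = 0

    # 1. Brand-new account that tweets almost exclusively about one topic
    age_days = (now - account["created_at"]).days
    if age_days < 30 and account["share_of_tweets_on_topic"] > 0.9:
        score += 1

    # 2. Long dormant stretches between bursts of activity
    if account["longest_gap_days"] > 90:
        score += 1

    # 5. No profile picture uploaded at all
    if account["has_default_profile_image"]:
        score += 1

    # 6. Nearly every tweet is political and one-sided
    if account["share_of_political_tweets"] > 0.95:
        score += 1

    # Indicators 3 (broken grammar) and 4 (stolen profile photo) are hard to
    # automate, so count a human reviewer's judgment when it exists.
    if account.get("manually_flagged"):
        score += 1

    return score
```

In a sketch like this, an account that trips three or more checks would simply be set aside for the kind of closer manual review described above.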
Here’s an illustration of how these accounts operate. This is an account from a typical troll — it uses several disinformation themes and strategies.
This troll account demonstrates extreme patriotism — almost in caricature — while at the same time praising Russian President Vladimir Putin. This account is also emblematic of trolls in that it pretends to be a supporter of President Trump. We found that nearly all the accounts in our sample did the same in order to make sure their message resonated with Trump’s political base.
In March, the user posted an image showing a man camping, likely an attempt to make the account seem genuine. We were able to trace the photo to the Instagram account of a popular Russian photographer — making it all the more likely that this is an account controlled by someone from Russia, not a gun-loving American.
What type of disinformation can we expect to see if there is a chemical attack in Idlib?
Here’s an example of the type of disinformation this same Twitter account spread following the Douma chemical attack. Note the user’s mention of the polarizing issue of refugees. It is common to see trolls attempt to exploit domestic political divisions.
Additionally, the Russian strategy deploys the technique of having accounts respond to users we call “high-influence individuals.” On Twitter, when a user opens a tweet, the replies to that tweet display directly underneath it. By replying to influential Twitter accounts, such as Trump’s, fake accounts have the potential to be seen by millions of other users.
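As a simple, hypothetical illustration of how this tactic could be surfaced in a tweet dataset, the snippet below pulls out replies directed at high-influence handles. The file and column names are again assumptions; Trump’s handle is the only one drawn from the discussion above.

```python
# Sketch, not our exact method: find tweets in the dataset that reply to
# high-influence accounts. File and column names are assumptions.
import pandas as pd

# Trump's handle is mentioned above; extend this set with other influential accounts.
HIGH_INFLUENCE = {"realDonaldTrump"}

tweets = pd.read_csv("douma_tweets.csv")

# Twitter's tweet objects record the screen name a reply is directed at
replies = tweets[tweets["in_reply_to_screen_name"].isin(HIGH_INFLUENCE)]

# Rank suspected accounts by how often they use this reply-for-visibility tactic
print(
    replies.groupby("user_screen_name")
    .size()
    .sort_values(ascending=False)
    .head(10)
)
```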
Across the fake accounts that we examined, we observed a number of disinformation themes repeated consistently across multiple chemical attacks. These same themes have already emerged in recent weeks, which suggests that Russia has begun to use disinformation in preparation for the assault on Idlib.
The most common themes to watch out for include messages that:
- Implicate the White Helmets, a volunteer group that rescues civilians in Syria, in faking or conducting a chemical attack.
- Blame terrorists for carrying out the attack.
- Insist that World War III would break out if the U.S. were to attack the Syrian regime.
- Claim that attacking the Syrian regime would help terrorists.
- Say that the U.S. or U.K. carried out or faked the attack in order to justify a war aimed at overthrowing Assad.
- Proclaim that the “Deep State” is manipulating Trump.
- Warn that the U.S. should not strike against President Bashar al-Assad because he’s a secular leader who protects Syria’s Christians.
Assad has a long track record of using chemical weapons in Syria’s civil war. If he uses them again in Idlib, his ally in Moscow will be sure to come to his aid — perhaps not with weapons but certainly with Twitter accounts. The Russian campaign to spread disinformation is aggressive and evolves rapidly, but these are some of the common hallmarks of suspicious accounts.
Jack Nassetta is a student of political communication and history at George Washington University and was a visiting fellow at the Center for Nonproliferation Studies in Monterey, Calif. Find him on Twitter @jnassetta.
Ethan Fecht is a student of international relations at Brown University and was a visiting fellow at the Center for Nonproliferation Studies in Monterey, Calif. Find him on Twitter @ethan_fecht.
This article is one in a series supported by the MacArthur Foundation Research Network on Opening Governance that seeks to work collaboratively to increase our understanding of how to design more effective and legitimate democratic institutions using new technologies and new methods. Neither the MacArthur Foundation nor the network is responsible for the article’s specific content.