
It’s not easy to spot disinformation on Twitter. Here’s what we learned from 8 political ‘astroturfing’ campaigns.

Hint: Don’t look for an account that tweets like a bot.

Facebook recently announced it had dismantled a number of alleged disinformation campaigns, including Russian troll accounts targeting Democratic presidential candidates. Over the summer, Twitter and Facebook suspended thousands of accounts they alleged to be spreading Chinese disinformation about Hong Kong protesters.

Disinformation campaigns based in Egypt and the United Arab Emirates used fake accounts on several platforms this year to support authoritarian regimes across nearly a dozen Middle East and African countries. And the Mueller report describes in detail how Russian trolls impersonated right-wing agitators and Black Lives Matter activists to sow discord in the 2016 U.S. presidential election.

How can you distinguish real netizens from participants in a hidden influence campaign on Twitter? It’s not easy.


Hidden campaigns leave traces

We examined eight hidden propaganda campaigns worldwide, comprising over 20,000 individual accounts. We looked at Russia’s interference in the 2016 presidential election, and the South Korean secret service’s attempt to influence that country’s 2012 presidential election. And we looked at further examples associated with Russia, China, Venezuela, Catalonia and Iran.

All of these were “astroturfing” campaigns: The goal is to mislead the public by creating a false impression of genuine grass-roots support for, or opposition to, a particular group or policy.

We found that, contrary to popular media narratives, these disinformation campaigns do not rely solely on automated “bot” accounts. Only a small fraction of the 20,000 accounts we reviewed (between 0 and 18 percent, depending on the campaign) were “bot accounts” that regularly posted more than 50 tweets per day — a threshold some researchers use to distinguish automated accounts from bona fide individual users.
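That 50-tweets-per-day rule of thumb is easy to operationalize. Below is a minimal sketch in Python, assuming tweet records are available as (account, timestamp) pairs; treating “on a regular basis” as exceeding the threshold on at least half of an account’s active days is our own simplification for illustration, not a cutoff taken from any particular study.

```python
from collections import defaultdict
from datetime import datetime

def flag_bot_like(tweets, daily_threshold=50, min_share_of_days=0.5):
    """Flag accounts that exceed `daily_threshold` tweets on at least
    `min_share_of_days` of their active days ("on a regular basis")."""
    per_day = defaultdict(lambda: defaultdict(int))  # account -> date -> tweet count
    for account, ts in tweets:
        per_day[account][ts.date()] += 1

    flagged = set()
    for account, day_counts in per_day.items():
        heavy_days = sum(1 for n in day_counts.values() if n > daily_threshold)
        if heavy_days / len(day_counts) >= min_share_of_days:
            flagged.add(account)
    return flagged

# Toy example: one account posting 120 times in a day, one posting once.
tweets = [("acct_a", datetime(2016, 10, 1, 9, i % 60)) for i in range(120)]
tweets += [("acct_b", datetime(2016, 10, 1, 20, 15))]
print(flag_bot_like(tweets))  # {'acct_a'}
```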

That low share of automated accounts isn’t a big surprise. Insiders have long reported that humans, not computer programs or artificial intelligence, are behind these “troll farms.” Court records published in the case against the South Korean National Intelligence Service (NIS) mentioned above paint a similar picture of their internal organization.


Trolls behave like you and me

That means that examining whether an account tweets a lot of spamlike content or behaves like a “robot” will catch only a fraction of the astroturfing accounts. The key to finding many of the rest lies in the fact that humans are paid to post messages from these accounts.

By looking at digital traces left by astroturfing accounts, we found unique patterns that reflect what social scientists call the principal-agent problem. The people behind astroturfing accounts are not intrinsically motivated participants of a genuine grass-roots campaign, so they will try to minimize their workload by taking shortcuts just to finish assigned tasks. Astroturfers (the “agents”) will find all kinds of ways to make their lives easier instead of doing what their boss (the “principal”) would like them to do.

In addition, the agents’ actions may reflect the timing of instructions they got from the principal to achieve specific campaign goals at key campaign moments. True grass-roots activists, in contrast, usually react in a more organic fashion, with more variation in message contents and timing.

The secret service agents in South Korea, for instance, were instructed to cover a specific agenda at the beginning of every workday. They apparently considered their work a 9-to-5 job: The figure below shows that most of their tweets (the red line) were posted during office hours on weekdays, while ordinary Koreans (black, dashed line) tweet more in the evening and on the weekends.

Figure: Tweet times of astroturfing (NIS) accounts and regular accounts in South Korea. The red line indicates NIS accounts; the dashed line indicates regular accounts. (Figure: David Schoch)
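The office-hours pattern in the figure is straightforward to compute. Here is a minimal sketch, assuming each group’s tweets are available as a list of timestamps; the 9-to-6 window and the toy data are our own illustrative assumptions, not values from the study.

```python
from datetime import datetime

def office_hours_share(timestamps, start_hour=9, end_hour=18):
    """Fraction of tweets posted Monday-Friday between start_hour and end_hour."""
    in_office = sum(1 for ts in timestamps
                    if ts.weekday() < 5 and start_hour <= ts.hour < end_hour)
    return in_office / len(timestamps)

def hourly_profile(timestamps):
    """Share of tweets posted in each hour of the day (0-23), as in the figure."""
    counts = [0] * 24
    for ts in timestamps:
        counts[ts.hour] += 1
    total = len(timestamps)
    return [n / total for n in counts]

# Toy comparison: a group tweeting on a Monday morning vs. one on a Sunday night.
campaign_ts = [datetime(2012, 11, 5, h) for h in (9, 10, 11, 14, 16)]  # a Monday
baseline_ts = [datetime(2012, 11, 4, h) for h in (20, 21, 22, 23)]     # a Sunday
print(office_hours_share(campaign_ts), office_hours_share(baseline_ts))  # 1.0 0.0
```

A group of accounts whose office-hours share sits far above that of a comparable sample of ordinary users is worth a closer look.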

The same office-hours pattern appears in other astroturfing campaigns. Other research has noted, for instance, that Russian trolls tend to be most active during office hours, St. Petersburg time.

Of course, not every account that tweets during office hours is a troll account. So what other hints are there? As we show in our research, message coordination is another clue. Any information campaign — even genuine grass-roots movements — will feature large numbers of accounts that post similar tweets. But because astroturfers receive centralized instructions and try to avoid having to work too hard, they are much more likely to post similar or even identical tweets within a very short time frame.

To detect and illustrate this phenomenon, we traced networks of coordinated messaging, connecting accounts that tweet or retweet the same content within a one-minute time window or that frequently retweet each other.
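Here is a minimal sketch of that idea, assuming tweet records are available as (account, text, timestamp) tuples and using the networkx library for the graph; it illustrates the general approach rather than reproducing our actual pipeline.

```python
from collections import defaultdict
from itertools import combinations
import networkx as nx  # assumed available; any graph library would do

def co_tweet_network(tweets, window_seconds=60):
    """Link accounts that post the same text within `window_seconds` of each other.

    `tweets` is an iterable of (account, text, timestamp) records.
    """
    by_text = defaultdict(list)
    for account, text, ts in tweets:
        by_text[text].append((ts, account))

    g = nx.Graph()
    for posts in by_text.values():
        posts.sort()  # order each text's posts by timestamp
        for (t1, a1), (t2, a2) in combinations(posts, 2):
            if a1 != a2 and (t2 - t1).total_seconds() <= window_seconds:
                # edge weight = number of times this pair co-tweeted
                weight = g.get_edge_data(a1, a2, default={}).get("weight", 0)
                g.add_edge(a1, a2, weight=weight + 1)
    return g
```

Account pairs connected by heavy edges in such a network stand out sharply against a baseline sample of ordinary users, where same-minute co-tweeting is essentially absent.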


Coordinated messages are a giveaway

The differences between regular users and astroturfers are stark. In the South Korean election case, we found 153,632 instances of account pairs posting the same original tweet within one minute of each other, while this kind of co-tweeting did not occur even once among a comparably sized group of ordinary Korean-speaking users.

The co-tweet networks in the other seven campaigns were similarly concentrated. The co-tweet network of the Russian Internet Research Agency (IRA) campaign during the 2016 presidential election followed a familiar pattern. Researchers have shown that the IRA campaign targeted both ends of the political spectrum and therefore posted very different messages. Yet our research showed that the left-wing trolls impersonating Black Lives Matter activists and the right-wing troll accounts nonetheless posted 1,661 identical tweets.

Here’s why: It turns out that some of these agents found common ground during the #OscarsSoWhite debate. Even though they were supposed to impersonate very different characters, they ended up copy-pasting similar messages.

Our research framework could thus help social media platforms identify astroturfing campaigns more efficiently. It’s not clear whether all these companies systematically look for this type of coordinated messaging. Twitter, for instance, released data on the Iranian campaign as five separate data sets at different times, even though accounts across those data sets tweeted the same messages. That suggests Twitter did not look for co-tweeting as we define it.

So how do these findings help identify Russian trolls on Twitter? Looking at accounts individually may not reveal much; the real hints emerge when you examine a group of suspicious accounts together.

Are there other accounts following or retweeting the suspected troll accounts? Or accounts that post the exact same tweet at the same time? If there is a group of suspect accounts, how similar are they, and do they co-tweet? If so, those patterns are clear signs of an astroturfing campaign.

Franziska Keller is assistant professor in the Division of Social Science at the Hong Kong University of Science and Technology, specializing in comparative politics and social network analysis.

David Schoch has a PhD in computer science and is a Presidential Fellow at the University of Manchester in the Department of Sociology. His research focuses on applications in social network analysis.

Sebastian Stier is a senior researcher in the Department of Computational Social Science at GESIS in Cologne with a research focus on political communication, comparative politics and the use of computational methods in the social sciences.

JungHwan Yang is assistant professor in the Department of Communication at the University of Illinois at Urbana-Champaign. His research focuses on political communication, audience behavior and media effects.