
No, the National Science Foundation is not building an Orwellian surveillance nightmare

October 22, 2014

A mapping of how online discussion about Indiana’s Truthy research program has spread. (Truthy/Indiana University)
On Oct. 17, Ajit Pai, a member of the Federal Communications Commission, wrote an op-ed for The Washington Post making scary-seeming claims that the National Science Foundation was funding a scheme to surveil the Web for “subversive propaganda,” a scheme that seemed “to have come straight out of a George Orwell novel.” As it happens, I know quite a bit about the Truthy project he is writing about; it is a very well-regarded academic research project without any ulterior political motive. Truthy is an NSF-funded project run by computer scientists at Indiana University to study how information diffuses on Twitter and the broader Internet. As anyone could figure out from a few moments of clicking around the project’s Web site, it is not an evil Orwellian exercise in Big Brother surveillance. The rumor that it is something scary seems to have started with a discredited and disingenuous article in the Washington Free Beacon.
Filippo Menczer and Alessandro Flammini help run Truthy; I’ve asked them some questions about the program, and about what it is like to have a project studying how rumors propagate on the Internet become the victim of an Internet-propagated rumor.
HF — Ajit Pai, one of the five FCC commissioners, has written an op-ed claiming that Truthy is an Orwellian effort to create a government-funded system of social surveillance aimed at monitoring Tea Party activists. This obviously isn’t what you see yourselves as doing. Why is Pai wrong?
FM & AF — The op-ed by Ajit Pai is based on several premises. First, that Truthy is a government probe of social media. Second, that it sets out to monitor political speech. Third, that it makes some editorial judgment about what is “misinformation.” In reality, while our basic research project is federally funded, like a lot of other university research across the country, the research is conducted by the investigators and their graduate students, and it is entirely public. All publications, data and tools we produce are publicly available. Furthermore, we do not monitor individual people. The tweets we analyze are public and accessible by anyone.
While we focused on tweets about the 2010 elections in early studies about polarization (which looked at both sides of the political spectrum), our research has the general goal of understanding how information spreads. To this end we use aggregate data over millions of public tweets in the same way that an economist might aggregate thousands of economic transactions to study general economic laws.
As it happens, we completely agree with Pai when he writes: “The government has no business entering the marketplace of ideas to establish an arbiter of what is false, misleading or a political smear […] the merits of a viewpoint should be determined by the public through robust debate.” Indeed, our research is entirely consistent with this view.
It is also undeniable that abuse does exist, and ignoring this problem is not the best way to protect free speech. For example, observations about the underlying patterns of information diffusion could help members of the public make better-informed decisions about whether a message links to malware that could take over their computer, or discover whether their interlocutor is in reality an automatic “bot” pre-programmed to disrupt debate.
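Menczer and Flammini do not spell out their detection methods here, but as a rough sketch of the general idea, the snippet below scores an account’s “bot-likeness” from two simple public signals: machine-regular posting intervals and a high retweet ratio. The features, weights and example numbers are invented for illustration; this is not Truthy’s actual classifier.

```python
from statistics import pstdev

def bot_score(timestamps, n_retweets, n_tweets):
    """Crude bot-likeness heuristic (illustrative only, not Truthy's method).

    Bots often post at machine-regular intervals and mostly retweet.
    timestamps: sorted posting times in seconds.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # Near-zero variance in posting gaps suggests scheduled, automated posting.
    regularity = 1.0 / (1.0 + pstdev(gaps)) if len(gaps) > 1 else 0.0
    retweet_ratio = n_retweets / n_tweets if n_tweets else 0.0
    # Equal weights are arbitrary; 0 is human-like, 1 is bot-like.
    return 0.5 * regularity + 0.5 * retweet_ratio

# A person posting irregularly vs. an account posting every 60 seconds:
human = bot_score([0, 340, 5100, 5200, 90000], n_retweets=3, n_tweets=40)
bot = bot_score([0, 60, 120, 180, 240, 300], n_retweets=38, n_tweets=40)
print(f"human-like account: {human:.2f}, bot-like account: {bot:.2f}")
```

Real systems combine many more signals, but even these two are enough to separate the two toy accounts cleanly.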
HF — Pai’s op-ed is only the latest in a series of inflammatory stories spreading misleading claims about Truthy. How did these attacks begin?
FM & AF — Our research results have received widespread positive coverage in the national and international press over the past several years, including such venues as the Wall Street Journal, the New York Times, CNN, the BBC and even The Washington Post. We cannot explain the sudden negative attention, and would rather not speculate about its motives.
We are, however, familiar with the mechanism of misinformation campaigns, and we see all the ingredients here. A first wave of attacks in August was ignited by a story in the Washington Free Beacon. It made very misleading allegations, ignored our body of research and made no effort to verify the accuracy of the allegations by contacting any of the researchers. The story was then picked up as fact by many venues, including Fox News. The current, second wave of attacks, ignited by Pai’s op-ed in The Washington Post, also disregarded the clarifications we posted in our blog and the debunking of the original story published in Columbia Journalism Review. Commissioner Pai did not contact us to inquire about our research. Our research methodologies do allow you to see how these flawed interventions have influenced online discussion. The illustration below provides a “map” of how the #truthy meme has spread on Twitter through retweets and mentions, following this second “injection” of the meme.
[Illustration: map of the #truthy meme spreading on Twitter through retweets and mentions.]
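Such a map is, at bottom, a directed graph in which each retweet or mention becomes an edge. As a hedged illustration of how one might be assembled (the tweet records and field names below are invented, not Truthy’s data format), here is a minimal sketch using the networkx library:

```python
import networkx as nx

# Hypothetical tweet records (invented fields, not Truthy's data format).
tweets = [
    {"user": "alice", "retweet_of": None,    "mentions": ["bob"]},
    {"user": "bob",   "retweet_of": "alice", "mentions": []},
    {"user": "carol", "retweet_of": "alice", "mentions": ["dave"]},
    {"user": "dave",  "retweet_of": "carol", "mentions": []},
]

# Each retweet or mention becomes a directed edge; for retweets we point
# from the original poster to the retweeter, tracing how the meme travels.
G = nx.DiGraph()
for t in tweets:
    if t["retweet_of"]:
        G.add_edge(t["retweet_of"], t["user"], kind="retweet")
    for m in t["mentions"]:
        G.add_edge(t["user"], m, kind="mention")

# Accounts that reach the most others sit at the hubs of the diffusion map.
print(sorted(G.out_degree(), key=lambda pair: -pair[1]))
```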
HF — Many of the people who are offended by Truthy think that there’s something wrong with treating political beliefs and memes as contagions. While this is actually not a weird idea (it underlies the metaphor of “going viral”), it is one that is commonly misunderstood. What do we gain by thinking of political beliefs as spreading by contagion across a social network?
FM & AF — The idea of modeling the spread of information in the same way as an infectious disease originated in the 19th century. Since the 1960s, a large body of research has exploited this analogy. Imagine people hearing a joke at a party, repeating it to their colleagues at work, who in turn tell it to their families. One can think of the joke as a sort of virus transmitted from person to person. Of course, when people communicate about important topics such as politics, the mechanisms of “infection” are much more complex. For example, a piece of information can gain some initial popularity passing from person to person, then be rebroadcast by mass media to reach a much larger audience, then be amplified again via social media. Also, retelling a joke is not like changing someone’s opinion about something important; that usually requires a much more intense set of interactions. But these are the kinds of general phenomena our research aims to uncover: how information spreads, and how it (sometimes) changes people’s minds.
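To make the analogy concrete, here is a minimal sketch of the textbook susceptible-infected (SI) contagion process running on a small synthetic social network. The graph, transmission probability and step count are invented for the example; the models in the Truthy papers are considerably more sophisticated.

```python
import random
import networkx as nx

random.seed(42)

# A small-world graph standing in for a social network (invented parameters).
G = nx.watts_strogatz_graph(n=100, k=4, p=0.1)

infected = {0}      # one person has "heard the joke"
p_transmit = 0.3    # chance a contact repeats it, per time step

for step in range(10):
    newly = set()
    for person in infected:
        for friend in G.neighbors(person):
            if friend not in infected and random.random() < p_transmit:
                newly.add(friend)
    infected |= newly
    print(f"step {step}: {len(infected)} of {G.number_of_nodes()} have heard it")
```

Running it shows the characteristic S-curve of contagion: slow at first, then explosive as the “joke” reaches well-connected people, then saturating.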
HF — Your research identifies how Twitter users who talk about U.S. politics cluster around their political ideologies, so that conservatives tend to talk primarily to other conservatives and liberals to other liberals. How does this shape Twitter conversation between people with different political ideologies?
FM & AF — Political polarization is an important topic for research. Pew data show that since 1960, political parties and members of Congress have shifted toward more radical positions, reducing the space for constructive debate and compromise. However, we do not have good statistical evidence about how polarized political discourse is among the general public. Our analysis of how Twitter hashtags spread through retweets and mentions identified two large communities of right- and left-leaning users and found that, indeed, there is an extremely low level of information sharing between the two groups. The net effect is that of echo chambers, in which people are exposed to information that reinforces their existing beliefs rather than to diverse opinions. We also looked at the tweets of the two groups to understand which issues (for instance, taxes, unemployment or defense) are more likely to be discussed across groups.
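As a generic illustration of how such polarization can be quantified (a sketch over toy data, not the actual Truthy pipeline), one can partition a retweet network into communities and then count the fraction of edges that cross between them:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy retweet network: two dense clusters joined by a single bridge edge.
G = nx.Graph()
G.add_edges_from([("a1", "a2"), ("a1", "a3"), ("a2", "a3"),
                  ("a3", "a4"), ("a4", "a1")])
G.add_edges_from([("b1", "b2"), ("b1", "b3"), ("b2", "b3"),
                  ("b3", "b4"), ("b4", "b1")])
G.add_edge("a1", "b1")  # the lone cross-group retweet

communities = greedy_modularity_communities(G)
membership = {user: i for i, group in enumerate(communities) for user in group}

# Fraction of retweet edges connecting different communities; a low value
# is the signature of echo chambers.
cross = sum(membership[u] != membership[v] for u, v in G.edges())
print(f"{len(communities)} communities; "
      f"{cross / G.number_of_edges():.0%} of edges cross between them")
```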
HF — As social media play an ever-increasing role in driving politics, U.S. foreign policy makers need to understand how platforms such as Twitter can help fuel social protest movements abroad. What insights can Twitter research provide, for example, in understanding recent protests in Turkey, Europe and the Arab world?
FM & AF — We have conducted a couple of case studies in this area, always focusing on aggregate data rather than single individuals. One looked at Occupy Wall Street. We found that Twitter was used both for practical purposes related to organizing protests on the ground, and to discuss general issues related to the goals and the motives of the protest.
These two parallel conversations led to distinctive patterns of tweeting. It is possible, albeit very crudely, to extract the keywords that are more common in each of the two conversations.
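One crude way to do that is to compare relative word frequencies across the two streams of tweets. The sketch below uses made-up example tweets and a simple smoothed frequency ratio; it is an illustration of the idea, not the method used in the study.

```python
from collections import Counter

# Made-up tweets standing in for the two parallel conversations.
organizing = ["meet at zuccotti park 6pm",
              "bring food and tents to the park",
              "march route starts at the park"]
discussing = ["inequality is the real issue",
              "banks got bailouts we got debt",
              "the issue is corporate money in politics"]

def word_rates(tweets):
    """Relative frequency of each word across a list of tweets."""
    words = [w for tweet in tweets for w in tweet.split()]
    total = len(words)
    return {w: count / total for w, count in Counter(words).items()}

r_org, r_disc = word_rates(organizing), word_rates(discussing)

# Score each word by how much more frequent it is in one stream than the
# other; the small constant smooths away division by zero for unseen words.
vocab = set(r_org) | set(r_disc)
ratio = {w: (r_org.get(w, 0) + 1e-3) / (r_disc.get(w, 0) + 1e-3) for w in vocab}

print("organizing:", sorted(ratio, key=ratio.get, reverse=True)[:3])
print("discussing:", sorted(ratio, key=ratio.get)[:3])
```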
We also investigated whether the protest had any long-term effect on its participants, looking at the themes they tweeted about before and after the protests. In a more recent study we analyzed the pivotal role played by social media in the Gezi Park movement in Turkey, examining how the conversation shifted over space and time, and showing the influence of geography on what was discussed. We also found that the conversation became more democratic as events unfolded, in the sense that ordinary users played a bigger role in it. Finally, we found that outside events such as political speeches or police actions changed the ways in which Twitter users behaved online.
We believe these topics are of interest to policymakers as well as the general public. As social media become more ubiquitous in our everyday lives, they become more attractive targets of abuse. For instance, social bots have reportedly been used to suppress free speech by disrupting communication among protesters in Iran and Russia. Pro-democracy protesters in Hong Kong have to deal with efforts to infect their computers with malware. If we’re to understand this abuse, and help people stop it, we need good research. That is how good research can help improve democratic debate: not by judging which political argument is better or worse, but by identifying attempts to systematically abuse technology in order to drive people out of online conversation.