Computational Propaganda and the Democratic Process

March 01, 2019

By Samuel Woolley | ADL Center for Technology and Society Belfer Fellow

On October 27, 2018, Robert Bowers shot and killed eleven people at the Tree of Life synagogue in Pittsburgh. Bowers was at least partially radicalized online: he made regular use of the extremist-friendly social media platform Gab. After months of anti-Semitic and anti-immigrant posts on the site, he even wrote that he was “going in” shortly before the massacre.

The day before the shooting, the Anti-Defamation League (ADL) released a report detailing the rise of anti-Semitic trolling and disinformation in the lead-up to the 2018 U.S. midterm elections. The piece, written by Katie Joseff and me, exposed a burgeoning cascade of computational propaganda about Jewish people surrounding the election. Using empirical data from qualitative and quantitative analysis, we argued that automation—in the form of social media bots—and anonymity allowed attackers to scale offensives against the Jewish community while remaining untraceable. More frighteningly, especially given the next day’s mass shooting, we pointed to the correlation between online threats, doxing and disinformation campaigns and offline violence against Jews.

Here are a few other things we learned:

  • First, this work has affirmed that “big data” analyses of posts on Twitter and other social media sites are useful for showing the breadth of computational propaganda about a given socio-political issue.
  • Qualitative work, on the other hand, is incredibly useful for drawing conclusions about the depth of human trauma associated with online hate and disinformation campaigns.
  • These two methods together allow for analyses of political manipulation online that are grounded in both human experience and efforts to reveal causal relationships between propaganda and our actions and beliefs.
  • Additionally, anyone who researches online trolling and computational propaganda faces a serious challenge when it comes to objectively labeling a post as hateful or a news story as false.
  • And finally, there is often a false separation between online hate or disinformation campaigns and offline violence.

On the first point: while our analysis of social media data revealed the use of both human accounts and political bots in spreading anti-Jewish content, it did not provide insight into how this vitriol and spin actually affected Jewish people or broader outcomes at the ballot box. It did little to reflect the diverse experiences of the complex U.S. Jewish community. Ongoing interviews with members of various sub-groups, and with people from both ends of the political spectrum, have done far more to illuminate the very real consequences of an online sphere lacking sensible regulation from the U.S. government. They also bring to light the way that data analysis of a social issue can, on its own, reinforce the false dichotomy between what happens online and what happens offline.
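To give a sense of the quantitative side, the sketch below shows one common heuristic from bot research: flagging accounts whose sustained posting rate exceeds a plausible human ceiling. It is illustrative only; the field names, threshold and single-signal design are assumptions for this example, not the method behind our report.

```python
from dataclasses import dataclass

# Hypothetical account record; the fields are illustrative, not the
# schema used in the ADL study.
@dataclass
class Account:
    handle: str
    posts_per_day: float

def likely_automated(acct: Account, rate_threshold: float = 50.0) -> bool:
    """Flag accounts whose sustained posting rate exceeds a plausible human
    ceiling. A single signal like this is only a starting heuristic; real
    classifiers combine many features (timing, network structure, content
    similarity)."""
    return acct.posts_per_day >= rate_threshold

accounts = [Account("example_user", 3.2), Account("example_amplifier", 120.0)]
print([a.handle for a in accounts if likely_automated(a)])
# -> ['example_amplifier']
```

Even where such a heuristic correctly surfaces automated amplifiers, it says nothing about the human cost of the content they spread—which is precisely the gap the interviews fill.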

Researchers of digital disinformation—what Facebook, taking a cue from military and intelligence experts, now calls “information operations”—will continue to be tested by the need to discern true from false and hate speech from free speech. In our research with the ADL we faced similar challenges, specifically in labeling particular tweets as derogatory, benign or something in between. We also had to make calls about which groups, from white nationalists to the Proud Boys, should be labeled as hate groups. Research from the ADL’s Center on Extremism will continue to be a useful resource in this regard, especially its reports on far-right terrorism and the state of U.S. white supremacy.
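To make the labeling problem concrete, here is a minimal sketch of a keyword pre-screen that routes candidate posts to human annotators. The three-way label set mirrors the derogatory/benign/in-between distinction above, but the lexicon and routing logic are hypothetical, not our report’s actual procedure.

```python
from enum import Enum

class Label(Enum):
    DEROGATORY = "derogatory"
    IN_BETWEEN = "in-between"
    BENIGN = "benign"

# Placeholder lexicon; a real project would draw on a vetted resource
# (e.g., ADL's slur and hate-symbol references) rather than this stub.
FLAG_TERMS = {"<slur>", "<coded-stereotype>"}

def prescreen(text: str) -> Label:
    """Cheap first pass over a post. A keyword hit only escalates the post
    to human annotators for the final derogatory/benign judgment; it does
    not make that call automatically."""
    tokens = set(text.lower().split())
    return Label.IN_BETWEEN if tokens & FLAG_TERMS else Label.BENIGN

print(prescreen("an ordinary post about the midterm elections"))
# -> Label.BENIGN (no flagged terms, so no human review in this sketch)
```

The design choice matters: keeping the final judgment with human annotators acknowledges that “derogatory” is a contextual call no keyword list can make objectively.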

Finally, six years of research on computational propaganda—and my work on this recent report—reveal that different groups are quite polarized in how they perceive the efficacy of political communication online. While many people argue that the rise in digital disinformation is a serious threat, others suggest that it has no tangible effects on the public.

On the one hand, pundits such as Malcolm Gladwell and Evgeny Morozov have argued that online political activity has little effect on what people do in the offline world. Critiques of “slacktivism” imply that someone might “like” something online but won’t actually engage in traditional political activities like showing up to protests. These arguments critique a generation of online social movements—from the Arab Spring to Occupy—in which people allegedly engaged in the easiest forms of political expression and then declined to organize in ways that actually made change. While meant to encourage concerted democratic action, critiques like these also downplay the threat of digital disinformation and online hate speech. If experiences online don’t result in political action offline, then how can we explain the fact that a great deal of online political trolling is tied to offline political violence? These arguments erect a false barrier between the online and offline worlds.

On the other hand, since 2016 many have argued that digital disinformation is undermining the very fabric of democracy. As the constant rise in doxing, swatting and harassment of journalists shows, people’s everyday, offline lives can be deeply affected by online information. To say otherwise would disregard the very real experiences of people who use digital tools for civic engagement while simultaneously being targeted with those same tools. What we see online—a rise in hate speech against Jews, for instance—is often a reflection of larger problems and trends offline. Simply put, online and offline are two parts of the same world, the same political sphere and the same communication environment.

That said, there is still a marked need for continued rigorous, scientific research that builds understanding of the relationship between computational propaganda and democratic processes. We must work to generate knowledge about both the supply and demand sides of digital disinformation. On the supply side: who funds or undertakes these campaigns, and why? To what extent are social media companies culpable, and who else bears the blame? On the demand side: what happens psychologically to people who consume large amounts of disinformation? Do their votes change? Most crucially, how are minority communities affected?

Make no mistake: computational propaganda attacks are attacks against democracy and human rights. This is true whether they are attempted offensives, which cause serious psychological harm, or successful ones, which continue to lead to offline deaths. It is also true whether their many forms—digital disinformation, political trolling or digital hate campaigns—are leveraged against the Jewish community in the U.S. or against other groups around the globe: the Rohingya in Myanmar, Kurds in Turkey, or Muslims in the United Kingdom. Through empirical research we can draw causal conclusions about this emergent problem and the larger issues we face in public life. We must use that knowledge to fight back.