
It’s Time to Examine How Online Disinformation Affects Social Groups


July 26, 2018

By Samuel Woolley | Belfer Fellow with ADL's Center for Technology and Society


Computational propaganda — the use of automation and algorithms on social media to manipulate public opinion — has become a global phenomenon. This has caught many in the technology sector by surprise, a troubling turn given that less than a decade ago experts hailed social media as a savior for democratic communication. Russian manipulation of Facebook during the 2016 U.S. election is, however, only the tip of the digital disinformation iceberg.

Online political harassment has hindered free and open journalism in Turkey, bot armies have been used to threaten the lives of activists in Mexico, and similar automated fake accounts have suppressed dissent in the Philippines. Communities around the world have not only become skeptical of the news they see online; they have also become targets of focused political trolling campaigns that prevent them from using social media as an effective means of sharing civic information.

Considering the growing body of research on this subject, it would be easy to think of computational propaganda as a broad phenomenon confined to national elections and security crises. My work at the University of Oxford and the Digital Intelligence Lab at the Institute for the Future has examined the use of political bots and digital disinformation during major elections and events in the U.K., U.S., Brazil, Taiwan, Venezuela, and many other countries. Other scholars have studied online manipulation during similarly momentous occasions in France, Georgia, Germany, Russia, Syria, and elsewhere.

But these analyses — generally built on assessments of big data from social media platforms, coupled with fieldwork among the people who make and track the tools of computational propaganda — fall short of acknowledging that digital disinformation and politically motivated trolling have become a daily affair. These studies also fail to fully illuminate the fact that already embattled social groups and single-issue voters are frequently and disproportionately the targets of online propaganda attacks. We need more research that explores the human element of computational propaganda. I am excited to tackle exactly this as an ADL Center for Technology and Society Belfer Fellow.

In the United States — during the 2016 election, in several special elections, and in the ongoing run-up to the 2018 midterms — marginalized social and issue-focused groups have been the primary targets of online propaganda attacks. In 2016, Jewish Americans and other minority groups were targeted with disinformation and anti-Semitic trolling campaigns. Russian operatives spread Facebook ads and group pages purporting to be from Black Lives Matter groups. During the 2017 Virginia gubernatorial race, automated Twitter accounts were used to stoke anger against a Latino advocacy group. In current contests for toss-up Senate seats in Arizona and Nevada, we are seeing a troubling rise in anti-immigration online propaganda.

It is time for researchers, policymakers, companies, and civil society to work to understand how computational propaganda affects minority populations. We must protect social groups and issue advocates from this burgeoning form of political manipulation and defamation because their participation is integral to a functioning democracy. If voices beyond the status quo cannot be heard, then all people run the risk of falling prey to the cynicism and polarization at the heart of computational propaganda. The consequences are far-reaching, and I look forward to finding answers as part of the ADL CTS team. Look to this space for updates.