June 27, 2019
By Grant R. Vousden-Dishington | Research Software Engineer, ADL's Center for Technology and Society
From the moment the attacker began live-streaming his lethal assault on the Al-Noor Mosque and Linwood Islamic Center in Christchurch, New Zealand, the significance of social media in driving the conversation about the incident was inevitable. Despite local law enforcement alerting Facebook to the live broadcast of the attack on its platform, and rapid action by Facebook to prevent its spread, videos and images of the attack proliferated virally across multiple platforms - a spread that better upfront design could have mitigated. On Twitter, content concerning Islam and Muslim topics swelled to 10 times its normal daily volume within hours of the news breaking, according to ADL’s analysis. While much of this content came from people expressing their shock and sadness, some used the white supremacist attack on Muslim worshipers as an opportunity to talk about violence conducted by Muslims rather than against them, and accounts that appear to be automated bots amplified this skewed message exponentially.
In collaboration with the social media monitoring start-up PushShift, ADL’s Center for Technology and Society (CTS) collected and analyzed over 17 million tweets from the period of March 7-21, 2019, from one week before until one week after the attack. We used search terms related to the incident, including hashtags such as #Christchurchmosqueattacks, #NewZealandshooting, and #prayforchristchurch; terms related to Islam, such as “muslims” and “mosque”; and references to known white supremacist ideas, such as “white genocide” and “great replacement.” Altogether, we found 2 million matching tweets in the week before the attack and 15 million in the week after, with nearly all of the latter captured in real time, permitting us to examine even tweets that have since been deleted. The data paint a pessimistic picture of online conversation in the wake of a white supremacist’s mass murder of Muslims, revealing both deliberate attempts to alter that conversation and the inability of social media platforms to halt the spread of disinformation.
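As an illustration of the filtering step, the sketch below shows one way such keyword and hashtag matching could work. The terms are only a subset of those listed above, and the matching logic is our own simplification for illustration; the actual collection was performed through PushShift’s infrastructure.

```python
import re

# Illustrative subset of the search terms described above; the full term
# list and PushShift's actual matching logic are assumptions here.
HASHTAGS = {"#christchurchmosqueattacks", "#newzealandshooting", "#prayforchristchurch"}
KEYWORDS = {"muslims", "mosque", "white genocide", "great replacement"}

def matches_collection_terms(tweet_text: str) -> bool:
    """Return True if a tweet matches any hashtag or keyword of interest."""
    text = tweet_text.lower()
    if any(tag in text for tag in HASHTAGS):
        return True
    # Word-boundary match so "mosque" doesn't fire on unrelated substrings.
    return any(re.search(rf"\b{re.escape(kw)}\b", text) for kw in KEYWORDS)

# Example
print(matches_collection_terms("Heartbroken for the mosque victims #PrayForChristchurch"))  # True
```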
Contrary to expectations, the most frequently shared links in this collection of tweets weren’t about the innocent lives lost in the attack or how to help the local community. Instead, articles from Gateway Pundit, Breitbart, and the Christian Post accounted for the three most-shared links. All three focused on a conflict that has existed in Nigeria for decades between the pastoralist herder communities -- chiefly the mostly-Muslim Fulani ethnic group -- and the mostly-Christian, less-nomadic farmer communities of the Middle Belt region. Deadly clashes in the last few years have claimed thousands of lives, with tensions inflamed by a combination of loss of arable land, rising populations, human migratory patterns, and, to some degree, religious differences between the groups.
The Christian Post was the first of the three aforementioned outlets to cover the recent fatal incidents in Nigeria, in an article published on March 14 that made no reference to any party’s Islamic affiliation. The articles from Breitbart and Gateway Pundit, however, went beyond the Christian Post’s account by emphasizing the Fulani’s religion in an apparent attempt to blame Islamic beliefs for the violence against the Christian groups. For instance, the Breitbart piece interchangeably labels the perpetrators “Fulani jihadists” and “Muslim extremists,” and the Gateway Pundit story focuses on the genocide of Christians by “radical Muslim militants” while featuring a header image of militants that has been used since at least 2013 by various far-right news sources and has no established connection to the crises in Nigeria.
Chart 1: Number of URL Shares by News Source
National news outlets in the U.S. and U.K. have written hundreds of articles about the Nigerian conflict in the last two years. Right-leaning outlets such as Gateway Pundit and Breitbart have also taken significant interest in these events, focusing on the religious aspect of the conflict and describing the attacks as a “Christian genocide” committed by the mostly-Muslim Fulani herders. An investigation by Snopes found this framing questionable, especially in Breitbart’s reporting, with Ohio University political science professor Brandon Kendhammer noting that the perpetrators have also committed attacks “where nearly all the farmers and victims are also Muslim.” To get a more comprehensive understanding of media coverage, CTS used the MIT MediaCloud to examine all stories related to the conflict published in the online press since the start of 2017, enabling us to compare coverage from different types of outlets.
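As a rough sketch of this kind of survey, the snippet below counts matching stories through Media Cloud’s public API, assuming the v2 “stories_public/count” endpoint as documented around the time of this analysis. The query terms and date filter are illustrative assumptions rather than the exact query CTS ran, and a free API key from mediacloud.org is required.

```python
import requests

# Count stories about the Nigerian farmer-herder conflict since 2017.
# Endpoint shape, query, and date filter are assumptions for illustration.
API_KEY = "YOUR_MEDIACLOUD_API_KEY"  # placeholder

response = requests.get(
    "https://api.mediacloud.org/api/v2/stories_public/count",
    params={
        "q": 'Nigeria AND (Fulani OR herder OR "Middle Belt")',
        "fq": "publish_date:[2017-01-01T00:00:00Z TO 2019-03-21T00:00:00Z]",
        "key": API_KEY,
    },
)
print(response.json())  # e.g. {"count": <number of matching stories>}
```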
Though the majority of online articles regarding the violence in Nigeria have come from mainstream sources, the stories shared on Twitter alongside coverage of the Christchurch mosque attacks did not follow the same pattern. Of the 25 Nigeria-related links shared more than 100 times during this two-week period, none came from mainstream sources, even though mainstream coverage was easy to find through a simple web search; virtually none of the tweets containing links to mainstream sources were about Nigeria. By contrast, links from Gateway Pundit, Breitbart, JihadWatch, BizPacReview, Big League Politics, and other “alternative” sources each received hundreds of shares. All of the aforementioned outlets published articles regarding the Nigerian farmer-herder conflict less than 48 hours after the attack in Christchurch, when most audiences were first hearing of the horrific number of fatalities. Of the five million distinct accounts in the full dataset, more than 28,000 unique Twitter accounts participated in spreading the top three stories through retweets and their own original content. Nor is the connection between these articles and the Christchurch conversation merely implicit: the top 10 hashtags associated with each of the three stories included #NewZealand, #NewZealandShooting, and #Christchurch. Such associations point to an intentional disinformation effort.
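Tallies like those in Chart 1 reduce to grouping shared links by domain. The sketch below shows a minimal version of that aggregation; the example URLs are fabricated placeholders, not records from our dataset.

```python
from collections import Counter
from urllib.parse import urlparse

def outlet(url: str) -> str:
    """Extract a bare domain name to stand in for the news outlet."""
    host = urlparse(url).netloc
    return host[4:] if host.startswith("www.") else host

# Fabricated example records; in practice these would be the expanded-URL
# fields of the collected tweets.
shared_urls = [
    "https://www.thegatewaypundit.com/2019/03/example-story/",
    "https://www.breitbart.com/africa/2019/03/example-story/",
    "https://www.thegatewaypundit.com/2019/03/example-story/",
]

shares_by_outlet = Counter(outlet(u) for u in shared_urls)
for domain, count in shares_by_outlet.most_common():
    print(domain, count)
```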
To investigate the role of botlike Twitter account activity in amplifying this story right after the Christchurch mosque attacks, we applied the peer-reviewed Botometer API, formerly known as “BotOrNot,” to the 28,420 unique Twitter accounts that shared these three links[1], rating how botlike each account is - that is, how likely it is to be automated or semi-automated while appearing to be an organic Twitter user. Of these accounts, 3,283 (11.6%) are more likely than not to be bots, and they collectively tweeted 4,122 links to these three stories, accounting for 11.8% of their total broadcasts. The data thus suggest that a relatively small number of accounts helped derail online conversations in the wake of one of the most lethal attacks on the Muslim community in modern history. These assessments of botlike activity should be taken cautiously, but they likely undercount the number of bots, given empirical evidence that Botometer estimates botlike behavior conservatively. Perhaps more importantly, they also mean the vast majority of the content was spread through apparently organic means, so countering malicious content cannot simply be reduced to addressing automated social media manipulation.
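The sketch below shows how such scoring could be done with Botometer’s Python client, assuming Twitter API credentials and a RapidAPI key. The handles are hypothetical, the response fields vary across Botometer versions, and the 0.5 cutoff mirrors the “more likely than not” criterion used above.

```python
import botometer

# All credentials and handles below are placeholders. The "scores" field
# shown matches Botometer v3-era responses and may differ in later versions.
twitter_app_auth = {
    "consumer_key": "...",
    "consumer_secret": "...",
    "access_token": "...",
    "access_token_secret": "...",
}
bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key="YOUR_RAPIDAPI_KEY",
    **twitter_app_auth,
)

accounts = ["@example_account_1", "@example_account_2"]  # hypothetical handles
likely_bots = []
for screen_name, result in bom.check_accounts_in(accounts):
    if "error" in result:  # private, suspended, or deleted accounts (see [1])
        continue
    if result["scores"]["universal"] > 0.5:  # language-independent bot score
        likely_bots.append(screen_name)

print(f"{len(likely_bots)} of {len(accounts)} accounts scored as likely bots")
```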
As a social media company that has addressed disinformation in the past, Twitter has not only the ability to stem the spread of inauthentic content but a public mandate to do so. In a Pew Research Center survey from December 2016, 88% of U.S. adults said fake news is sowing confusion, and 23% even acknowledged propagating fake news themselves. These attitudes are echoed in ADL’s Online Harassment survey, in which approximately 80% of respondents supported social media companies labeling bots on their platforms. Twitter can address this problem by taking steps to inform users when they retweet or engage with unreliable information, especially if the account involved is verified or has a high number of followers. Furthermore, Twitter could label accounts that are at least semi-automated by requiring those accounts’ users to identify them as such in account settings, with potential limitations for noncompliance, such as lower priority for the accounts’ tweets in reply threads or user timelines. Such transparency would alleviate concerns about disinformation and allow typical users to make more informed decisions about how they engage with Twitter content.
[1] 625 of the accounts were private, suspended, or otherwise inaccessible at the time of labeling, meaning Botometer could not appraise them. As a result, we analyzed 27,795 of those Twitter accounts.