January 19, 2021
The day after the violent attack on the U.S. Capitol on January 6, ADL called on Twitter to suspend QAnon-supporting accounts because they promoted violence, undermined democracy, and spread misinformation. QAnon, an antisemitic global conspiracy theory claiming that pedophile rings control world governments, including the United States, played a key role in the assault. On January 8, a day after ADL made its demand, Twitter suspended the accounts of prominent QAnon supporters Michael Flynn and Sidney Powell, among others. Mr. Flynn is President Trump’s former National Security Advisor, who twice pled guilty to lying to the FBI and was recently pardoned by the president. Ms. Powell is the president’s former lawyer, who filed a series of lawsuits contending that the 2020 elections were fraudulent; her account had over one million followers.
Following ADL’s call, Twitter announced the suspension of more than 70,000 accounts dedicated to sharing QAnon content, a response to mounting public pressure over the platform’s role as a breeding ground for the conspiracy theory. Indeed, in just over three years, QAnon made its way from fringe 4chan message boards to presidential rallies and an insurrection at the Capitol, in no small part due to the activity of its adherents on platforms like Twitter.
Immediately following the suspension of QAnon-related accounts on January 8, the use of QAnon-related hashtags plunged by 73%, and it is currently 97% lower than at the height of the spike that followed the Capitol insurrection. Moreover, 11 days after Twitter suspended the accounts, hashtag frequency remains 93% lower than the pre-insurrection baseline.
To evaluate the effectiveness of Twitter’s recent QAnon policy enforcement changes, we pulled tweets matching a list of QAnon-sympathetic hashtags from December 19, 2020 to January 19, 2021. While this list included many of the most prominent hashtags, it is not comprehensive, and new QAnon-sympathetic hashtags continue to crop up and gain popularity over time. To better visualize the noisy data, we calculated a moving average of tweet frequency per four-hour interval. A baseline for QAnon-sympathetic hashtags was derived from the mean hashtag frequency during the period (shown in green) from December 19 to January 5, the morning of the U.S. Senate runoff elections in Georgia. The percent increase or decrease from this baseline is shown in the figure above.
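As a concrete illustration, here is a minimal sketch in Python/pandas of the computation described above, assuming a DataFrame of tweets with a timestamp column and a per-tweet list of hashtags. The hashtag list, column names, and smoothing window are illustrative assumptions; the article does not specify the exact pipeline used.

```python
import pandas as pd

# Partial, illustrative hashtag list; the actual list was larger.
QANON_HASHTAGS = {"qanon", "wwg1wga", "thegreatawakening"}

def hashtag_frequency(tweets: pd.DataFrame) -> pd.Series:
    """Count QAnon-sympathetic tweets per four-hour interval.

    Assumes `tweets` has a datetime "timestamp" column and a
    "hashtags" column holding a list of hashtag strings per tweet.
    """
    is_qanon = tweets["hashtags"].apply(
        lambda tags: bool(QANON_HASHTAGS & {t.lower() for t in tags})
    )
    return (
        tweets.loc[is_qanon]
        .set_index("timestamp")
        .resample("4H")   # tweet frequency per 4-hour interval
        .size()
    )

def percent_change_from_baseline(counts: pd.Series, window: int = 6) -> pd.Series:
    """Smooth the counts with a moving average, then express each
    interval as a percent increase or decrease from the baseline
    (mean frequency from Dec 19 to the morning of Jan 5)."""
    smoothed = counts.rolling(window=window, min_periods=1).mean()
    baseline = smoothed.loc["2020-12-19":"2021-01-05"].mean()
    return (smoothed - baseline) / baseline * 100.0
```

The window size here is a smoothing choice of ours; any reasonable value yields the same qualitative picture of a spike around January 6 followed by a collapse after the suspensions.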
Both the persistently high volume of QAnon hashtags and the spike around the insurrection could have, and should have, prompted quicker, more proactive action by Twitter to stem the virulent spread of QAnon.
In the roughly two and a half weeks preceding the Capitol insurrection, we found at least 24,700 tweets that were easily identifiable as sympathetic to QAnon, a number large enough to have merited attention from Twitter. We found an additional 7,200 such tweets after Twitter’s January 8 action began. Furthermore, many of these tweets originated from high-visibility accounts with large followings and the power to shape online conversations. The dataset contains 656 accounts with more than 10,000 followers that used QAnon-sympathetic hashtags; 240 of these accounts publicly supported QAnon on repeated occasions. Combined, the accounts in this dataset had roughly 38.6 million followers.
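For illustration, a hedged sketch of how those account-level figures could be derived from the same pull, assuming a per-account table with follower counts; the column names and toy values below are assumptions, not the actual dataset.

```python
import pandas as pd

# Toy stand-in for the per-account table derived from the tweet pull;
# values and column names are illustrative assumptions.
accounts = pd.DataFrame({
    "account_id": ["a1", "a2", "a3", "a4"],
    "followers": [12_500, 980, 45_000, 2_300],
})

# Accounts with more than 10,000 followers that used the hashtags.
high_visibility = accounts[accounts["followers"] > 10_000]
print(f"{len(high_visibility)} high-visibility accounts")

# Combined follower reach of every account in the dataset.
print(f"combined reach: {accounts['followers'].sum():,} followers")
```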
We have seen more enforcement of Twitter’s policies against extremist content in the past few days than we saw over multiple years. And the results thus far are striking. Clearly, Twitter can successfully act on violative content when moved to do so.
The data also illustrates the platform’s alarming lack of consistency. Last July, for example, Twitter announced it would permanently suspend QAnon accounts. The company subsequently claimed that QAnon-related content and accounts dropped by more than 50% as a result, and it also announced additions to its coordinated harmful activity policy. A drop of more than 50% sounded good, but with prominent accounts left untouched, it was insufficient. Twitter did not enforce its QAnon policy with fidelity, and the data in the lead-up to the attack on the Capitol shows that its actions fell far short.
It is one thing for tech companies to enforce policies in response to a crisis, but it is another to enforce policies over time once the emergency subsides. The attack on January 6 and other extremist actions are the predictable result of the longstanding spread of extremist conspiracy theories and incitement to violence. To be effective at content moderation, platforms need to enforce their policies consistently and at scale, and not sporadically in reaction to crises.
Twitter and other social media platforms must not wait until a dangerous conspiracy theory becomes widespread, let alone bursts into violence, to take action. When a platform announces a policy, the company must be prepared to follow through and to be held accountable. By the time Twitter finally began removing QAnon-supporting accounts on January 8, the consequences of misinformation had already become frighteningly real, as we watched a lethal assault on the nation’s Capitol.