As War Rages, X Must Curb the Spread of Misinformation and Hate

Platforms bear a particular responsibility to moderate content in a time of war, such as the one now unfolding in Israel and Gaza. Bad actors, however, are taking advantage of the tragic situation to fan the flames. This not only retraumatizes people who are victims of shocking barbarism, but also exposes others far removed from the Middle East to an increased risk of physical violence and other abuse.

On October 9, in a welcome statement, X posted about the safety measures it is taking specifically with respect to the violence in Israel and Gaza. It stated that its approach includes proactively monitoring for antisemitic content and coordinating with the Global Internet Forum to Counter Terrorism to identify and remove Hamas-affiliated accounts and other terrorist content. Regarding misinformation, X said that it has removed several hundred accounts that were manipulating its trending topics feature and that it is promoting its Community Notes function as a way of rapidly responding to false claims with crowdsourced fact-checks.

We appreciate this statement and sincerely hope that X's efforts will help combat the spread of misinformation that has the potential to incite even more violence. There is, however, more that X can do in this fight.

While we are writing about X here, we note that all social media companies will be severely tested in the days and weeks ahead: as rhetoric continues to heat up, as threats to Jews in the U.S. and around the world escalate, and as white supremacists revel in what is happening and exploit the situation to spread further hate, and potentially even violence.

We recommend that X take the following steps to better address these issues: 

1: Create a misinformation reporting category, especially for use with X's crisis misinformation policy

In September, X removed the political misinformation option from its reporting form. Community Notes by itself is not sufficient to prevent the spread of misinformation and may be prone to manipulation. Users now have no way of directly alerting X to misinformation or disinformation, political or otherwise, because no other option in X's reporting mechanism covers this category of content. X does have a stated "crisis misinformation policy" intended to apply to violent international conflicts, such as the current war in Israel and Gaza. But without a reporting option, it is unclear how a user is meant to notify the platform about content that might violate this policy.

Recommendation: X should create more robust, formal tools for flagging, labeling, and deprioritizing misinformation. X should also consider reinstating the political misinformation reporting option, even if only temporarily, and broadening it to let users report content that may fall afoul of its crisis misinformation policy.

2: Modify the verification system to allow users to identify trusted news sources in times of crisis   

On October 9, our team identified a bot account with a blue check spreading a fake video that claimed to show Hamas executing Israeli hostages. As of this writing, the video had upwards of 170,000 views with no disclaimer applied to the content. And, as noted in the recommendation above, users do not even have an option to flag the post to the company as mis- or disinformation.

The reality is that bad actors and bots are abusing the blue check system to spread misinformation and to provide a platform for terrorism.  

Recommendation: X should modify its verification system to elevate trusted sources of news and to stop rewarding accounts that spread misinformation or are bots, even when they carry a blue check.

3: Reimplement state-sponsored media labels  

In April 2023, X removed labels from state-sponsored media accounts. X's Help Center still explains how these labels provided contextual information about accounts controlled, at least in part, by official government representatives. That context helped users interpret posts from questionable sources.

The English-language account of Tasnim News Agency, for example, is an Iranian state-affiliated outlet that is currently sharing unverified information about Israel and Palestine that supports Iran's position. It is not labeled as such on X, however, making it harder for the average user to determine whether its posts are government propaganda.

Recommendation: X should reintroduce the state-sponsored media labels it removed earlier this year. Knowing which content comes from government sources helps users identify propaganda and places posts in their proper context.

4: Undo the change that removed headlines from news content

X's recent removal of headlines from linked articles allows users to attach any caption they like to the news links they post, creating more opportunities for confusion and misinformation. Linked articles now appear as bare images, making them largely indistinguishable from other photos.

Recommendation: X should restore headlines to linked articles.