Moderating Content During War: What Online Platforms Should Do Now

People are going online in large numbers to help make sense of the war in Israel and Gaza. Since the start of Hamas’ invasion and massacre of innocent civilians, we have seen a surge of antisemitism, hate, disinformation, misinformation, and propaganda spread across social media and messaging platforms. And we know from experience that this type of content can translate into verbal and physical antisemitic attacks in places far removed from the battlefield.

We recognize the incredible challenge of moderating online content during times of heightened conflict, especially in wartime. But we are also well aware of social media’s ability to exacerbate conflict, spread mis- and disinformation, and promulgate antisemitic tropes and conspiracy theories. Right now, bad actors are glorifying violence, re-traumatizing victims, and inciting hatred, and even leading platforms are failing to stem the flow.

At the same time, over-moderation can undermine the increasingly important role digital platforms play during conflict in documenting human rights abuses, condemning atrocities, appealing to the international community for action, and crowdsourcing relief and assistance. But there are steps all social media companies can take right now to help ensure that, at a minimum, they are not fanning the flames.

Here are seven things that we call on social media companies to do immediately: 

  1. Surge trust and safety resources (both human and AI) to ensure platforms enforce their own rules around hate speech and violence. This includes monitoring for content that might incite violence. Platforms need to be as transparent as possible with users who report violative content, and should reverse decisions when over-moderation occurs. Since time is of the essence both to mitigate the spread of violative content and to ensure good actors are not silenced, platforms need to enforce their content moderation policies fairly and consistently, and to explain how and why they took action, or why they did not.
     

  2. Promote trusted, reliable news sources. The difficulty of finding accurate information during a fast-moving crisis, and the potential for bad actors to exploit the situation on social media, are well documented. This makes it even more critical that companies prioritize fact-based journalists reporting on the Middle East. Platforms should also label unverified or potentially misleading stories and direct users to verified sources.
     

  3. Employ emergency measures, sometimes known as “break-glass” tools, to prevent a surge of misinformation and violent content surrounding major world events. Platforms clearly have the ability to monitor user activity and act swiftly to throttle problematic content and prevent its spread. During the 2020 US elections, for example, Facebook announced it would proactively moderate content likely to fuel the spread of disinformation or incite violence. This included measures such as limiting the number of messages a user could forward within a short time period.
     

  4. Coordinate and share relevant information and trends across platforms to curb the spread of nefarious content. The largest social media platforms are members of the Global Internet Forum to Counter Terrorism (GIFCT), where they cooperate in identifying and taking down terrorism-related content. The GIFCT model shows that companies already work together to combat the worst societal harms. The same model could be used here to communicate trends across platforms and, ultimately, facilitate more effective cross-platform operations.
     

  5. Build in friction for violent videos and images. We know that real-time content moderation is truly difficult during war. On the one hand, we do not want people to glorify violence, spread misinformation, and potentially exacerbate or incite violence. On the other, it is important to document the very real atrocities carried out by Hamas. Given these trade-offs, every platform should at least add friction, such as warning labels, to help curb the spread of false content or content intended to incite violence. This can also protect users from content that may be traumatizing and hard to verify in a fast-moving crisis.
     

  6. Ensure that neither people nor platforms profit from posting violence. Premium subscribers on X may, for example, earn money based on the engagement that their posts receive as one of the perks of subscription. But X’s policies do not allow users to monetize any content that violates its Safety, Authenticity, and Privacy rules. The monetization standards explicitly prohibit monetizing “depictions of excessively violent or graphic content.” YouTube creators who belong to the Partner Program must also follow the platform’s Community Guidelines, which prohibit violent or graphic content including “terrorist attack aftermath … with the intent to shock or disgust viewers.” While there is journalistic and documentary value in sharing visual evidence of the violence in Israel and Palestine, and it isn’t always easy to determine intent, platforms must enforce their policies diligently so that users are not encouraged or incentivized to post gore in an effort to profit from atrocities.
     

  7. Determine legitimate and illegitimate public-interest exceptions. Most major social media platforms have rules that allow otherwise violative content to remain online if it serves the public interest. These rules are what allow important content, such as the footage we have seen coming out of Israel showing the raw brutality of Hamas’ invasion, to remain online. However, pure incitement is not in the public interest. For example, yesterday X acknowledged that the Supreme Leader of Iran posted something that violated its policies, and subsequently limited the visibility of the post but kept it accessible, arguing it was in the public interest. The post has garnered over eight million views. Platforms need to balance legitimate public interest with the potential for posts from influential accounts to incite violence. We believe that in this case Ayatollah Khamenei’s post only serves to inflame the potential for further violence and that there is no legitimate reason to keep it accessible.