Graphic Violence Policy, Enforcement, and Features Checklist

Since the outbreak of the Israel-Hamas War, ADL has been closely monitoring and addressing the overt antisemitic and Islamophobic hatred online, the spread of misinformation, and the dissemination of content depicting unspeakable violence. One issue we’re paying particularly close attention to is how platforms are handling graphic violence.  

Graphic violence can be disturbing and potentially traumatic to viewers. It raises valid ethical concerns about the dignity of victims whose harm is being broadcast on an international stage. It can also stoke divisiveness and lead to increased identity-based violence. However, when journalists, aid workers, and the public share footage of violence in certain contexts, it often serves as a means to expose atrocities and promote human rights awareness. 

Social media companies must consider several questions when developing a policy to manage the proliferation of violent content: How should platforms moderate content that is graphic but not violent (for instance, educational depictions of blood)? Or violence that isn’t graphic? Or content that is both graphic and violent, but also fictional? To what extent should intent and context matter to moderators tasked with these decisions? And under what circumstances should platforms ramp up their moderation efforts in response to election periods or international crises? 

Over the last two months, CTS (ADL’s Center for Technology and Society) has been persistent in its calls for stronger rapid-response mechanisms amid the Israel-Hamas War. We believe that tech companies should be even more vigilant than usual about graphic violence circulating on their platforms, since the real-world stakes are severe: this content can incite violence and endanger others. In these cases, platforms should be diligent about responding to ongoing crises and alerting the public to responsive changes in their policies. We recommend that platforms implement proactive measures to promote user safety, especially during times of war, human rights abuses, and other atrocities taking place in real time and garnering significant media attention. For more information on how platforms can promote safety amid livestreamed violence, see CTS’ recent piece, Livestreaming Violence: What Platforms Should Do. 

Different platforms serve different purposes and audiences, and as a result their graphic violence policies may vary. There is clearly room for platform and moderator discretion in how the factors above are weighed, but each of them should be addressed by a clear-cut policy. This helps ensure that a policy maintains the crucial balance between the free expression that exposing atrocities requires and the preservation of user safety. 

In light of these complex questions, CTS has developed a checklist for social media platforms. The list includes key considerations and recommendations for a robust graphic violence policy that fosters open discourse and transparency, without glorifying violence or subjecting unwilling users to it. 

Checklist

Accessibility, Clarity, and Transparency 

A platform’s policy on graphic violence (as well as on any other topic) should be presented in a way that is easily accessible to all users, and worded clearly so that users can understand it. To address this, graphic violence policies should include the following, along with a clear method for reporting content that violates the policy: 

Clear Definition of Graphic Violence: Our policy clearly defines what constitutes graphic violence and makes clear distinctions between acceptable and unacceptable content. 

Prohibition of Graphic Violence: Our policy explicitly states that our platform does not allow graphic violence in any form. 

Exception Clause: Our policy defines exceptions where certain forms of graphic violence may be allowed for purposes such as education, news reporting, or art, with strict criteria (see Standards for Intent and Context, below). 

Transparency Around Enforcement: 

  • Our policy describes how we deal with specific types of content that violate the graphic violence policy, be it through takedown, limitations on algorithmic amplification, sensitive content flags, or other measures. 

  • Our policy outlines consequences for users who repeatedly violate the policy and delineates when temporary and permanent account suspension applies (a brief illustrative sketch of one such escalation scheme follows this list). 

  • Our policy describes the review process for reported content, including the role of human moderators and automated moderation systems. 
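
How these enforcement mechanics fit together will differ from platform to platform. As a purely illustrative sketch, not a description of any particular platform’s system, the following Python snippet shows one way the enforcement actions and escalating account consequences described above could be represented; the action names and strike thresholds are assumptions chosen for the example.

```python
from enum import Enum, auto

class EnforcementAction(Enum):
    """Possible responses to content that violates a graphic violence policy."""
    TAKEDOWN = auto()             # remove the content entirely
    LIMIT_AMPLIFICATION = auto()  # exclude the content from algorithmic recommendation
    SENSITIVE_FLAG = auto()       # keep the content up behind a warning interstitial
    NO_ACTION = auto()

# Hypothetical escalation thresholds; a real policy would publish its own.
TEMP_SUSPENSION_STRIKES = 3
PERM_SUSPENSION_STRIKES = 5

def account_consequence(confirmed_violations: int) -> str:
    """Map a user's number of confirmed violations to an account-level consequence."""
    if confirmed_violations >= PERM_SUSPENSION_STRIKES:
        return "permanent suspension"
    if confirmed_violations >= TEMP_SUSPENSION_STRIKES:
        return "temporary suspension"
    return "warning"

# Example: a user with four confirmed violations receives a temporary suspension.
print(account_consequence(4))
```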

Standards for Intent and Context

An ideal policy should consider both the poster’s intent in sharing graphic content and how a reasonable audience is likely to understand the content. Extremely graphic violent content shared with the intent of generating shock alone should not have a place on social media platforms. To address this, platforms should consider whether their graphic violence policy has the following: 

User Intent Standard: Our policy takes user intent into account and doesn’t allow graphic content designed to harass or intimidate viewers, or to encourage, glorify, or incite violence. 

Reasonable Audience Standard: Our policy takes into account how a reasonable audience will understand a piece of content and does not allow content that can be reasonably understood to be harassment, glorification of violence, or incitement. 

Warnings and Labels 

Platforms should incorporate features that create friction around sensitive content, such as warnings, labels for graphic content, and age restrictions. To address this, platforms should consider whether their product features include the following (a brief illustrative sketch follows these items): 

Warning Labels: Our platform allows users to mark their own content as sensitive when they post, and it creates warning labels when our automated detection system or human moderators detect sensitive content.

Age Restrictions: Our platform implements age restrictions to prevent minors from accessing graphic violent content. 
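
The Python sketch below illustrates how these two features might interact. It is not a description of any platform’s actual system; the field names, detector threshold, and minimum age are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_marked_sensitive: bool  # the poster self-flagged the content when posting
    detector_score: float          # 0-1 score from an automated detection system (assumed)
    moderator_flagged: bool        # a human moderator marked the content as graphic

SENSITIVITY_THRESHOLD = 0.8  # hypothetical cutoff for the automated detector
MINIMUM_AGE = 18             # hypothetical age gate for graphic violent content

def needs_warning_label(post: Post) -> bool:
    """A warning label applies if the author, the detector, or a moderator flags the post."""
    return (post.author_marked_sensitive
            or post.detector_score >= SENSITIVITY_THRESHOLD
            or post.moderator_flagged)

def can_view(post: Post, viewer_age: int) -> bool:
    """Labeled content is withheld from viewers below the minimum age."""
    return not needs_warning_label(post) or viewer_age >= MINIMUM_AGE

# Example: a detector-flagged post gets a label and is hidden from a 16-year-old viewer.
post = Post(author_marked_sensitive=False, detector_score=0.93, moderator_flagged=False)
print(needs_warning_label(post), can_view(post, viewer_age=16))  # True False
```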

User Controls

Individual users have different sensitivities, so platforms should proactively empower users to tailor their experience to their needs, especially with respect to how much violent content they see. To address this, platforms should consider whether they have the following user controls, and ensure that they promote these features to users through periodic reminder notifications (a brief illustrative sketch of such controls follows these items): 

Content Filtering and Categories: Our platform provides a system by which users can select their preferred level of exposure to content categories, including violence, nudity, and other sensitive topics. 

Hashtag and Keyword Blocking: Our platform allows users to block content containing sensitive hashtags and keywords of their choice from their user experience. 

Safe Mode: Our platform includes a safe mode feature that limits the visibility of graphic content for users who desire a more sanitized online experience; this feature is easy to opt into or out of upon sign-up and at any point thereafter.
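
As a final illustration, the Python sketch below shows one way the three controls above might combine into a single feed decision. It is an assumption-laden example rather than any platform’s implementation; the category names, preference fields, and "hide"/"warn"/"show" outcomes are invented for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class UserPreferences:
    # Per-category exposure level chosen by the user: "hide", "warn", or "show".
    category_settings: dict = field(default_factory=lambda: {
        "violence": "warn",
        "nudity": "hide",
    })
    blocked_keywords: set = field(default_factory=set)  # user-chosen keywords and hashtags
    safe_mode: bool = False                             # opt-in sanitized experience

def feed_decision(text: str, categories: list, prefs: UserPreferences) -> str:
    """Return 'hide', 'warn', or 'show' for a piece of content, given a user's controls."""
    lowered = text.lower()
    if any(keyword.lower() in lowered for keyword in prefs.blocked_keywords):
        return "hide"
    if prefs.safe_mode and categories:
        return "hide"  # safe mode limits the visibility of all sensitive categories
    decisions = [prefs.category_settings.get(category, "show") for category in categories]
    if "hide" in decisions:
        return "hide"
    if "warn" in decisions:
        return "warn"
    return "show"

# Example: with the defaults above, violent content is shown behind a warning.
prefs = UserPreferences(blocked_keywords={"#graphic"})
print(feed_decision("breaking news footage", ["violence"], prefs))  # -> "warn"
```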