
Livestreaming Violence: What Platforms Should Do

As Hamas terrorists are now threatening to livestream executions of Israeli hostages, tech companies must yet again ensure their platforms cannot be weaponized to exacerbate violence, spread mis- and disinformation, and perpetuate trauma and emotional harm.  

Livestreaming, the act of broadcasting an event from your device to a social media platform in real time, has been used for activism across the world, giving a platform to marginalized people, bringing communities together, and broadcasting social inequities and human rights abuses. But it has also been used to broadcast content that can cause deep emotional pain and spread terror, from livestreams of self-harm and suicide to the broadcast of the murder of 51 Muslim worshippers in two mosques in New Zealand. Times of crisis underline the urgency for platforms to enforce their terms of service vigorously, ensure they have adequate moderation staff, and act quickly to stem the flow of livestreamed terrorist propaganda and violence.

Livestreaming has been used by violent extremists, such as the Christchurch shooter and the January 6th insurrectionists, to broadcast graphic, hateful attacks in real time and at scale. Such content is often amplified by social media’s emotion-based algorithms, with the potential to inflict further harm on victims’ families and communities.

Most mainstream platforms have policies prohibiting graphic violent content. But these same platforms also recognize that at times it is important to be able to share such content, whether to raise awareness of newsworthy events, educate people, or document atrocities. However, there are certain red lines that need to be foundational:

  • Posting violent content for shock value or to glorify violence should be strictly prohibited. Social media platforms should strike a balance, ensuring that users exposing atrocities and speaking out also provide context and are not inciting (further) violence or using violent content to cause further trauma.
  • Some content should be prohibited regardless of context, such as sexual assault or violence against children, and videos taken or uploaded by the perpetrators (YouTube and TikTok already have these provisions in place).
  • Users must have control over what violent content they see and how. This should include sensitive content settings and age restrictions that keep some content from users under 18. More generally, sensitive content warnings and labels are important to ensure users are not harmed by encountering violent content unintentionally. (A sketch of how such a viewing gate might work follows this list.)
  • Livestreaming by terrorist organizations or other entities that platforms categorize as “dangerous organizations or individuals” under their policies against promoting violence should be strictly prohibited.
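
The viewer-facing controls described above (sensitive content settings, age restrictions, and warning labels) can be thought of as a simple decision gate applied before a video is served. The sketch below is a hypothetical Python illustration of that idea, not any platform’s actual implementation; the field names, categories, and age threshold are assumptions made for the example.

```python
from dataclasses import dataclass
from enum import Enum


class Display(Enum):
    SHOW = "show"                  # render normally
    INTERSTITIAL = "interstitial"  # blur behind a warning; viewer must click through
    BLOCK = "block"                # do not serve at all


@dataclass
class Viewer:
    age: int
    allow_sensitive_content: bool  # user-controlled sensitive-content setting


@dataclass
class Video:
    is_graphic: bool             # flagged by moderators or classifiers
    perpetrator_uploaded: bool   # e.g., attacker-filmed footage
    documentary_context: bool    # newsworthy/awareness framing supplied with the post


def display_decision(viewer: Viewer, video: Video) -> Display:
    """Hypothetical gate reflecting the red lines above."""
    # Perpetrator-filmed footage stays prohibited outright.
    if video.perpetrator_uploaded:
        return Display.BLOCK

    if not video.is_graphic:
        return Display.SHOW

    # Graphic material posted without documentary context is treated as shock content.
    if not video.documentary_context:
        return Display.BLOCK

    # Age-restrict graphic material for users under 18.
    if viewer.age < 18:
        return Display.BLOCK

    # Otherwise respect the viewer's own setting: show directly if they have
    # opted in, or place the content behind a sensitive-content warning.
    return Display.SHOW if viewer.allow_sensitive_content else Display.INTERSTITIAL
```

The specific rules matter less than the structure: each factor (who uploaded the footage, whether there is documentary context, the viewer’s age, and the viewer’s own settings) has an explicit, auditable effect on what is shown.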

Adding friction into the process, especially during heightened times of conflict, is one of the most important tools for moderating livestream content. Platforms must immediately remove livestreamed videos, as well as videos uploaded after the fact, that violate their rules, or at least quarantine them for review and possible reinstatement. In the present moment, our other crucial recommendations, described below, include requiring users to earn livestreaming access gradually, prohibiting livestream links to and from fringe sites, and imposing broadcast delays that give platforms time to detect and remove violative content.

Context is especially important right now, and comments and captions have historically been overlooked in moderation tooling. This loophole allows bad actors to attach hate, harassment, extremism, or incitement to otherwise innocuous images and videos.

We continue to urge the implementation of anti-hate-by-design principles to prevent harmful livestreaming of extremist violence. After January 6, and in the years that followed, some important and positive steps were taken by tech companies to reduce the potential for abuse of their platforms by extremists. Unfortunately, the recent, massive layoffs in the tech industry have disproportionately impacted moderation staff across major platforms.

Post-insurrection Progress  

In the wake of the January 6 insurrection, multiple technologists observed that livestreaming had become a tool to broadcast hate and an exercise in branding for white nationalists. Beyond elevating their profiles, extremists can unlock monetization: Greater violence fuels more views that fuel payment, thereby fueling still greater violence. A cycle of hate-for-profit. 

YouTube changed its livestreaming policy in the immediate aftermath of the insurrection so that users had to have at least 1,000 channel subscribers to livestream on mobile. As of late 2023, that number had been quietly lowered to just 50: users with between 50 and 1,000 subscribers can still livestream on mobile. YouTube’s current policy alludes to measures taken to protect the community from harmful content, such as limiting viewers for users with fewer than 1,000 subscribers, requiring channel verification, and requiring that accounts have no “restrictions” (strikes for violating community guidelines, commercial guidelines, or copyright) within the past 90 days. This constitutes a significant step backwards from the bright-line rule that users needed at least 1,000 subscribers to livestream on mobile, a threshold that was itself arguably too low.

Amazon’s platform Twitch (which focuses on livestreaming for gaming) publishes transparency reports, but after promising in 2021 to expand its moderation staff, it carried out large layoffs of trust and safety staff in 2023. It also announced in 2021 that it would create an advisory council of experienced users, online safety experts, and anti-bullying advocates to help improve safety on the site. As of 2023, there has been no further news about the council or any recommendations it may have submitted.

Facebook, now a product of Meta Platforms, announced in May 2019 that it would apply a “one strike” policy to its Live function. The press release announcing this change is still live in 2023, yet a review of current Facebook Live policies shows no mention of a one-strike policy. Meta Platforms does publish guidance on dangerous organizations and individuals, violence and incitement, and promoting crime, but none of those policies mention livestreaming penalties or Facebook Live.

However, we commend Meta Platforms’ specific response to the crisis in Israel, which links back to the announcement of the one-strike policy. We are hopeful that this means the rule remains in force, and we appreciate the company’s efforts to remove or mark as disturbing what it estimates to be “more than 795,000 pieces of content […] in Hebrew and Arabic” that violated community standards on violence, extremism, and related harms in the three days following the October 7 Hamas attack.

How to Build Anti-Hate-by-Design Livestreaming Services

The Center for Technology & Society has repeatedly advanced a set of product policy recommendations to prevent extremists from abusing livestreaming to further causes that are clearly in violation of platforms’ dangerous individuals/organizations policies.  

The recommendations below could help stem the abuses of livestreaming, eliminate the profit motive for extremist activity, and better moderate hateful content. 

  • Require users to earn frictionless livestreaming. Only users who reach certain benchmarks should be able to livestream without friction, especially on mobile. On most platforms this entails verification, or at least providing personal information to the platform, as well as reaching a follower threshold. Hatemongers can reach these benchmarks too, but the benchmarks give platforms leverage to exercise greater oversight when violative content is livestreamed, and they allow individuals to be held accountable rather than cloaking themselves in anonymity. (A sketch of how several of these recommendations could combine into a single check follows this list.)

  • Award access gradually. First-time or relatively new streamers should have limits on the number of viewers who can watch their videos. Those limits can be raised as streamers build a record of obeying a platform’s community guidelines.

  • Prevent livestream links to and from problematic websites. A user who follows a livestream link from a website known for being lenient toward white supremacists and other extremists should be directed away from the stream as soon as the platform recognizes such referral traffic.

  • Require more product engagement to continue livestreaming. Users who are livestreaming should be prompted with a pop-up that requires activation every 10 minutes in order to continue streaming—a difficult ask for someone engaging in violent behavior.  

  • Delay for viewers. If the user is not verified, there should be a significant delay in the broadcast. This gives moderators and AI tools enough time to act should the content violate terms of service or depict unlawful behavior.

  • More investment in tools that better capture violative livestreaming content. It should not take a massacre to weed out bad actors on a platform. Content moderation must be a continuous effort, not something done in quick response to a national tragedy.
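
Several of the recommendations above (earned access, gradual viewer limits, broadcast delays for unverified users, and periodic engagement prompts) can be expressed as a single eligibility-and-friction check evaluated when a stream starts. The sketch below is a minimal Python illustration under that assumption; the account fields, thresholds, and delay values are hypothetical, not any platform’s actual rules.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Account:
    followers: int
    is_verified: bool             # identity or channel verification
    strikes_last_90_days: int     # recent community-guideline violations
    days_since_first_stream: int


@dataclass
class StreamPolicy:
    allowed: bool
    max_viewers: Optional[int]    # None means no cap
    broadcast_delay_seconds: int  # buffer for moderators and classifiers to act
    keepalive_minutes: int        # how often the streamer must confirm a prompt


# Illustrative thresholds only.
FOLLOWER_THRESHOLD = 1_000
NEW_STREAMER_VIEWER_CAP = 200
UNVERIFIED_DELAY_SECONDS = 120
KEEPALIVE_MINUTES = 10


def evaluate_stream_request(account: Account) -> StreamPolicy:
    """Combine earned access, gradual limits, delay, and engagement checks."""
    # Recent violations suspend livestreaming outright (a "one strike"-style rule).
    if account.strikes_last_90_days > 0:
        return StreamPolicy(allowed=False, max_viewers=0,
                            broadcast_delay_seconds=0, keepalive_minutes=0)

    # Earned access: below the follower threshold, or without verification,
    # streaming is capped and delayed rather than frictionless.
    if account.followers < FOLLOWER_THRESHOLD or not account.is_verified:
        viewer_cap: Optional[int] = NEW_STREAMER_VIEWER_CAP
        delay = UNVERIFIED_DELAY_SECONDS if not account.is_verified else 30
    else:
        viewer_cap = None
        delay = 0

    # Gradual access: very new streamers get an even tighter viewer cap.
    if account.days_since_first_stream < 30 and viewer_cap is not None:
        viewer_cap = min(viewer_cap, 50)

    return StreamPolicy(allowed=True, max_viewers=viewer_cap,
                        broadcast_delay_seconds=delay,
                        keepalive_minutes=KEEPALIVE_MINUTES)
```

Under these illustrative numbers, an unverified account with 300 followers and a clean record could stream, but only to a capped audience, with a two-minute broadcast delay and a prompt to confirm every ten minutes, which is exactly the kind of friction described above.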