
How Platforms Can Stem Abuses of Livestreaming After the Storming of the Capitol


January 15, 2021

A livestream video shot by a newly elected West Virginia lawmaker shows a disturbing gap between what a mob of Trump supporters who attacked the Capitol thought about the events of January 6 and what the rest of the world actually saw.

Derrick Evans, a freshman member of the West Virginia House of Delegates, viewed fellow rioters swarming around him and forcing entry into the Capitol as a celebration. He was visibly giddy. But for horror-struck viewers, it was a violent act of sedition that led to the deaths of five people and marked one of the most shameful days in American history. Mr. Evans was later arrested and resigned from his post. He faces federal charges for his role in the attack.

Livestreaming has become a tool to broadcast hate and, as aptly described by Ben Smith in his latest column for The New York Times, an exercise in branding for white nationalists. Not only can extremists draw attention to themselves, but they can also earn money by airing their exploits for their followers, creating a feedback loop that rewards hate.

On the livestreaming site Dlive, Tim Gionet, a white nationalist who uses the alias “Baked Alaska,” earned $2,000 for broadcasting himself entering congressional offices for his more than 16,000 fans. Mr. Gionet has shown up on the white supremacist radar multiple times; he also attended the Unite the Right white supremacist rally in Charlottesville in 2017, an event that resulted in the death of Heather Heyer.

Mr. Gionet received that January 6 money in “lemons,” Dlive’s on-platform currency that followers use to tip streamers and that can be converted into actual money. More than 150,000 people watched Dlive on January 6, and more than 95% of those views went to far-right streamers. Dlive has since suspended the Baked Alaska channel.

Platforms have done too little to moderate the growing problem of extremist livestream content. Instead, companies respond reactively rather than creating sensible, proactive policies that prevent extremists from broadcasting hate in the first place.

In the wake of the 2019 Christchurch, New Zealand mosque shootings, which the killer initially broadcast on Facebook, and the synagogue shootings in Poway, California, and Halle, Germany, ADL recommended various ways platforms can curb the potential abuses of livestreaming. What have companies done since then?

In response to mounting public pressure after these shootings, YouTube changed its livestreaming policy so that users must have at least 1,000 channel subscribers to livestream from a mobile device; users with fewer subscribers can still livestream through a computer and webcam. This raises the question of why those with smaller subscriber bases are allowed to stream from computers and webcams at all. A minimum-subscriber requirement is a useful policy, but it should apply across all devices, not just mobile.

Moreover, a threshold as low as 1,000 channel subscribers does not address performance crime. Any demagogue can easily amass that many followers.

For its part, Facebook announced on May 14, 2019, that it would apply a “one strike” policy to its Live function. “From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time—for example 30 days—starting on their first offense,” the company posted. “For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time.” Facebook’s vague commitment to restricting people who violate its policies from using Live for “set periods of time” is another example of its often opaque community guidelines. Furthermore, a 30-day suspension from Live is a slap on the wrist, especially for someone who violates Facebook’s “most serious policies.”

Last year, Amazon’s livestreaming platform Twitch announced that it had doubled the size of its safety operations team and added new tools to assist its volunteer moderators after the Halle, Germany killer used the platform to stream his crimes. It also announced that it would create an advisory council of experienced users, online safety experts and anti-bullying advocates to help improve safety on the site, and it has since done so. But this hardly constitutes a policy. We still have no idea how many platform-wide or community moderators Twitch has, what kind of training they undergo, or how many hours per week they work. How does Twitch determine whether, in light of these changes, the platform is safer? What data does it track? Twitch is the only major platform that has yet to issue any transparency report. Its policy on livestreaming, or lack thereof, raises more questions than it answers.

On January 9, Dlive announced that it had suspended the accounts of seven far-right streamers and planned to refund the lemons donated to these channels on or after January 6. These “donors” do not deserve to get their money back, and they should be suspended from Dlive for financing hate. These followers generated demand for content related to the violent storming of the Capitol and created incentives for the people behind the livestreaming channels to engage in extremist activity. Playing whack-a-mole with bad actors or individual livestreams does not get at the fact that Dlive has created an ecosystem in which extremists jockey for attention in part because it lines their pockets. Dlive, it should be noted, takes a 25% cut of every “donation.” As evidenced by the high traffic it saw last week, its profits spike with the influx of far-right channels and their fans. As we’ve seen over and over, facilitating and spreading extremist content on a platform is part of the business model: a feature, not a bug.

Below is a list of recommendations to stem the abuses of livestreaming, eliminate the profit motive for extremist activity, and better moderate hateful content.

  • Only verified users can livestream. Users who provide personal information to livestreaming platforms cannot cloak themselves in anonymity and can be held accountable.
  • Award access gradually. First-time or relatively new streamers should face limits on the number of viewers who can watch their videos; those limits can be raised once a streamer builds a record of obeying a platform’s community guidelines (a simple sketch of how such gates might work follows this list).
  • Prevent livestream links to and from problematic websites. A user who follows a livestream link from a website known for being lenient toward white supremacists and other extremists should be redirected away from the stream once the livestreaming platform recognizes the referring site.
  • Require more product engagement to continue livestreaming. Users who are livestreaming should be prompted every 10 minutes with a pop-up they must click to continue streaming. This forces the livestreamer to stay involved with the stream, which would be difficult if they are actively engaged in criminal activity.
  • Delay for viewers. If the user is not verified, there should be a significant delay in the broadcast. This gives moderators and AI tools enough time to act should the content violate terms of service or show unlawful behavior.
  • More investment in tools that better capture violative livestreaming content. It should not take a murderous riot to weed out bad actors on a platform. Content moderation must be a continuous effort, not something done in quick response to a national tragedy.
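
To make the verification, gradual-access, and broadcast-delay recommendations concrete, below is a minimal sketch, in Python, of how a platform might gate a livestream before it goes out. Every name, field, and threshold here (StreamerAccount, livestream_policy, the viewer caps, the delay) is an illustrative assumption rather than any platform's actual API; a real system would tune these values and pair them with human review.

```python
from dataclasses import dataclass

# Illustrative thresholds -- assumptions for this sketch, not any platform's real values.
NEW_STREAMER_VIEWER_CAP = 50          # viewer limit for new, unproven streamers
TRUSTED_VIEWER_CAP = 10_000           # limit once a clean track record is built
NEW_STREAMER_DELAY_SECONDS = 120      # broadcast delay that gives moderators time to act
CLEAN_STREAMS_FOR_TRUST = 25          # streams without a violation before limits rise


@dataclass
class StreamerAccount:
    """Hypothetical view of a streamer's standing on the platform."""
    identity_verified: bool    # has the user provided verifiable personal information?
    clean_stream_count: int    # past streams with no community-guideline strikes
    active_strikes: int        # unresolved violations of serious policies


def livestream_policy(account: StreamerAccount) -> dict:
    """Return the gates applied to this account's next livestream."""
    # Recommendation 1: only verified users may go live at all.
    if not account.identity_verified:
        return {"may_stream": False, "reason": "identity verification required"}

    # Streamers with unresolved serious violations are blocked outright.
    if account.active_strikes > 0:
        return {"may_stream": False, "reason": "unresolved policy violations"}

    # Recommendation 2: award access gradually, based on a clean track record.
    trusted = account.clean_stream_count >= CLEAN_STREAMS_FOR_TRUST
    viewer_cap = TRUSTED_VIEWER_CAP if trusted else NEW_STREAMER_VIEWER_CAP

    # Recommendation 5: delay the broadcast for accounts that have not yet earned
    # trust, so moderators and automated tools can review the feed before viewers see it.
    delay_seconds = 0 if trusted else NEW_STREAMER_DELAY_SECONDS

    return {
        "may_stream": True,
        "viewer_cap": viewer_cap,
        "broadcast_delay_seconds": delay_seconds,
    }
```

In this sketch the delay is keyed to whether a streamer has built a clean track record; a platform could instead key it to verification status alone, or to signals such as the referring site, per the third recommendation above.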

Extremists livestream hate crimes because it nets them notoriety, control over their narratives, and money. Policies moderating livestreaming must get at these motivations instead of implementing superficial punishments that merely place a bandage over the problem. Hate and harassment are on the rise in America, and companies must be more vigilant in making their platforms safer for everyone.