May 13, 2019
By Ahmad Sultan | Associate Director for Research, Advocacy and Technology Policy, ADL's Center for Technology and Society
Livestreaming, the act of broadcasting an event from your device to a social media platform in real time, has been used for activism across the world, giving a platform to marginalized people and capturing social inequities and government injustices like the killing of Philando Castile. It has played a role in bringing communities together through the livestreaming of shared interests like video games, political discourse, or the merging of both as expertly illustrated by Alexandria Ocasio-Cortez. But it has also been used to broadcast content that can cause deep emotional pain and spread terror, from livestreams of self-harm and suicide to the broadcasting of the murder of 50 Muslim worshippers at two mosques in Christchurch, New Zealand.
And last month, on the last day of Passover, a 19-year-old opened fire at a synagogue in Poway, California, killing one woman and injuring three other people. Before entering the synagogue, the shooter allegedly shared an anti-Semitic manifesto on an online message board, claiming to have previously set fire to a nearby mosque and writing that he would use Facebook to livestream his attack. Ultimately, the livestream never materialized, reportedly because the attacker couldn't figure out how to use the feature in time.
So how do we benefit from the positive uses of these new technologies while simultaneously reducing their potential for abuse? Are we expected to simply cross our fingers and hope that the next would-be terrorist livestreamer can't figure out the technology?
One answer is to incorporate anti-abuse and anti-hate concepts into the earliest stages of the design process, rather than treat these issues as an ancillary task after the fact. Today, the most popular method of developing tech tools is a Software Prototyping approach: an industry-wide standard that prompts companies to quickly release a product or feature and iterate on it over time. This approach discounts the impact of unintended design consequences. Yet the serious consideration of design vulnerabilities from the start of product development is not a new concept: Privacy-by-Design is a popular framework in which data privacy considerations are built into the design, operation, and management of a given system, business process, or specification. Security-by-Design is a similar approach in which software is designed from the ground up to be secure. It's high time that companies employed an adapted approach to product design in the fight against online hate. For now, let's call it Anti-Hate-by-Design.
As macabre as it may be, it's important to understand why livestreaming on social media is an appealing tool for individuals looking to broadcast their crimes, even when it clearly incriminates them. Historically, crimes have predominantly been secretive affairs, committed away from the public eye by offenders seeking to hide their identity. That left traditional media to broadcast information about such events on their own terms. Livestreaming has disrupted this equation by allowing offenders to take control of the criminal narrative. Without the ability to livestream, a person would have to take the additional steps of recording and then uploading video of a crime they committed, steps that might ultimately never be taken.
Raymond Surette, professor of criminal justice at the University of Central Florida, refers to this phenomenon as performance crime. In his paper "Performance Crime and Justice," Surette writes that the broadcasting of terrorist activity allows for the global recruitment and dissemination of propaganda that can be tailored for near and distant audiences, victims, enemies, and supporters. And according to media psychologist Pamela Rutledge, livestreaming on social media is attractive to those looking to commit anti-social acts because it provides a larger audience that fills a personal need for self-aggrandizement.
Stated simply, livestreaming in its current form allows terrorists to recruit and to shock, all while providing the offender with a sense of self-importance. Recognizing these dynamics, an Anti-Hate-by-Design approach could have addressed concerns surrounding the dual-use nature of livestreaming from the outset. While it's encouraging that social media companies continue to invest in improving AI-powered content moderation systems, AI is not a silver bullet for the livestreaming moderation dilemma. We were recently reminded of this when Facebook's automatic detection systems failed during the Christchurch livestream.
What can be done about livestreaming to keep the good and curtail the evil? Below is a non-exhaustive list of solutions for livestreaming that respect the socio-politically transformative nature of the technology while also addressing its potential for abuse. Brief code sketches following the list illustrate how each mechanism might work in practice.
- Gradual Access: Platforms could provide gradually increasing access to streamers. For example, first-time streamers can livestream only to a limited number of friends. The audience limit opens up with each tier, based on usage frequency and adherence to community standards, culminating in public livestreams for verified streamers. This would limit the reach of first-time streamers engaging in performance crime.
- Prevent livestream links to and from problematic websites: The platform could instantly redirect a user away from a livestream if it detects that the user accessed the URL from a website like Gab or 8chan, platforms that consistently radicalize, recruit, and promote white supremacists (including those charged in the Christchurch and Pittsburgh massacres). Likewise, URL links to problematic websites posted on a social media platform could be removed. This way, large social media platforms like Facebook and Twitter cannot be used to amplify the spread of hate through smaller websites like Gab and 8chan.
- Require more user input to continue a livestream: A popup notification asking whether the user is still streaming could appear every few minutes, prompting the user to interact with their device. This matters because people engaging in performance crime would be forced to break off and handle their device mid-broadcast, making livestreaming a less attractive option.
- Delay for viewers: Livestreams, unless initiated by a verified user, could be delivered to audiences on a delay of a few minutes. This would provide a window for AI-powered content moderation systems to shut down a livestream before it reaches a broader audience.
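To make these ideas concrete, the TypeScript sketches below show one way each mechanism might be implemented. All function names, interfaces, thresholds, and tier sizes are illustrative assumptions, not any platform's actual API or policy. First, a minimal sketch of gradual access, mapping a streamer's track record to an audience cap:

```typescript
// Gradual access: map a streamer's history to a maximum audience size.
// The thresholds and caps below are hypothetical, chosen for illustration.

interface StreamerHistory {
  completedStreams: number;   // streams finished without incident
  communityStrikes: number;   // confirmed community-standards violations
  isVerified: boolean;        // identity verified by the platform
}

function audienceCap(h: StreamerHistory): number {
  if (h.communityStrikes > 0) return 0;               // violations suspend streaming
  if (h.isVerified) return Number.POSITIVE_INFINITY;  // public broadcasts
  if (h.completedStreams >= 50) return 10_000;        // long, clean track record
  if (h.completedStreams >= 10) return 500;           // some clean track record
  return 25;                                          // first-timers: friends only
}

// A first-time streamer is capped at a small, friends-only audience.
console.log(audienceCap({ completedStreams: 0, communityStrikes: 0, isVerified: false })); // 25
```

The key design choice is that reach is earned: the cap can only grow through demonstrated adherence to community standards, and any confirmed violation suspends access.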
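Next, a sketch of screening inbound livestream traffic by referrer. The blocklist contents and the shouldRedirectViewer helper are assumptions; note too that the HTTP Referer header is easily stripped, so this check is a speed bump rather than a guarantee:

```typescript
// Redirect viewers who arrive at a livestream from a blocklisted website,
// based on the HTTP Referer header the viewer's browser sends.

const BLOCKED_REFERRER_HOSTS = new Set(["gab.com", "8ch.net"]);

function shouldRedirectViewer(refererHeader: string | undefined): boolean {
  if (!refererHeader) return false; // no referrer information available
  try {
    const host = new URL(refererHeader).hostname.replace(/^www\./, "");
    return BLOCKED_REFERRER_HOSTS.has(host);
  } catch {
    return false; // malformed header: treat as unknown rather than block
  }
}

console.log(shouldRedirectViewer("https://8ch.net/pol/some-thread")); // true
console.log(shouldRedirectViewer("https://example.com/feed"));        // false
```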
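Third, a sketch of the periodic "are you still streaming?" check. The LiveStream interface, the five-minute interval, and the thirty-second grace period are all assumptions:

```typescript
// Periodically prompt the streamer to confirm they are attending the device;
// end the broadcast if a prompt goes unacknowledged.

const PROMPT_INTERVAL_MS = 5 * 60 * 1000; // ask every five minutes
const GRACE_PERIOD_MS = 30 * 1000;        // time allowed to respond

interface LiveStream {
  showPrompt(message: string): void; // render an on-screen confirmation dialog
  end(reason: string): void;         // stop the broadcast
}

// lastAckAt returns the timestamp of the streamer's most recent confirmation.
function startKeepAliveLoop(stream: LiveStream, lastAckAt: () => number): void {
  setInterval(() => {
    const promptedAt = Date.now();
    stream.showPrompt("Are you still streaming? Tap to continue.");
    setTimeout(() => {
      // No acknowledgment since the prompt appeared: end the broadcast.
      if (lastAckAt() < promptedAt) {
        stream.end("no keep-alive response");
      }
    }, GRACE_PERIOD_MS);
  }, PROMPT_INTERVAL_MS);
}
```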
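Finally, a sketch of the broadcast delay: video segments from unverified streamers sit in a buffer for a review window before release, so a moderation system can halt the stream before anything reaches the wider audience. The segment format, the two-minute window, and the halt hook are assumptions:

```typescript
// Hold each video segment for a review window before releasing it to viewers.

const DELAY_MS = 2 * 60 * 1000; // two-minute review window for unverified users

interface Segment {
  data: Uint8Array;   // encoded video chunk
  capturedAt: number; // timestamp when the chunk was recorded
}

class DelayedBroadcast {
  private queue: Segment[] = [];
  private halted = false;

  ingest(segment: Segment): void {
    if (!this.halted) this.queue.push(segment);
  }

  // Called by the moderation pipeline when it flags the stream.
  halt(): void {
    this.halted = true;
    this.queue = []; // drop everything still inside the review window
  }

  // Release only segments whose review window has fully elapsed.
  releasable(now: number): Segment[] {
    if (this.halted) return [];
    const ready = this.queue.filter(s => now - s.capturedAt >= DELAY_MS);
    this.queue = this.queue.filter(s => now - s.capturedAt < DELAY_MS);
    return ready;
  }
}
```

The delay trades immediacy for a window in which automated or human review can act, which is why the bullet above exempts verified users.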
The above are just some solutions that could be developed through an Anti-Hate-by-Design framework. At the heart of the issue is the current design approach of Software Prototyping. This release-now, fix-later attitude results in ill-considered and imprudent design features that can have serious consequences for society. While social media companies can fix a product over time through software updates, they can't heal the emotional pain their products sometimes inflict on users. Companies need to conduct a thoughtful design process that puts their users first and incorporates society's concerns before, and not after, tragedy strikes.