New report card reviewing nine platforms gives most of them poor marks; Twitter and YouTube earn a B-minus
New York, NY, July 30, 2021 … Antisemitic content continues to be a serious problem across social media, and major platforms are failing to adequately protect targets. These platforms are not managing antisemitic content effectively and too often fail to respond when hateful content is flagged, according to a new report from ADL (the Anti-Defamation League).
ADL’s Center on Technology and Society surveyed nine social media platforms and graded each on how it addressed antisemitic content flagged to its moderators. ADL’s investigators found examples of anti-Jewish content on all nine platforms (Discord, Facebook, Instagram, Reddit, Roblox, TikTok, Twitch, Twitter and YouTube) and used ordinary accounts to report that content under each platform’s hate policies, to see how the platforms would enforce those policies for ordinary users. When no action was taken on these reports, ADL escalated them through the “trusted flagger” programs of the platforms that have them, which purport to provide faster paths for reporting hateful content. Each platform was then graded on its responsiveness and its transparency in responding.
“Online hate has affected more and more people in recent years, especially those in marginalized communities, despite platforms’ claims to be improving policies, enforcement and transparency,” ADL CEO Jonathan A. Greenblatt said. “Our new research evaluated nine platforms on their ability to mitigate antisemitic content and we were frustrated but unsurprised to see mediocre grades across the board.”
ADL’s “Online Antisemitism Report Card” found:
- Twitter and YouTube earned the highest overall grades at a middling B-minus. Twitter received higher marks for innovative product changes and a high data accessibility score. YouTube received similar marks for taking action on reports from an “ordinary” user and making product-level efforts to address antisemitism.
- Reddit and Twitch each received a C. Both platforms have strong policies around hate that include explicit protections for protected groups, but Reddit did not take action on any of the content ADL reported and Twitch does not have a formal trusted flagger program.
- Facebook/Instagram received a C-minus. Facebook’s data accessibility is opaque, and it took no action on most content reported through both official and unofficial channels.
- TikTok received a C-minus. While the video-sharing platform has made innovative product changes, including enabling users to report up to 100 comments at once and block accounts in bulk, researchers cannot access the platform’s data, and it did not take action on content reported through ordinary users’ accounts.
- Discord received a C-minus for its failure to take action on content reported through ordinary users’ accounts and for its limited data accessibility. Data accessibility, one of the metrics evaluated in the report, is essential for independent reviewers to measure the prevalence of hate on any given platform.
- Roblox received the worst overall grade, a D-minus, falling short in most categories. Content reported to the platform was neither responded to nor acted on. For example, the platform took no action on an audio file making light of the Holocaust, titled “6 gorillion.” The file is still available for anyone to use as a sound effect.
“These companies keep corrosive content on their platforms because it’s good for their bottom line, even if it contributes to antisemitism, disinformation, hate, racism, and harassment,” Greenblatt said. “It’s past time for tech companies to step up and invest more of their millions in profit to protect the vulnerable communities harmed on their platforms. Platforms have to start putting people over profit.”
This work underscores platforms’ unwillingness to enforce their policies at scale, a problem addressed in ADL’s REPAIR Plan. The report included a series of recommendations for how platforms can improve their grades, such as:
- Giving third-party researchers greater access to data so they can evaluate how much hate is online, who is targeted and whether platform enforcement works;
- Enforcing policies on antisemitism and hate consistently and at scale, with more human moderators and expanded development of automated technologies;
- Providing expert review of content moderation training manuals; and
- Offering greater transparency at the user level, giving users insight into how content moderation decisions are made.