New York, NY, September 30, 2024 … A new scorecard released today by ADL (Anti-Defamation League) found that five major social media platforms – Facebook, Instagram, TikTok, YouTube and X – routinely fail to act on antisemitic hate reported through regular channels available to users.
The ADL Center for Technology and Society (CTS) evaluated both policy and enforcement around antisemitism and issued grades to each of the five platforms based on a fixed set of criteria.
X (formerly Twitter) scored the lowest with an “F”: most of the actions the platform did take consisted of limiting the problematic content’s visibility. While ADL credited X for taking some action, X, unlike the other platforms, did not remove the hateful content. Facebook and TikTok each received a “C”; YouTube and Instagram each earned a “C-minus.”
“Social media platforms are still falling far too short when it comes to moderating antisemitic and anti-Israel content,” said Jonathan A. Greenblatt, ADL CEO and National Director. “In the aftermath of the unprecedented Hamas attack on Israel on October 7, Jewish users are encountering more antisemitic slurs and harassment on social media than ever before. It’s not hard to detect this hate, but it takes leadership to consistently enforce the rules.”
ADL researchers tested how well the five platforms enforced their policies against hateful content in two specific areas: antisemitic conspiracy theories, and the term “Zionist” used as a slur. The scorecard found that while all five platforms have appropriate policies in place to respond to such hateful rhetoric (except for X, which does not have a policy specifically prohibiting misinformation), they routinely fell short on enforcement for average users.
The scorecard also found that most platforms only took action when ADL escalated the reports through direct channels unavailable to regular users. And even then, platforms only acted on some of the hateful content.
In the year since Oct. 7, ADL has documented a dramatic increase in antisemitic hate online. ADL’s annual report on online hate and harassment found that 47 percent of Jews saw antisemitic content or conspiracy theories related to the Israel-Hamas war, compared to 29 percent of Americans overall. The hateful language has shifted in many cases, however, and new antisemitic conspiracy theories have taken shape alongside the resurfacing of old ones.
“We have published several studies evaluating how platforms respond to antisemitism when reported by average users. Disturbingly, this scorecard had the worst response rate for average user reporting we’ve ever seen,” said Daniel Kelley, Interim Director of the ADL Center for Technology and Society. “Though we are encouraged by the actions platforms took when we reported through our ‘trusted flagger’ channels, we urge tech companies to conduct comprehensive audits of how their platforms moderate antisemitic content to understand why this deeply concerning gap exists between platform policy and user reporting.”
Methodology
To test each platform’s enforcement of its antisemitism policies, ADL first reported individual pieces of content using tools available to a regular user. After a week, researchers checked to see whether any action had been taken. For content where no action or partial action (such as limiting the content’s visibility) was taken, ADL escalated the content to direct points of contact at the companies, most often via ‘trusted flagger’ partnerships.
ADL also evaluated each platform’s policies on antisemitic conspiracy theories and slurs. Nuances often arise when comparing policies to actual content, making case-by-case determinations necessary for a large share of potentially violative content. Each piece of content escalated through trusted flagger channels was therefore evaluated against the relevant platform’s policies by ADL’s internal experts.
Following the platforms’ reviews of the escalated content, ADL evaluated each platform on its rate of action against content reported as a regular user and its cumulative rate of action after trusted flagger escalations.
Recommendations for Platforms
- Improve user reporting. Although we recognize the challenges of moderating at scale, these platforms should be able to ingest reports, review content, and act on clearly hateful content in less than a week.
- Fix the gap between policy and enforcement. Companies should begin by conducting an internal audit to assess 1) whether staff are prepared to recognize and act on antisemitic content; 2) whether current policies address the multidimensional nature of antisemitism; 3) their enforcement capacity and effectiveness; and 4) support mechanisms for users who encounter or are targeted by antisemitic hate and harassment.
- Provide independent researchers with data access, including access to archives of moderated content.
- Review reported content in context. Multiple reports or other signals on a single piece of content may provide context for content that otherwise does not appear violative.
- Follow emerging trends and adversarial tactics as they shift over time. These shifts have only accelerated for antisemitic hate since Oct. 7.
ADL is the leading anti-hate organization in the world. Founded in 1913, its timeless mission is “to stop the defamation of the Jewish people and to secure justice and fair treatment to all.” Today, ADL continues to fight all forms of antisemitism and bias, using innovation and partnerships to drive impact. A global leader in combating antisemitism, countering extremism and battling bigotry wherever and whenever it happens, ADL works to protect democracy and ensure a just and inclusive society for all. More at www.adl.org.