November 20, 2020
The latest Facebook Transparency Report, which covers July 2020 to September 2020 (Q3 2020), raises alarming concerns about the prevalence of hate speech on the platform, leaving ADL with several questions that demand clear answers. ADL published a blog post about the opaque nature of Facebook’s transparency reports in January 2019, and many of the same problems identified there persist in Facebook’s most recent transparency efforts.
Yesterday’s report states that the prevalence of hate speech, as defined by Facebook, is 0.10 to 0.11 percent, meaning that “of every 10,000 content views, an estimate of 10 - 11 would contain hate speech.” Assuming a user views 5,000* pieces of content in a week, which might not be unusual for a regular daily user of the platform, that user could be exposed to around five pieces of hate speech a week, more than 20 a month and more than 250 a year. These metrics, high as they may seem, likely understate the problem for users who are more likely to become radicalized or to further spread hate, because they do not account for the fact that users are fed tailored content based on a multitude of variables, such as their online browsing activity. Algorithms can amplify exposure to hate speech for users who are already exposed to, or are actively seeking, hateful content.
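To make the arithmetic explicit, here is a minimal sketch of the per-user estimate above. It assumes the unverified figure of 5,000 content views per week (see the footnote at the end of this post) and the report’s 0.10 to 0.11 percent prevalence range:

```python
# Per-user exposure estimate. The 5,000 views/week figure is an
# assumption the authors have asked Facebook to verify (see footnote).
views_per_week = 5_000
prevalence_low, prevalence_high = 0.0010, 0.0011   # 0.10% - 0.11%

weekly = views_per_week * prevalence_low   # 5.0 pieces of hate speech/week
monthly = weekly * 52 / 12                 # ~21.7 pieces/month ("more than 20")
yearly = weekly * 52                       # 260 pieces/year ("more than 250")
```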
Even without taking into account users who are more likely to continue spreading hate, inciting bigotry and sowing division, these numbers are stark enough. After all, 1.82 billion people on average log on to Facebook daily, as reported in the company’s third-quarter earnings for 2020. Aggregate weekly exposure to hate speech could thus be as high as 10 billion views; monthly exposure, 40 billion; and yearly exposure, 520 billion (see the sketch below). These are startling numbers, and they demand an urgent response. Facebook needs to report users’ cumulative exposure to hate speech on its platform.
Source: Facebook Transparency Report, Q3 2020
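A rough sketch of how the aggregate figures above follow from the per-user estimate, using the 1.82 billion daily active users from Facebook’s Q3 2020 earnings and the same unverified 5,000-views-per-week assumption, taken at the upper end of the prevalence range:

```python
# Aggregate exposure across all daily active users, at the upper end of
# the prevalence range (0.11%). All figures are rough upper-bound estimates.
daily_active_users = 1.82e9
per_user_weekly = 5_000 * 0.0011   # 5.5 pieces/week at 0.11% prevalence

weekly_views = daily_active_users * per_user_weekly  # ~1.0e10 (~10 billion)
monthly_views = weekly_views * 4                     # ~4.0e10 (~40 billion, at 4 weeks/month)
yearly_views = weekly_views * 52                     # ~5.2e11 (~520 billion)
```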
Moreover, while the figures for “hate speech actioned” in Q2 (22.5 million) and Q3 (22.1 million) were much higher than the Q1 figure (9.5 million), ADL is perplexed by the sudden decrease in the amount of content appealed after it was actioned. In Q1, Facebook saw appeals on 1.2 million pieces of content, while in Q2 and Q3 the company saw appeals on only 29,900 and 41,000 pieces, respectively. Given that total actions increased from Q1 to Q2 and Q3, we would have expected appeals to scale up with actions. Facebook should also report on the nature of post-appeal actions: how many pieces of content the platform removed, labeled or de-amplified after appeal, as well as overall.
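To illustrate why these numbers look anomalous, here is a quick calculation of the appeal rate (appeals per piece of content actioned) using the report’s figures; the rate collapses by roughly two orders of magnitude between Q1 and Q2/Q3:

```python
# Appeal rate per quarter, from the transparency report figures above.
actioned = {"Q1": 9_500_000, "Q2": 22_500_000, "Q3": 22_100_000}
appealed = {"Q1": 1_200_000, "Q2": 29_900, "Q3": 41_000}

for quarter in actioned:
    rate = appealed[quarter] / actioned[quarter]
    print(f"{quarter}: {rate:.2%} of actioned content was appealed")
# Q1: 12.63%   Q2: 0.13%   Q3: 0.19%
```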
The report also shows a dramatic increase in content restored by Facebook without an appeal, from only 59,100 pieces of content in Q1 to 145,800 and 232,400 in Q2 and Q3, respectively (a nearly fourfold increase; see the sketch below). ADL wants more transparency on why Facebook restored content that it initially actioned even though that action had not been appealed. Furthermore, ADL wants to know the number of non-appealed actions taken by AI and by human content reviewers, respectively. We are mindful that the country was heading into an election during Q3, raising questions as to whether and how those dynamics played into Facebook’s decisions.
Source: Facebook Transparency Report, Q3 2020
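For scale, the growth in non-appealed restorations reported above works out to roughly a fourfold increase from Q1 to Q3:

```python
# Content restored by Facebook without an appeal, per the report.
restored = {"Q1": 59_100, "Q2": 145_800, "Q3": 232_400}
growth_q1_to_q3 = restored["Q3"] / restored["Q1"]   # ~3.9x increase
```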
Facebook states that it removed 22.1 million pieces of hate speech content in Q3, marginally fewer than the 22.5 million pieces actioned in Q2. Given the company’s recent slate of announcements on policy changes requiring new enforcement and increased content moderation resources, we would have expected the number of pieces of content actioned in Q3 to be higher than in Q2. Instead, the amount of hate speech actioned remained at Q2 levels, leading ADL to question the efficacy of Facebook’s processes for combating such content and the degree to which the new policies were actually enforced. While it is encouraging to see that Facebook actioned 94.7 percent of content before users reported it, the public still does not know how many pieces of content users flagged for Facebook, since the proactive rate only covers content Facebook ultimately actioned (see the sketch below). This is important because we have already seen obvious hate speech left on the platform even after being flagged.
Source: Facebook Transparency Report, Q3 2020
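One way to see why the 94.7 percent proactive rate does not answer the question: it is computed only over content Facebook actioned, so user reports the company declined to act on never enter the metric. A sketch, under that reading of the published figures:

```python
# The proactive rate covers only content Facebook actioned: of the 22.1M
# pieces actioned in Q3, ~5.3% were reported by users first. Flags on
# content Facebook declined to action are not disclosed in the report.
actioned_q3 = 22_100_000
proactive_rate = 0.947

user_reported_then_actioned = actioned_q3 * (1 - proactive_rate)  # ~1.17M
# total_user_flags = user_reported_then_actioned + undisclosed_declined_flags
```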
The company also reported a significant increase in hate speech actioned on Instagram, from 3.2 million pieces of content in Q2 to 6.5 million in Q3. Again, Facebook needs to explain the dramatic changes on its platform, share the total number of pieces of content flagged and provide a roadmap for how it intends to prevent hate speech from proliferating further. The increase in actions from Q2 to Q3 could be related to the new election-related policies and enforcement processes the company announced. ADL wants to know why the same increase did not appear on Facebook, even though the same new policies applied across both platforms (see the comparison sketched below).
Source: Instagram Transparency Report, Q3 2020
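Quantifying the discrepancy: under the figures above, Instagram’s actions roughly doubled quarter over quarter while Facebook’s dipped slightly, despite the same policies applying to both platforms:

```python
# Quarter-over-quarter change in hate speech actioned, per the reports.
instagram = {"Q2": 3_200_000, "Q3": 6_500_000}
facebook = {"Q2": 22_500_000, "Q3": 22_100_000}

ig_change = instagram["Q3"] / instagram["Q2"] - 1   # +103% (more than doubled)
fb_change = facebook["Q3"] / facebook["Q2"] - 1     # -1.8% (slight decrease)
```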
Finally, as ADL has long demanded, Facebook needs to report on the prevalence of hate speech targeting specific communities, the experiences that distinct groups are having on its platform and the numbers for the different kinds of hate being spread. For example, how many antisemitic, anti-Black, anti-Muslim and anti-LGBTQ+ pieces of content required actioning? Without specifying these numbers and the types of content attacking each vulnerable group, it is difficult for civil rights groups to propose solutions to these problems. Facebook can follow the example set by Reddit by conducting a study on hate and abuse on its platform and making its findings public. The company should also conduct another independent audit, specifically focused on its lack of transparency.
Greater transparency and active data collection around online hate speech should be accompanied by evidence-based policies and enforcement mechanisms. To show they are taking real steps to reduce hate speech, platforms must try to understand the scope of the problem by collecting the relevant data and using rigorous research methods. Failure to do so will result in vulnerable groups continuing to be at the mercy of toxic users on social media.
*The authors of this report reached out to Facebook to verify whether the assumption that a daily user comes across 5,000 pieces of content a week is accurate. Facebook has yet to respond.