
Unpacking Facebook's Civil Rights Audit's Second Report


July 02, 2019

While Facebook is working with leading civil society organizations, including ADL, to address civil rights concerns on its platform, the audit's latest report raises more questions than it answers about how Facebook intends to do so.

Many of the audit committee’s recommendations are not novel but, if implemented, would be valuable steps toward addressing hate, harassment and radicalization on Facebook — such as blocking tools for targets of large-scale coordinated harassment and allowing individuals to report improper content in bulk rather than one by one. Adjusting how content moderators review hate speech and allowing moderators to specialize in hate speech could also be meaningful changes. We also welcome increased scrutiny of election interference, which often aims to sow division and hate among groups and to intentionally suppress voting in a discriminatory, targeted manner.

But the fact that Facebook has not agreed to implement many of these recommendations immediately, and that its implementation plans remain uncertain, is deeply troubling. Facebook is a company with incredible talent and resources. In the past year, it has announced or launched efforts in advertising sales (such as augmented reality ads and video ad metrics), in an ambitious cryptocurrency, and in user-tracking research. As a result, we cannot help but question the lack of commitment and progress on hate and harassment. Is allowing complaints to be reported in bulk harder than starting a new cryptocurrency? Is creating a specialized group of moderators with expertise in hate speech harder than launching a user-tracking research app?

While the recommendations of the civil rights audit are a good start, they are not enough to protect the vulnerable populations who are attacked on Facebook’s platform. One particularly large gap in Facebook’s approach is that large private groups are barely moderated. An estimated half of Facebook’s users participate in groups. Users who actively post inciting, violent, harassing and threatening content in these groups are clearly violating Facebook’s community standards. Yet Facebook does not use AI or human moderators to review large groups designated “secret” or “private,” even when those groups consist of thousands of individuals and are larger than some colleges and universities. Policies like Facebook’s decision to leave large private groups alone directly contribute to the normalization of bigotry and discrimination, whether it involves Border Patrol officers, Alex Jones or police officers. They also make it extremely easy to orchestrate doxxing and other harassment campaigns. This problem is likely to grow more severe given Facebook’s plans to shift the site toward more private communications, and missing from that discussion is how content moderation will be handled. Facebook’s role in allowing such content to be distributed widely and without restraint cannot be dismissed as a byproduct; it is a key part of the problem.

Facebook must do more, and immediately. It must commit to a timeline for each of these actions. Moreover, Facebook must start “showing,” not just “telling,” the public what it is doing and how. Without clear, independently verified metrics of success in these and other areas, it is nearly impossible to determine how effective any of these claimed efforts are. Facebook should also build greater oversight, such as appointing a Civil Rights Officer and sharing data with external researchers who can verify Facebook’s efforts (while protecting user privacy). It should also provide meaningful, externally verified metrics in its transparency reports that allow users to understand the prevalence and targets of hate and discrimination on the platform. Facebook recently released its third report on enforcement of its community standards, yet the public is still no closer to answering fundamental questions about the nature of hate on Facebook. How much hate is there on Facebook? Which communities are targeted? Are any of the company’s efforts to address this hate actually working?

The civil rights audit is an important start. But it is just that — a start. It is now imperative that Facebook implement these recommendations and more, as well as share meaningful data and updates about the problems and the effectiveness of proposed solutions. Every day that this issue is not prioritized is another day that communities are targeted, people are harassed and hate multiplies.