What Will Finally Be the Tipping Point Against Facebook?

September 27, 2021

The recent blockbuster series of five articles from The Wall Street Journal exposing Facebook’s complicity in spreading toxic content underscores that it is far past time to recognize the harm caused by Facebook. Yet the company and other social media platforms continue to enjoy free rein despite playing outsized and destabilizing roles in determining what content is served up to billions of individuals worldwide.

The “Facebook Files” were based on internal documents, often obtained from company employees, which acknowledged the massive harm caused by the platform and its equally abject failure to fix the problems. We already knew about Facebook’s deceptive, misleading, and inconsistent approach to content moderation. However, the articles offered still more proof—and more insider confirmation—that the company was not only aware of the dangers posed to its users, but worked to bury the troubling findings of its own researchers.

As many have pointed out, freedom of speech is not the same as freedom of reach. We have learned over and over again that we cannot trust the tech sector to self-regulate.

Repeatedly, these companies—and Facebook in particular—have been shown to play key roles in undermining democracy, casting doubt on research-backed science, facilitating human rights abuses, exacerbating angry echo chambers that deepen political rifts, and fomenting mental health crises.

Facebook has consistently blocked or sought to undermine non-governmental attempts to independently assess the prevalence of disinformation and other harmful content on its sites. For example, Facebook dismissed widely accepted scholarly research linking social media use to increased rates of depression as “inconclusive.” In this regard, Facebook is not unlike the tobacco company Philip Morris, the asbestos manufacturer Manville (formerly Johns-Manville), or the pharmaceutical giant Purdue Pharma. Like Facebook, these companies possessed internal research showing that their products were harmful, but buried those findings, disputed the work and credibility of outside experts whose conclusions they knew to be accurate, and put out information designed to create confusion or specious debate. Ultimately, all these companies faced, or are facing, legal accountability. Big Tech should not be exempt from consequences.

One expert who is well acquainted with Facebook’s pattern of quashing research that tarnishes its image is ADL Belfer Fellow Laura Edelson, a computer scientist at New York University. On August 3, 2021, Facebook cut off her access to the platform because she was studying how misinformation spreads through political ads on it. Ms. Edelson will soon testify before the House Science, Space, and Technology Committee for the hearing, “The Disinformation Black Box: Researching Social Media Data.”

This hearing could not come at a more pivotal moment.

The debate over what content (and whose voices) should stay online and what should be removed, labeled, or de-amplified is not simple and will not be moved forward effectively by overly simplistic remedies that can have unintended consequences for free speech. Nor will Congress’s current cynical propensity to weaponize proposed reforms for partisan warfare advance any worthwhile goals. But careful legislation, regulation, third-party research, and market pressure are necessary if we are to meaningfully diminish the prevalence and impact of hate and disinformation online. 

One promising avenue for requiring more transparency from platforms is being sought in the California State legislature, where lawmakers will consider and should pass AB 587. This social media transparency bill would require regular reporting from major social media platforms on their terms of service, enforcement, and content moderation practices.

We need greater clarity into how Facebook and other companies administer their content moderation policies. Consider the programs exposed by the WSJ series, such as Facebook’s “XCheck,” a shadow policy apparatus that allowed a “VIP” tier of politicians, celebrities, athletes, and other high-profile people to get away with violating the platform’s rules. Laws like AB 587 would impose penalties on platforms that spread egregiously harmful content.

ADL experts have studied and documented the impact of online hate and abuse, particularly on Facebook. Last year, in a coalition, we launched the Stop Hate For Profit campaign, in which advertisers, celebrities, and sports figures joined together to demand that Facebook be held accountable for its role in amplifying hate and, in particular, racism and antisemitism. ADL has also reported how Facebook’s transparency reports obscure the full impact of hate speech and are shielded from independent third-party verification. We testified before Congress on the dangers posed by social media companies, and we crafted our REPAIR Plan to counteract them. Social media platforms act as megaphones for domestic terrorists looking to broadcast vile messages of white supremacy and antisemitism far and wide to radicalize people and normalize extremism.

Our 2021 Online Hate and Harassment survey found that 75% of a nationally representative sample of respondents who had experienced online harassment reported that at least some of that abuse occurred on Facebook. Both our Holocaust denial and antisemitism report cards disclosed the platform’s anemic, often negligent, response to user-submitted reports of hateful anti-Jewish posts that violated Facebook’s and other platforms’ terms of use and community standards. These dismaying results came after nearly a decade of ADL advocacy to treat Holocaust denial as hate speech. While Facebook finally reversed its longstanding countenance of Holocaust denial last year, we continued to easily find violative content on the site.

The avalanche of information we now have on the harm caused by Facebook content and its refusal or inability to effectively and equitably enforce its own content rules must lead to a serious reckoning. But if years of research and blockbuster findings from trusted institutions and hard-hitting journalism from respected media outlets have not yielded meaningful action, one reason may be found in the words of Adam Mosseri, the head of Instagram. Responding to criticism leveled at Facebook in the wake of the WSJ’s investigation, Mosseri said:

We know that more people die than would otherwise because of car accidents, but by and large, cars create way more value in the world than they destroy. And I think social media is similar.

In other words: back off because we still create more value than harm, even though we intentionally engineer our platforms and business models to seek out and amplify that harm. 

Mosseri’s analogy doesn’t work even on its own terms. The automotive industry is subject to significant regulation and independent review by outside watchdogs. This oversight has vastly increased the safety of cars and saved countless lives. (Automotive regulation was originally the subject of great resistance from the industry, which falsely claimed it would put them out of business.) As a result, cars, while still causing harm (and climate change), nevertheless have far more safeguards for drivers, passengers, and passers-by than Facebook has for harmful content. Emission requirements, anti-lock brakes, seat belt and crash testing, speed limits, driver licensing, and transparency reports are a few such safety measures. If Facebook wants to compare itself to the auto industry, all the more reason we should start looking at some analogous public and private guardrails.

Every day, Facebook makes consequential decisions about how to deliver and amplify the most engaging content to keep users coming back, as well as how to apply its content moderation rules. These decisions affect billions of people, from teenage girls suffering from eating disorders to Muslim minorities who are the victims of genocidal campaigns waged on the platform. But Facebook is hardly alone in failing to enforce its rules. Twitter grants political officials what it calls a “public interest” exception, allowing them to engage in “foreign policy saber-rattling”—inciting or implying violence that would get ordinary users banned, suspended, or at the least, have their content removed. In May 2020, YouTube failed to remove a conspiracy theory video that became a major source of COVID-19 misinformation until it had amassed over seven million views, an egregious instance of content seeded by fringe groups online like QAnon and then rapidly amplified on social media.

One particularly disturbing takeaway from the WSJ’s series is Facebook’s longstanding practice of acknowledging harm only when absolutely forced to do so by a veritable tsunami of evidence. When faced with government action, company officials including CEO Mark Zuckerberg finally apologize and commit, under the harsh light of scrutiny, to once again do more and do better. It’s abundantly clear by now, however, that this strategy is a deflection, one it appears Facebook is now abandoning in favor of insulating Mr. Zuckerberg from continuing scandals and going on the offensive to serve users positive stories about the platform.

No one said reining in social media was going to be easy. But the harm caused by social media is simply too great for us to fail to act. We shouldn’t need additional proof that platform self-regulation to combat disinformation and hate online doesn’t work, whether at Facebook or at many other companies. If the WSJ’s comprehensive series isn’t the tipping point needed to reform the tech sector, what will be?