
Facing the Music: Spotify's Rules Sound Cloudy


February 02, 2022

Spotify is in hot water for its support of controversial podcaster Joe Rogan. In January 2022, a group of 270 medical professionals wrote an open letter to the company accusing Rogan of spreading misinformation about the Covid-19 pandemic, calling him “a menace to public health.” Soon after, legendary rocker Neil Young demanded that the company either remove his music or stop airing the podcast because it promotes false information about Covid-19 vaccines. The company chose Rogan—whose podcast attracts an estimated 11 million listeners per episode and commands a minimum ad spend of $1 million—prompting a wave of backlash from other artists threatening to walk away from the platform. 

Responding to the public outcry, Spotify co-founder and CEO Daniel Ek posted a statement saying the platform would affix a warning label on any podcast episode that discusses Covid-19. Most notably, for the first time, Spotify published its Platform Rules. Ek wrote, “We have had rules in place for many years but admittedly, we haven’t been transparent around the policies that guide our content more broadly.”

In other words, Spotify’s 381 million monthly active users and its millions of artists and content creators followed policies most people had never seen before, enforced by a content moderation team of unknown size. That is, assuming the team comprises humans. Tech companies increasingly rely on automated systems to flag violative content, but reports show such moderation is minimally successful.

Spotify publishing its Platform Rules is a move towards greater transparency around how it moderates content. However, ADL’s analysis of its policies concludes that serious questions remain as to how the company will fight misinformation and hate speech and what resources it devotes to content moderation. Spotify, unlike Facebook and Twitter, does not post transparency reports, so we have no idea what kind of content is taken down, why, or how often. ADL is broadly critical of social media platforms’ transparency reporting, but it is surprising that a platform of Spotify’s size (it has more monthly active users than Twitter) offers nothing.

We analyzed Spotify’s policies. We found that:

1. Spotify’s policies on deceptive election-related content don’t specify whom they apply to, and it is unclear whether they meaningfully combat ongoing election misinformation.

Spotify prohibits deceptive content, stating:
 

[Screenshot: excerpt from Spotify’s Platform Rules on deceptive content]


Social media platforms played a substantial role in spreading misinformation about electoral fraud during the 2020 presidential election. A recent ADL report found that baseless allegations of electoral fraud persist more than a year later. Given the vague wording of what qualifies as violative, and the ambiguity about what other content could fall under this policy, it’s unclear whether Spotify’s rules prohibit such content. Although there won’t be another presidential election until 2024, conspiracy theories erode people’s trust in the democratic process even when they don’t stop anyone from voting.

2. Spotify is evasive about how it moderates thousands of pieces of audio content and about what kinds of misinformation, beyond health-related claims, its policies cover.
 

[Screenshots: excerpts from Spotify’s Platform Rules]

 

It is unlikely that Spotify hired enough human moderators to listen to thousands of episodes, usually at least 30 minutes long. More plausible is that the company used automation to scale its content moderation efforts without incurring the expense of a massive number of human moderators. ADL has noted the problems associated with over-relying on AI for content moderation. In particular, audio content moderation comes laden with challenges.

Tools for audio content moderation are less robust than those for text. Audio is typically transcribed into text by automated transcription software, and the transcript is then run through text-based automated tools. Anyone familiar with automated transcription knows that speech sometimes comes out as mangled, confusing text that is difficult for people to understand, let alone a machine. Machines also struggle to recognize the nuance, cultural differences, and emotion that can alter the meaning of speech. These challenges are exacerbated when processing song lyrics, which are structured differently from most speech, follow looser grammatical rules, and interact in complex ways with their musical accompaniment.
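To make the pipeline concrete, here is a minimal sketch of the transcribe-then-classify approach described above, assuming OpenAI’s open-source Whisper model for transcription and a publicly available Hugging Face hate-speech classifier. Spotify’s actual tooling is not public; the file name, model choices, chunk size, and score threshold are illustrative only.

```python
# Sketch of an automated audio-moderation pipeline:
# (1) transcribe speech to text, (2) classify the transcript.
# All model and file names here are illustrative assumptions.
import whisper
from transformers import pipeline

# Step 1: transcribe the audio episode into text.
asr_model = whisper.load_model("base")
transcript = asr_model.transcribe("episode.mp3")["text"]

# Step 2: run the transcript through a text-based classifier.
# This is a public hate-speech model, used purely as an example.
classifier = pipeline(
    "text-classification",
    model="facebook/roberta-hate-speech-dynabench-r4-target",
)

# Long transcripts exceed the classifier's input window, so score
# fixed-size chunks and flag the episode if any chunk looks violative.
chunk_size = 512  # characters; a crude proxy for the model's token limit
chunks = [transcript[i:i + chunk_size]
          for i in range(0, len(transcript), chunk_size)]

flagged = [
    (chunk, result)
    for chunk, result in zip(chunks, classifier(chunks, truncation=True))
    if result["label"] == "hate" and result["score"] > 0.9
]

for chunk, result in flagged:
    print(f"{result['score']:.2f}  {chunk[:80]}...")
```

Even this toy version illustrates the failure modes described above: any transcription error, lost sarcasm, or out-of-context chunk propagates directly into the classifier’s verdict.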

Additionally, do Spotify's new policies pertain only to medical and election-related misinformation? What about other kinds of misinformation?

For example, the controversial clinical psychologist Jordan Peterson recently appeared on “The Joe Rogan Experience” and was interviewed about, among other topics, climate change. Peterson asked, “There’s no such thing as climate, right?” Scientists heavily criticized the episode as anti-science and as promoting climate denialism. Should the episode fall under a broader Spotify misinformation policy? The company may claim its election-tampering clause addresses political misinformation, but none of its policies specifically addresses false scientific claims.

3. Spotify’s policies on violent extremism are not well developed and leave out informal extremist networks and individuals.
 

[Screenshot: excerpt from Spotify’s Platform Rules on violent extremist organizations]


Has Spotify consulted outside experts to define and identify violent extremist organizations? If so, who are these experts? Which extremist groups are covered? Furthermore, not all extremists belong to formal organizations; many operate within decentralized hate groups and networks.
 

[Screenshot: excerpt from Spotify’s Platform Rules on hate content and protected groups]


However, its policy on protected groups stands out for recognizing power inequalities between dominant and marginalized groups. ADL commends this policy.

4. Although most of Spotify’s consumers use the service on mobile devices, the platform does not enable users to file reports of violative content through their phones or tablets. Users cannot access its Platform Rules through their mobile devices, either.
 

[Screenshot: excerpt from Spotify’s Platform Rules]


As of January 2015, the majority of Spotify’s users listened to the platform on mobile devices. Assuming this is still the case, why won’t Spotify publish its policies in its apps as well as on its website? Users should be able to access the platform’s guidelines easily, without toggling between different applications.

Notably, CEO Daniel Ek wrote in his statement that Spotify’s Platform Rules will be posted permanently on the main Spotify website and will be translated into “various languages.”

Spotify is available in more than 180 countries around the world. Will Spotify post these policies in every language the platform supports? This raises additional questions about content moderation for non-English content. Does the platform have enough moderators who speak the languages its content covers? Do moderators live in the countries where those languages are spoken? For example, do Spanish-language content moderators live in a Latin American country, in Spain, or in both? Are they knowledgeable about the local dialect, context, and history that can inform the meaning and significance of content?

5. These policies tell us nothing about how users can flag abuse, how the rules are enforced, or what transparency reporting will look like.
 

[Screenshot: excerpt from Spotify’s Platform Rules on enforcement]


Clear, appropriate policies are not worth much without consistent and transparent enforcement. ADL would like Spotify to share its policies and practices for reviewing and determining violative content. How often can users violate the platform’s policies before their accounts warrant suspension? What kinds of violations result in suspension or account deletion? Is the violation per medium (say, a podcast episode) or per instance (repeating misinformation multiple times during an episode)? Will Spotify’s recommendation algorithm de-amplify violating accounts? How much content is removed proactively versus reactively?

Facebook and Twitter had policies that enabled political leaders and high-profile individuals to break their rules without penalty. Does Spotify afford notable figures similar protection?

ADL advocates for regular, mandated transparency reporting, data access for researchers, and independent auditing. Does Spotify plan to publish transparency reports, so the public better understands how the company takes action against violative content and its content moderation practices?

Transparency reports are supposed to track platforms’ enforcement of their content moderation rules. Ideally, such reporting compels platforms to be explicit about their policies on hate, harassment, and misinformation and apply their rules consistently.
 

[Screenshot: excerpt from Spotify’s Platform Rules referencing “additional requirements” for certain products and features]


Which products or features will subject users to “additional requirements”? Where can users find information about them?
 

[Screenshot: Spotify’s instructions for reporting violative content]


Why is the reporting form only on its desktop app? Again, most of Spotify’s consumers use the platform through mobile devices, so it would be logical to create reporting forms for phones and tablets—ideally through an abuse reporting portal, as ADL advocates.

Lastly, how effective is Spotify’s reporting process for users who have experienced hate and harassment or been exposed to misinformation? Who is affected in terms of demographics or identity—e.g., gender, race/ethnicity, or sexuality? What resources are there for targets? Beyond its Wellness Hub devoted to mental health, we looked for, but could not find, support for people affected by harassment or misinformation on the platform.

Conclusion

Spotify was founded in 2006. Nearly 16 years later, with its hand forced by public outrage and negative press, the company finally published its content guidelines. Ultimately, however, these guidelines raise more questions than they answer. Without more clearly defined and comprehensive misinformation and hate speech policies, robust transparency reporting, and coordinated assistance for targets, Spotify’s Platform Rules are not a reassuring sign that it is serving its consumers or content creators well. Spotify paid Rogan $100 million for the exclusive rights to his show. The platform is no longer merely a host for content but a publisher that acquires content it knows to be divisive or misleading. As ADL has argued repeatedly, platforms must take seriously their responsibility to stop profiting from hate, disinformation, and extremism.