Attention to online hate and harassment tends to focus on direct, explicit attacks against targets. But another aspect of online hate gets less consideration: how hate purveyors praise and amplify each other’s hateful content within and across platforms.
An investigation from the ADL Center for Technology & Society (CTS) has found that antisemitic comments spiked on some YouTube videos within 48 hours of their links being shared on a fringe platform. These spikes spawned what we are calling “hate parties”: comments and replies promoting antisemitic tropes, conspiracy theories, and hateful worldviews that, as ADL has documented, can inspire offline incidents. While online hate is often understood as motivated by a desire to antagonize or disrupt specific targets, hate parties focus less on short-term antagonism and more on spreading hate to the relatively large audiences of mainstream platforms and on connecting with others who share similar perspectives.
YouTube has a strict policy against hate speech—including in comments left on videos—that it enforces through content removal, demonetization, or suspension. But YouTube links shared to fringe social media platforms with lax or nonexistent content moderation drive traffic back to YouTube, where antisemitism that should violate the platform’s rules thrives in video comment sections. Online hate purveyors maintain presences in multiple social spaces and often fall into call-and-response patterns between professional influencers and rank-and-file followers. Sharing video links keeps the fringes tethered to YouTube and extends these hateful conversations across platforms.
CTS collected comment data from YouTube videos whose links were posted to different fringe platforms such as 4chan, Gab, Gettr, Telegram, and Truth Social (see Methods). We observed an average 17.98% increase in antisemitic comments within 48 hours of a video’s link being shared to a fringe platform. Such comments included Great Replacement theory talking points and implicit Holocaust denial. While it is possible the users leaving antisemitic comments are not the same as those who followed the link from a fringe site, the spike patterns show a significant correlation between link-sharing and increased antisemitism on YouTube.
CTS has previously written about the phenomenon of “stochastic harassment” and hate on Twitter: how influential accounts act as nodes for hate and harassment even when not explicitly directing their followers to take any action. All social media platforms are vulnerable to stochastic hate and harassment, and none currently have effective policies for proactively addressing how hate and harassment are networked. It is unclear why YouTube is not fully enforcing its hate speech policy against the comments we observed on the videos in this study, but this is one more example of why current content moderation approaches are inadequate to address how hate and harassment work on social media.
Combating hate parties and their associated harms will require YouTube to alter its approach to content moderation. To effectively enforce its rules against hate speech, YouTube would need to account for how its services are embedded in broader online hate conversations unfolding across different platforms. Such an approach would need to include monitoring where and when inbound traffic from fringe platforms leads to increased rates of hate speech on known problematic channels, and proactively quarantining or disabling comment sections on videos likely to attract hate parties.
How YouTube Comment Sections Turn Into Hate Parties
Far-right conspiracy theorist Mark Dice runs a popular YouTube channel with over 1.86 million subscribers. On January 2, 2023, Dice uploaded a video titled “Hollywood’s Problem with White People,” in which he criticized celebrities and social media users for discussing racial inequality in Hollywood. Toward the end of the video, Dice stated, “None of [these celebrities] would ever commit the cardinal sin of the entertainment industry by asking about another certain group or their influence; everyone knows that would kill their career overnight,” a clear antisemitic dog whistle invoking conspiracy theories about Jewish control of entertainment media. As of August 1, 2023, the video had received almost 200,000 views, more than 32,000 likes, and over 7,000 comments, 3% of which scored high for antisemitism.
On the day Dice uploaded the video, links to it were shared six times on Gab, nineteen times on Gettr, once on 4chan, and twice on Truth Social, among other platforms. Antisemitic comments flooded in on YouTube almost immediately, as the chart above shows.
The comments quickly snowballed into a hate-laced discussion against Jews, with numerous commenters piling on antisemitic innuendo. Several picked up on Dice’s dog whistle and used coded language for Jews, such as “juice,” “J’s,” and “j.w,” likely to avoid automated detection. Some called attention to the veiled reference, e.g., “We ALL know the certain group Mark is referring to here,” while others criticized Dice for not being explicit enough—“Mark you’re almost there just name (((them))),” and “(((HOLLYWOOD))) Name them Mark or dont even bother”—using the triple parentheses “echo” that antisemites use to indicate someone or something is Jewish and urging him to “name the Jew,” a common antisemitic slogan.
YouTube commenters’ demands for Dice to “name the Jew” mirrored conversations happening on fringe platforms where the video’s link had been shared. One Gab user posted a screenshot of an antisemitic thread in the video’s comment section as evidence of Dice’s impact on YouTube, while noting that he stopped short of naming the Jew.
The comment section on Dice’s YouTube channel was not meaningfully different from explicitly hateful threads on fringe platforms, such as 4chan’s Politically Incorrect, or /pol/, board. In a thread linking to the video, anonymous users debated whether Dice adequately addressed “the JQ” (the “Jewish question”) in his videos, something often treated as a litmus test for antisemites. Though they disagreed with his tactics, two users acknowledged that Dice was an important voice on YouTube and sympathetic to their worldview. “He’s just another person helping people along the path,” one wrote, while another added, “Honestly im very surprised mark hasn’t been deplatformed truly.” Statements like these illustrate how antisemitic users on fringe platforms view YouTube channels like Dice’s as places to gather for hate parties, reinforcing and uplifting their hateful worldviews.
Leather Apron Club is another YouTube channel where comment sections turn into antisemitic hate parties. The channel’s operator is not a well-known media influencer; CTS found he sometimes goes by “Alex” when interviewed by other far-right figures—including antisemitic YouTuber Keith Woods and Australian neo-Nazi Joel Davis—but could not verify whether that is his real name or find any additional biographical information about him. Even so, Leather Apron Club has been able to amass more than 60,000 subscribers and over 2.3 million total views since its launch in 2021. The content Leather Apron Club posts is more explicitly antisemitic than Dice’s, including a video uploaded on October 24, 2022, titled “High Jewish IQ Debunked - Joe Rogan Followup.” As of August 1, 2023, the video had more than 227,000 views, over 15,000 likes, and over 5,000 comments, 12% of which scored high for antisemitism.
In the video, the host reviewed academic publications about IQ scores and race, argued that Jews were overrepresented in media, and insinuated that Jews were an outsider group that had usurped political power in the United States and did not have the best interests of Americans in mind. YouTube’s hate speech policy prohibits “conspiracy theories saying individuals or groups are evil, corrupt, or malicious” based on race, ethnicity, or religion, which could be seen as applying to Leather Apron Club’s video. CTS reported the video and channel, but YouTube determined neither violated its hate speech policy.
We found links to the “High Jewish IQ Debunked” video shared extensively on three fringe platforms: Telegram (38 times), Gab (42 times), and 4chan (254 times). Antisemitic comments on the video spiked several times, and most of those spikes correlated with its link being shared to fringe platforms (see chart above).
One of the top comments, with over 2,000 likes, made it clear that the host’s antisemitic message was being received by his audience: “This is naming and noticing at an astonishing level. This guy needs a Nobel Prize for Noticing” (antisemites commonly use the hashtag “#TheNoticing” to signal to one another online that they are “noticing” alleged Jewish control). As with Dice’s video, many commenters used coded language and implicit references to communicate antisemitic sentiments, such as: “Synagogue of satan,” a reference to lines in Revelation 2:9 and 3:9 in the Christian Bible that antisemites often quote to one another; “SHUT IT DOWN the goyim know,” a common phrase that communicates beliefs about a secretive Jewish cabal controlling power from behind the scenes; and “[Jordan] Peterson is a j3w shill,” using the number “3” in place of “e” to avoid detection.
The antisemitic comments on Leather Apron Club’s video and those in fringe platform threads discussing his video were so closely intertwined that they appeared almost like parts of one cohesive, shared conversation: a hate party moving locations around social media. For example, YouTube commenters referenced retired professor and antisemite Kevin MacDonald and white supremacist Jared Taylor as writers who have made similar claims about IQ and Jews in their work. A Telegram user who linked to Leather Apron Club’s video wrote, “Jewish high IQ is an objective fact … That alone explains a huge (though not all) amount of Jewish power and influence … [All this] is supported by other figures such as Kevin MacDonald,” while on Gab a user replied to a post linking to the video, “Yeah, about that IQ. Even people like @JaredTaylor repeat that stuff but it’s complete bullshit.”
Time to End the Party
While YouTube has a mixed record on combating the spread of antisemitism on its platform broadly, this study shows that it is falling short on moderating comments. Many of the comments CTS analyzed appear to fall squarely under YouTube’s definition of hate speech: “Content promoting … hatred against individuals or groups based on [protected] attributes.” Interestingly, YouTube does have a policy that prohibits channels from including links in their video descriptions to sites with content that would violate YouTube’s own hate and harassment policies. But there is no policy regarding how to treat videos shared to those same sites, even when that leads to inbound traffic that amplifies hate on YouTube.
To moderate hate speech on its platform, YouTube should proactively monitor how link referrals affect inbound web traffic to channels that foster hate parties. Researchers have proposed an automated monitoring tool that predicts the likelihood of a video being targeted by a hate raid (i.e., a coordinated effort to attack and disrupt a target), which would allow YouTube to disable or quarantine comments proactively. A similar solution could proactively identify and quarantine videos likely to attract hate parties by analyzing past comments and inbound web traffic for known problematic channels like Mark Dice’s or Leather Apron Club’s.
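As a toy illustration of what such a predictor might look like (the features, training data, and model choice below are our assumptions, not the design of the tool cited above), a simple classifier could combine a channel’s historical rate of hateful comments with its recent volume of fringe-platform link shares:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per video, with two features:
# [share of the channel's past comments scored as hateful,
#  fringe-platform link shares in the 48 hours after upload].
# Labels: 1 = the comment section became a hate party, 0 = it did not.
X_train = np.array([
    [0.001,  0],
    [0.002,  1],
    [0.030, 12],
    [0.120, 40],
])
y_train = np.array([0, 0, 1, 1])

model = LogisticRegression().fit(X_train, y_train)

def quarantine_score(channel_hate_rate: float, fringe_shares_48h: int) -> float:
    """Estimated probability that a video's comment section needs quarantining."""
    return model.predict_proba([[channel_hate_rate, fringe_shares_48h]])[0, 1]

# A video on a channel with a 5% hateful-comment rate whose link was
# shared 20 times on fringe platforms would score high:
print(quarantine_score(0.05, 20))
```

In practice, a score like this would more plausibly gate human review or a temporary comment freeze than trigger any permanent action on its own.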
Commenter activity is often outside of a channel’s control—as when a channel is targeted by a hate raid—and channel operators should not bear the sole responsibility for combating hateful comments. Yet some channels, like the ones in our study, implicitly welcome hateful engagement and make space for hate in their comment sections. Leather Apron Club’s operator has explained in interviews how he strategically avoids posting content that will trigger a strike from YouTube while still getting his antisemitic messaging across to his audience. In other words, YouTube’s content moderation rules are often ineffective for channels like Leather Apron Club because the words they use matter less than the service they provide for spreading hate. Those channels’ goals are making space for hate and strengthening social ties among hate purveyors, not making explicit threats or directly calling for violence.
ADL has long advocated for larger structural changes to the incentives of the social media ecosystem, such as reforming the surveillance advertising business model, that would be required to truly protect users from online hate and harassment. But YouTube could act now to enforce its rules against these hate parties by enacting the solutions proposed above, all of which are technically feasible and would create a safer, less hate-filled experience for users of the platform.
Methods
CTS used the Social Media Analysis Toolkit (SMAT)—an open-source platform for researching harmful online trends—to query posts or messages containing YouTube URLs. We restricted our search to the period from December 24, 2022, to January 24, 2023, and collected data from the following fringe platforms: 4chan, 8kun, Bitchute, Gab, Gettr, LBRY, MeWe, Minds, Parler, Poal, Rumble, Telegram, Truth Social, Wimkin, and Win. We found over 385,000 messages with at least one YouTube link. We narrowed our focus to videos whose links were shared to at least three fringe platforms, resulting in 1,862 unique videos. Of those, 1,389 were still on YouTube after we completed the initial data analysis, and 1,205 of them had at least one comment (the other 184 may have disabled comments or simply did not attract engagement). We did not selectively sample videos or channels based on political affiliation or ideology.
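A minimal sketch of this collection step, assuming a REST-style SMAT endpoint (the URL path, parameter names, and response shape below are assumptions for illustration, not documented SMAT API details):

```python
import requests

SMAT_API = "https://api.smat-app.com/content"  # assumed endpoint path

FRINGE_SITES = [
    "4chan", "8kun", "bitchute", "gab", "gettr", "lbry", "mewe", "minds",
    "parler", "poal", "rumble", "telegram", "truth_social", "wimkin", "win",
]

def fetch_youtube_links(site, since, until, limit=10000):
    """Query one fringe platform for posts containing a YouTube URL."""
    params = {
        "term": "youtube.com",  # a full pass would also query youtu.be links
        "site": site,
        "since": since,  # e.g. "2022-12-24"
        "until": until,  # e.g. "2023-01-24"
        "limit": limit,
    }
    resp = requests.get(SMAT_API, params=params, timeout=60)
    resp.raise_for_status()
    return resp.json().get("content", [])  # assumed response key

posts = []
for site in FRINGE_SITES:
    posts.extend(fetch_youtube_links(site, "2022-12-24", "2023-01-24"))
print(f"{len(posts)} fringe posts containing YouTube links")
```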
We used the YouTube Data API to collect all of the comments for the sample of 1,205 videos. We then scored each comment with ADL’s Online Hate Index (OHI), a machine learning classifier trained to identify antisemitic content; the score indicates the likelihood that a comment is antisemitic, with a maximum of 1.0 indicating high confidence. We included all comments that received a score of 0.9 or higher in our analysis. By applying this high threshold, we likely underestimated the prevalence of antisemitism in the comments.
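The collection step maps onto the YouTube Data API’s commentThreads endpoint. The sketch below pages through every comment and reply on one video and applies the 0.9 cutoff; because the OHI model is not public, score_fn is a hypothetical stand-in for any classifier that returns a probability between 0 and 1:

```python
from googleapiclient.discovery import build

youtube = build("youtube", "v3", developerKey="YOUR_API_KEY")

def fetch_all_comments(video_id):
    """Page through all top-level comments and replies on a video."""
    comments, page_token = [], None
    while True:
        kwargs = {"part": "snippet,replies", "videoId": video_id, "maxResults": 100}
        if page_token:
            kwargs["pageToken"] = page_token
        resp = youtube.commentThreads().list(**kwargs).execute()
        for thread in resp.get("items", []):
            top = thread["snippet"]["topLevelComment"]["snippet"]
            comments.append(top["textDisplay"])
            for reply in thread.get("replies", {}).get("comments", []):
                comments.append(reply["snippet"]["textDisplay"])
        page_token = resp.get("nextPageToken")
        if not page_token:
            return comments

def flag_antisemitic(comments, score_fn, threshold=0.9):
    """Keep comments the classifier scores at or above the threshold.

    score_fn stands in for ADL's Online Hate Index, which is not public;
    any callable mapping text to a probability in [0, 1] fits here.
    """
    return [c for c in comments if score_fn(c) >= threshold]
```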
To determine whether link-sharing had any possible effect on a spike in antisemitic comments on a YouTube video, we considered the following criteria (see the sketch after this list):
- Comments had to be posted within 48 hours of the link being shared to a fringe platform.
- There must have been a minimum of two antisemitic comments posted during the 48-hour period.
- Spikes had to align with peaks detected using the signal module in the SciPy Python library, as detailed here.
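A sketch of how these criteria could be checked for a single video with scipy.signal.find_peaks (the hourly bin width and the exact peak-matching logic are our assumptions; the linked write-up has the study’s details):

```python
import numpy as np
from scipy import signal

def qualifying_shares(comment_times, share_times, window_hours=48, min_comments=2):
    """Return the link shares that the three criteria tie to a comment spike.

    comment_times: hours at which antisemitic comments were posted.
    share_times: hours at which the video's link hit a fringe platform.
    Both are floats measured from a common origin (e.g., video upload).
    """
    # Bin antisemitic comments into hourly counts and find local peaks.
    lo = min(min(comment_times), min(share_times))
    hi = max(max(comment_times), max(share_times)) + 1
    edges = np.arange(lo, hi + 1, 1.0)
    counts, _ = np.histogram(comment_times, bins=edges)
    peak_idx, _ = signal.find_peaks(counts)
    peak_times = edges[peak_idx]

    hits = []
    for s in share_times:
        in_window = [t for t in comment_times if 0 <= t - s <= window_hours]
        has_peak = any(0 <= p - s <= window_hours for p in peak_times)
        if len(in_window) >= min_comments and has_peak:
            hits.append(s)
    return hits

# Example: three comments shortly after a share at hour 0 qualify it.
print(qualifying_shares([1.0, 2.0, 2.5, 50.0], [0.0]))
```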
A total of 526 videos met these criteria. We then tallied the antisemitic comments left on each of those 526 videos between December 24, 2022, and January 24, 2023, identified the 22 videos with more than 50 antisemitic comments, and manually reviewed them. Using SMAT, we searched for every instance of a link to those 22 videos being shared to fringe platforms and analyzed the comments users made on posts that shared the link. This allowed us to compare the details of conversations on fringe platforms with comments on YouTube videos.
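The tally-and-filter step reduces to a simple aggregation. In this sketch the DataFrame and its column names are illustrative stand-ins for our comment records:

```python
import pandas as pd

# One row per antisemitic comment (OHI score >= 0.9); toy rows for shape.
comments_df = pd.DataFrame({
    "video_id": ["abc123", "abc123", "xyz789"],
    "published_at": ["2023-01-02", "2023-01-02", "2023-01-05"],
})
comments_df["published_at"] = pd.to_datetime(comments_df["published_at"])

in_window = comments_df[
    (comments_df["published_at"] >= "2022-12-24")
    & (comments_df["published_at"] <= "2023-01-24")
]
tally = in_window.groupby("video_id").size().rename("antisemitic_comments")
review_set = tally[tally > 50]  # in the study, 22 videos crossed this bar
print(review_set.sort_values(ascending=False))
```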