Six Pressing Questions We Must Ask About Generative AI

ChatGPT explaining January 6, 2021, in its typical style and in a "QAnon" style

From Gary Marcus' Substack article, "Inside the Heart of ChatGPT's Darkness: Nightmare on LLM Street." Original caption: Elicited by S. Oakley, Feb 2023; full prompt not shown; final paragraph is the QAnon-style (versus typical ChatGPT-style) description of the January 6 riots.

While the dangers of online hate have affected society since the early days of dial-up internet, the past 25 years have demonstrated that waiting until after harms occur to implement safeguards fails to protect users. The emergence of Generative Artificial Intelligence (GAI) lends unprecedented urgency to these concerns, as this technology is outpacing the regulations we have in place to keep the internet safe. GAI is gaining widespread adoption; thus, we must heed the lessons learned regarding internet governance–or lack thereof–and implement proactive measures to address its potential adverse effects.

GAI refers to a subset of artificial intelligence systems that can produce new or original content. GAI uses machine learning and neural networks to create audio, code, images, text, simulations, videos, and other information, resembling human-like creativity and decision-making. Although numerous promising prospects exist for using GAI in scientific, medical, artistic, and linguistic domains, it could intensify the spread and prevalence of online hate, harassment, and extremism.

ADL has been at the forefront of fighting hate, harassment, and antisemitism online for decades. We know that the rapid integration of new technologies without appropriate safeguards poses significant threats to our society, and the embrace of GAI is no exception. Civil society organizations like ADL are hardly alone in sharing their concerns about the impact of GAI. We conducted a survey that found 84% of Americans are worried GAI tools will increase the spread of false or misleading information, and 87% want to see action from Congress mandating transparency and data privacy for GAI tools. 

Given the potential for GAI to intensify existing issues of hate and harassment, ADL remains committed to using our expertise to ensure that GAI is developed and implemented responsibly. Here are six questions we urge policymakers and industry professionals to consider regarding GAI’s applications and safeguards.

1. How can we prevent GAI from being weaponized to sow disinformation and harassment?

The rise of GAI compounds the challenge social media users already face in identifying disinformation and separating it from fact. These tools make it easy and fast for bad actors to produce convincing fake news, create visually compelling deepfakes, and quickly spread hate and harassment. Perpetrators can distribute harmful content in a matter of seconds.

Deepfakes and other synthetic media have been a cause for concern for many years. For example, in 2017, machine learning researchers trained a neural network on hours of Barack Obama audio and built a tool that would mimic his voice and facial movements in video. At the time, critics worried that bad faith actors could use this technology for deception, further complicating efforts to combat mis- and disinformation. 

The rapid development and widespread accessibility of GAI and synthetic media tools could have significant implications. For example, it is not difficult to envision the potential for bad actors to leverage synthetic media to disseminate various forms of election-related disinformation using antisemitic tropes. Moreover, the algorithmic amplification of inflammatory content would further obscure the truth, making it harder for users to access accurate information. 

The potential for GAI-generated content to be used for harassment and the invasion of privacy has already been demonstrated, as seen in the 2023 case involving the creation of synthetic sexual videos of women Twitch streamers without their consent. AI-generated nonconsensual pornography may cause significant harm that cannot be undone by verification and correction after the fact. The findings from ADL’s ethnographic research and annual surveys on online hate and harassment demonstrate the damaging effects of online hate and harassment, including psychological and emotional distress for targets, reduced safety and security, and potential professional and economic consequences.

Given these repercussions, what product and policy-focused solutions can companies implement to mitigate potential risks associated with bad actors weaponizing generative AI? Social media platforms must find ways to create more robust content moderation systems that will withstand the potential deluge of content that perpetrators can generate using these tools. For example, social media platforms should explore effective ways to leverage GAI's capabilities–while acknowledging the limitations, pitfalls and dangers of GAI–to actively safeguard against misuse by bad actors, especially since there is a growing understanding that AI tools can detect GAI outputs. These companies should also implement strong policies related to harmful GAI; make abuse reporting mechanisms responsive, effective, and easily accessible; and consider transparency efforts like labeling non-violative but misleading GAI-generated content. 

At the same time, lawmakers and GAI companies may need to consider limiting GAI's use for creating specific kinds of synthetic media. These restrictions could focus on establishing clear intent standards, disclosure requirements, and venue limitations.

2. How can we safeguard against the possibility of GAI systems producing original, convincing, and potentially radicalizing hateful content?

While much has been made of the ability of bad actors to weaponize GAI to generate harmful content, it is also important to consider how these systems can produce original hateful or extremist content that can influence or radicalize users. GAI's capacity for producing personalized and targeted content makes the technology ideal for generating powerful content related to extremist ideologies, antisemitism, and hate.

For example, shortly after the ChatGPT prototype’s release, enterprising users figured out how to circumvent its safeguards and prompt it to answer as an alter ego named “DAN,” or “do anything now,” that would endorse racism and conspiracy theories. Redditors have continued experimenting with and refining DAN as OpenAI, the AI lab that developed ChatGPT, updates its tool in response to manipulation.

To illustrate this issue, ADL investigators tasked ChatGPT with writing an article based on a typical Holocaust denial claim that challenges the accuracy of the number of Jews killed during the Holocaust, citing the logistical challenges of murdering six million people.

Example of ADL investigators prompting ChatGPT to engage in Holocaust denial

When we tried this prompt on January 30, 2023, it drafted an article explaining how murdering six million Jews might have been difficult during World War II, essentially accepting the antisemitic conspiracy theory. When we tried again on February 8, 2023, ChatGPT refused to comply due to OpenAI’s policy against Holocaust denial. On our third attempt, on May 9, 2023, however, ChatGPT produced an article similar to the one it wrote in January.

One could easily see how prompts like this could result in GAI outputs that spew antisemitism or espouse novel conspiracy theories. The prevalence of online hate today is undeniable, and, as shown by ADL research, hate is amplified and normalized by algorithms designed to optimize for engagement. However, GAI-powered systems introduce a new level of concern, as these technologies can tailor messages to specific audiences with unprecedented precision. By taking into account individual preferences and biases, GAI-generated content can be even more effective in persuading and radicalizing individuals susceptible to hateful or extremist ideologies.

3. What data accessibility and transparency standards should be established in using GAI?

Transparency is crucial in the technology industry to allow users and other parties, such as researchers, academics, civil society organizations, and policymakers, to comprehensively understand a social media company's product, policy, data management, and other functions. ADL has long advocated for increasing tech company transparency because doing so is foundational to increasing industry accountability, building trust, and ensuring that technology is used legally and ethically. Without transparency, the public lacks adequate information about what user data is collected, how content recommendations and ranking decisions are made, and what policies are enforced. ADL has strongly supported transparency legislation in California and proposed legislation in New York requiring social media platforms to publicly disclose their corporate policies and key data metrics regarding online hate/racism, disinformation, extremism, harassment, and foreign political interference. 

Still, no federal law mandates transparency, and, at present, most voluntary transparency reporting is woefully inadequate. As GAI is incorporated into various platforms and products, it is essential for the tech industry to provide, and for lawmakers to require, transparency from companies using GAI. GAI transparency requires stronger mechanisms than those currently in place because a lack of transparency in GAI systems may allow bias, discrimination, and misinformation to go undetected, given how convincing GAI-generated content can be. Moreover, artificial intelligence systems are increasingly integrated into critical infrastructure, such as healthcare, government benefits, hiring, transportation, and criminal justice. If these systems are not transparent and trustworthy, severe consequences could disproportionately impact marginalized communities.

There are a variety of transparency mechanisms, including authentication, audits, impact assessments, data access, and disclosures. Lawmakers and industry partners should consider the types of information GAI developers should provide to different audiences. For instance, academic researchers and civil society organizations are interested in accessing the data used to train GAI models to understand what biases might exist and advocate for vulnerable targets. On the other hand, end users may want information on the decisions made by GAI systems to deliver specific content to individuals. And developers building new GAI tools based on existing datasets would benefit from “data nutrition labels,” a proposal from the AI ethics research community to assess the possible biases embedded in training data. 
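
To make the idea of "data nutrition labels" concrete, the following minimal Python sketch shows the kind of metadata such a label might surface for a training corpus. This is an illustrative assumption for this article: the DataNutritionLabel class, its field names, and the example values are hypothetical and do not reproduce any published schema.

# Illustrative sketch of a "data nutrition label" for a training corpus.
# All class and field names here are hypothetical, chosen for readability.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DataNutritionLabel:
    dataset_name: str
    source_description: str        # where the data came from
    collection_period: str         # when it was gathered
    languages: list                # language coverage
    known_gaps: list = field(default_factory=list)    # under-represented groups or topics
    known_risks: list = field(default_factory=list)   # e.g., unfiltered slurs, scraped personal data
    intended_uses: list = field(default_factory=list)
    prohibited_uses: list = field(default_factory=list)

label = DataNutritionLabel(
    dataset_name="example-web-corpus",
    source_description="Public web pages crawled from open forums",
    collection_period="2019-2022",
    languages=["en"],
    known_gaps=["low coverage of non-English communities"],
    known_risks=["contains unfiltered slurs and extremist content"],
    intended_uses=["research on hate and harassment detection"],
    prohibited_uses=["generating user-facing content without filtering"],
)

# Emit the label as JSON so developers and auditors can inspect it.
print(json.dumps(asdict(label), indent=2))

Even a simple, machine-readable summary like this would let developers building on an existing dataset see at a glance which biases and risks they may be inheriting.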

It is also important to enhance the public's awareness of the intricacies of GAI and the various factors to consider when engaging with GAI tools. To achieve this, policymakers should consider launching extensive educational campaigns and programs to facilitate a broader comprehension of GAI and its associated hazards. These measures will empower users to make well-informed choices concerning GAI applications.

4. What accountability measures should be in place when GAI causes or furthers harm?

GAI technology is rapidly advancing, and as it continues to blur the lines between content creators and users, it raises several questions about legal accountability that will require attention in the coming months and years. One significant concern is the possibility of GAI systems violating a user's privacy; in such cases, it remains unclear who should be held accountable for the violation. Defamation is another significant issue that lawmakers and courts need to address, given the potential for GAI-generated content to harm an individual's reputation.

GAI's ability to generate large volumes of content with speed and precision also creates an opportunity for bad actors to engage in unlawful cyberstalking, doxing, and harassment at an unprecedented scale, resulting in severe consequences for targets. Finally, GAI's ability to create synthetic media, like deepfakes, raises concerns about its use to generate and disseminate nonconsensual intimate imagery.

There is a need to establish a strong legal framework to address these issues. While Section 230 of the Communications Decency Act is an essential aspect of the legal framework for tech platforms, lawmakers and the courts must determine how it applies to GAI. ADL has long called for the interpretation of Section 230 to be updated to fit the reality of today’s internet. The law, as currently applied by the lower courts, provides sweeping immunity for tech platforms, even when product features and other conduct contribute to unlawful behavior—including violence. Regarding GAI, however, lawmakers have already stated that Section 230’s liability shield would not apply to generative AI systems like ChatGPT or GPT-4. Similar sentiments were communicated during oral arguments in the Supreme Court case challenging the interpretation and application of Section 230, Gonzalez v. Google. These arguments imply that companies using such AI models could be legally accountable for the harmful content their GAI-powered systems generate. Ultimately, there must be some legal accountability for unlawful harms caused by GAI.

5. How can we ensure all companies and organizations incorporate trust and safety practices when using GAI?

Generative AI is becoming increasingly prevalent across industries, including healthcare, education, fashion, architecture, entertainment, traditional news media, and local government. These strides may be advantageous from a productivity and project management standpoint; however, many organizations incorporating GAI into their daily operations are unfamiliar with content management practices, trust and safety standards, data privacy, and other safeguards commonly implemented by tech companies when using AI and machine learning systems. Because non-tech companies may have limited prior experience working with AI, they need structures like trust and safety teams or dedicated policy guidelines around important topics like applying an anti-hate or anti-bias standard when using or labeling datasets. With public-facing chatbots being implemented across industries, companies employing GAI must proactively mitigate the harm that may arise from the use and misuse of their new GAI-facilitated resources.

For example, within a few minutes, our investigators could use the Expedia app, with its new ChatGPT functionality, to create an itinerary of famous pogroms in Europe and nearby art supply stores where one could purchase spray paint, ostensibly to engage in vandalism. 

An itinerary ChatGPT drafted to tour famous pogroms in Europe

As GAI gains popularity across industries with little experience addressing bias, hate, and harassment, companies and developers will need to collaborate with policymakers, creators, regulators, civil society organizations, and affected communities to develop best practices for regulating GAI; implement anti-hate and harassment policies; support users reporting abuse; protect user privacy; and mandate transparency.

6. How can we use GAI to improve content moderation and decrease harassment?

While social media platforms have long employed AI for content moderation, the tech industry has noted that prior iterations of AI-focused systems have fallen short because of their scale limitations and challenges in understanding context. As we learn more about GAI’s ability to glean context more efficiently than its predecessors and detect GAI-generated content at scale, platforms should consider using it to improve content moderation systems.

For example, recent research has shown that ChatGPT can identify hateful, offensive, and toxic content in training data 80% as accurately as human annotators. In light of this, it is worth considering whether GAI tools can be leveraged to better prevent hate and harassment. 

Fine-tuned GAI content moderation tools may also be able to identify content violations that use coded language to skirt enforcement, potentially addressing issues of scale (by vastly decreasing moderation review time) and expanding access to moderation for non-English languages.
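
As a rough illustration of how a platform might use a GAI model to pre-screen posts for a human moderation queue, consider the Python sketch below. It is a minimal sketch under stated assumptions: call_model is a placeholder for whatever hosted or self-hosted model a platform actually uses, and the label set and prompt are hypothetical rather than a production moderation taxonomy.

# Minimal sketch of GAI-assisted pre-screening for a moderation queue.
# call_model is a placeholder; the labels and prompt are illustrative only.
import json

LABELS = ["hate", "harassment", "violent_extremism", "none"]

PROMPT_TEMPLATE = (
    "Classify the following post into exactly one of these categories: "
    "{labels}. Respond with JSON of the form "
    '{{"label": "<category>", "rationale": "<one sentence>"}}.\n\n'
    "Post: {post}"
)

def call_model(prompt: str) -> str:
    """Placeholder for a real model call; swap in an actual LLM API here."""
    # A canned answer keeps this sketch runnable without network access.
    return json.dumps({"label": "none", "rationale": "placeholder response"})

def prescreen(post: str) -> dict:
    """Ask the model for a label; anything unexpected goes to human review."""
    prompt = PROMPT_TEMPLATE.format(labels=", ".join(LABELS), post=post)
    try:
        result = json.loads(call_model(prompt))
    except json.JSONDecodeError:
        return {"label": "needs_human_review", "rationale": "unparseable model output"}
    if result.get("label") not in LABELS:
        return {"label": "needs_human_review", "rationale": "unrecognized label"}
    return result

print(prescreen("example user post"))

The design choice worth noting is that the model only triages: malformed or unexpected outputs fall back to human review, which reflects the caution discussed below that GAI should complement, rather than replace, existing moderation teams.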

Another exciting (albeit hypothetical) possibility is a GAI tool created and trained on a vast corpus of antisemitic images, tropes, and language, which human moderators could query via prompts to develop a deeper understanding of the antisemitic content they are regularly asked to review.

Nevertheless, social media companies must proceed with caution. As platforms increasingly rely on GAI to expedite content moderation, it is critical to recognize GAI's limitations. Even in the research cited above, ChatGPT had more difficulty identifying hate than other types of content. GAI will likely be a powerful addition to the content moderation toolkit, but platforms should approach it as a complement to current AI and human moderation teams, not a replacement. 

The promise and danger of GAI

Many exciting advancements can and will be made possible with the increased accessibility of GAI, but this technology may further accelerate hate, harassment, and extremism online. Therefore, as lawmakers and industry professionals prioritize fostering and supporting innovation, they must also address these challenges to prevent the misuse of GAI and mitigate its potential negative impact. We look forward to working with policymakers, industry leaders, and researchers as they establish standards and guidelines for the development and use of GAI.