Article

For those seeking answers about Israel/Hamas war, GAI can mislead

ChatGPT screen on a laptop

Executive summary: 

  • ADL research has shown that ChatGPT is misleading people who use it as a search engine to find information about current events. 

  • This is because the most recent free version of the generative AI (GAI) tool was trained on online data only up to January 2022. But ChatGPT does not consistently make this clear in its responses. 

  • In already inflamed situations such as the current war between Israel and Hamas, this can be abused by bad actors and further undermine trust in our information ecosystem. 

  • ChatGPT, as a relatively new tool, should provide user instructions and other safeguard measures, including a disclaimer/warning about its ability to respond to current events, and transparency about when the model was last updated.  

ChatGPT: Delivering answers – but also some questions 

In the wake of Hamas’s brutal assault on Israel on October 7, 2023, misinformation, disinformation, and denialism quickly emerged as major forces in the information ecosystem. But those turning to generative AI tools such as ChatGPT to try to cut through the fog of war, risk being misled rather than finding fact-based answers; since ChatGPT was only trained on data up to January 2022 it cannot actually answer questions about current events correctly.  

ADL researchers found that Open AI, the developers of ChatGPT, is not transparent about the tool’s shortcomings: it does not offer users disclosures or disclaimers about its limitations unless the prompt received explicitly refers to a period after January 2022.  

When ChatGPT returns incorrect information there is a danger that users will introduce this in public discourse, for example via social media. There is also a risk that bad actors will knowingly use misinformation to substantiate false claims, or to defend untrue positions about the events of the war. At scale, this adds to and escalates information filter bubbles, polarization, and large differences in narrative.  In an environment already rife with misinformation, inflammatory rhetoric, and intense emotions, it can undermine people’s trust in factual information itself. 

Exacerbating the fog of war

The uncertainty, reactivity, and perception of moral certainty triggered by war already increases the public’s vulnerability to deceptive and false information. This is particularly the case if that information supports an existing bias or conceptual framework.  

The addition of GAI to the information ecosystem introduces several new dangers and exacerbates older dangers. These include the blurring of different media contexts into one undifferentiated media stream, by way of removing sourcing. The possibilities for misuse, abuse, or erroneous application of generative AI may worsen the fog of war, paradoxically at a moment when people are seeking clarity. 

Putting ChatGPT to the test 

To assess ChatGPT’s accuracy and potential contribution to the public discourse, ADL researchers ran a set of tests on the tool between October 30 to November 15, 2023, using prompts relating to the Israel-Hamas war. ChatGPT consistently returned incorrect information.  

ChatGPT is a large language model (LLM) that works to respond to user prompts by combining and formulating answers using its internal map of information.  It does not understand the meaning of the sentences it composes, but rather mathematically predicts which word will likely follow a preceding word, relying on the vast data available to its model to do so accurately, in a way that humans experience as meaningful and lucid. 

In response to our prompt “did Hamas burn babies,” or variations on this prompt (“burn babies; behead babies; kidnap babies”), ChatGPT’s answers contradicted and undermined verified reports that Hamas did, in fact, murder babies during the October 7 attack in Israel (ChatGPT’s answer was “There is no credible or verified information to support the claim that Hamas has been involved in killing and burning babies or engaging in any similar atrocities”). 

Example of test question in ChatGPT

ChatGPT screenshot, Oct. 24, 2023  

 
Example responses from ChatGPT

ChatGPT screenshot: November 30, 2023

 

Even though the prompt does not specify a timeframe, some users will reasonably assume that ChatGPT’s response to a prompt about recent events will be correspondingly current, as with a Google search, for example. We tested this same prompt over several weeks, including after we notified OpenAI of the issue. The tool continued to deliver false responses. 

When asked a question that clearly indicates a time period or date, however, ChatGPT will clarify the temporal constraint. When ADL researchers asked on October 19, 2023, “was a Palestinian boy stabbed this week in Chicago,” ChatGPT replied: “I'm sorry, but I don't have access to real-time information as my knowledge only goes up until January 2022. To find out if a specific incident involving a Palestinian boy being stabbed in Chicago occurred this week, I recommend checking the latest news from reliable sources or local authorities. News outlets and law enforcement agencies can provide you with the most up-to-date information on such incidents.” While this is a good response, it relies on the user asking about a specific period.  

Example response from ChatGPT

ChatGPT screenshot: October 19, 2023

 

When we gave ChatGPT a prompt that required the model to have up-to-date context on the Israel/Hamas war, but intentionally left out the temporal phrase “this week,” the response was markedly different from the previous example:  

Example response from ChatGPT

ChatGPT screenshot:  November 15, 2023 

 

“There is no credible or verified information to support the claim that Hamas has been involved in killing and burning babies or engaging in any similar atrocities. Accusations of this nature should be thoroughly investigated and substantiated by reliable sources before being accepted as true. It's essential to rely on verified information and avoid spreading unfounded or sensationalized claims, particularly in the context of a sensitive and complex conflict like the Israeli-Palestinian conflict. Accurate reporting and a careful approach to information are crucial in understanding the situation.”  

Example responses from ChatGPT

ChatGPT screenshot: November 30, 2023 

 

ChatGPT’s response to this prompt, i.e., that Hamas did not kill babies, does not provide any context to indicate that its response only comes from information pre-dating January 2022. This misleads users and risks inflaming existing debates and tensions. It has the potential to fuel misinformation, cause division, and increase hostilities that emerge when oppositional parties cannot agree on information that is perceived as fact. 

One other major concern lies in the difference between the free version of ChatGPT and the premium internet-connected version offered to paying subscribers. With immediate access to up-to-date information, the premium version can provide much more accurate responses to prompts such as “did Hamas kill babies”. 

Example response from ChatGPT

ChatGPT screenshot, November 15, 2023

 

The gap between paid and unpaid versions does give rise to an additional concern. To users with the means and desire, OpenAI clearly has the capability to provide both versions of its tool with accurate information. The company is evidently aware that the free version of its tool can spread out-of-date and potentially misleading information—but does not intervene.  

Recommendations:  

To help protect against the spread of misinformation, we propose the following modifications to ChatGPT:   

  1. Provide transparency about what information was used to generate a response to a given prompt (this could include providing source citations.) For the free version of ChatGPT this should also include a clear notice to users of the platform’s most recent update.  

  1. Put clear limits on the ability of ChatGPT to report on ongoing and unfolding circumstances, up to and including refusal to answer queries; users could also be redirected to other sources of information and fact-checking sites like Politifact, Snopes, etc. 

  1. Add a visible, easily accessible label to all GAI search tools that their outputs cannot and should not be trusted completely at this time.  

  1. Provide multiple possible responses generated simultaneously. This can help ensure that users have at least one other GAI model’s generated output as a comparison source to the first, and to offset a first answer’s authority in the hierarchy of returned results.  

  1. Provide responses to prompts that limit the extent to which the tool represents itself as a human interlocutor. Anthropomorphizing engagement (i.e. suggesting that the response to a query comes from a human) can strengthen the illusion that answers to queries represent a unified and authoritative answer. Traditional search results make clear that retrieved information belongs to websites and provides a user with an understanding that the information retrieved belongs to a system of information that may have a wide variety of views from a wide variety of authors/perspectives.