Despite OpenAI’s recent update to its usage guidelines for ChatGPT, the AI-powered chatbot can still generate tailored political messages, a loophole that is raising eyebrows as the Republican primaries approach.
The Washington Post reports that OpenAI, the organization behind popular AI-powered chatbot ChatGPT, recently updated its guidelines to mitigate the risks of spreading tailored disinformation in political campaigns. However, more than two months after the update, the chatbot can still be used to generate tailored political messages, a glaring enforcement gap that is causing concern as the Republican primaries loom and global elections take center stage.
OpenAI logo seen on screen with ChatGPT website displayed on mobile in this illustration in Brussels, Belgium, on December 12, 2022. (Photo by Jonathan Raa/NurPhoto via Getty Images)
Cat Zakrzewski writes for the Washington Post: “When OpenAI last year unleashed ChatGPT, it banned political campaigns from using the artificial intelligence-powered chatbot — a recognition of the potential election risks posed by the tool.” The ban was an early safeguard against the weaponization of the chatbot in electoral politics.
The revised guidelines specifically prohibit the use of ChatGPT for creating materials aimed at specific voting demographics. “These rules ban political campaigns from using ChatGPT to create materials targeting specific voting demographics, a capability that could be abused to spread tailored disinformation at an unprecedented scale,” Zakrzewski adds. Despite these guidelines, the enforcement gap remains a significant concern.
The lack of effective enforcement of the updated guidelines has reignited debates about the ethical use of AI in political campaigns. Critics argue that the existing loopholes could allow for the subtle exploitation of the technology in ways that are not explicitly covered by OpenAI’s rules.
Breitbart News reported this month that a university study of ChatGPT demonstrates that the chatbot’s notorious leftist bias is as strong as ever:
Researchers asked ChatGPT to impersonate supporters of various political parties and positions, and then asked the modified chatbots a series of 60 ideological questions. The responses to these questions were then compared to ChatGPT’s default answers. This allowed the researchers to test whether ChatGPT’s default responses favor particular political stances. Conservatives have documented a clear bias in ChatGPT since the chatbot’s introduction to the general public.
To overcome difficulties caused by the inherent randomness of the “large language models” that power AI platforms such as ChatGPT, each question was asked 100 times and the different responses collected. These multiple responses were then put through a 1,000-repetition “bootstrap” (a method of re-sampling the original data) to further increase the reliability of the inferences drawn from the generated text.
Lead author Dr Fabio Motoki, of Norwich Business School at the University of East Anglia, said: “With the growing use by the public of AI-powered systems to find out facts and create new content, it is important that the output of popular platforms such as ChatGPT is as impartial as possible.”
“The presence of political bias can influence user views and has potential implications for political and electoral processes.”
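The re-sampling step described in that excerpt is a standard statistical technique, and a minimal sketch may make it concrete. The Python snippet below is an illustration under stated assumptions, not the researchers’ actual code: it assumes each question’s 100 responses have already been coded as numbers (here, 1 if the default answer agreed with a partisan-impersonating answer, 0 otherwise) and re-samples them 1,000 times with replacement to estimate how reliable the observed mean is. The variable names, coding scheme, and data are hypothetical.

```python
import random

def bootstrap_ci(scores, n_boot=1000, alpha=0.05, seed=42):
    """Bootstrap a confidence interval for the mean of `scores`.

    `scores` stands in for the ~100 numeric ratings collected for a
    single question; the 0/1 coding used below is a hypothetical
    illustration, not the study's actual data or code.
    """
    rng = random.Random(seed)
    n = len(scores)
    boot_means = []
    for _ in range(n_boot):
        # Re-sample the original responses with replacement
        resample = [scores[rng.randrange(n)] for _ in range(n)]
        boot_means.append(sum(resample) / n)
    boot_means.sort()
    lo = boot_means[int(n_boot * alpha / 2)]
    hi = boot_means[int(n_boot * (1 - alpha / 2)) - 1]
    return sum(scores) / n, (lo, hi)

# Hypothetical data: 1 if the default answer agreed with the
# "Democrat-impersonating" answer on a question, 0 otherwise,
# across 100 repeated askings of that question.
rng = random.Random(0)
agreement = [1 if rng.random() < 0.7 else 0 for _ in range(100)]

mean, (lo, hi) = bootstrap_ci(agreement)
print(f"mean agreement = {mean:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

The point of the repeated re-sampling is that a chatbot’s answers are random from one run to the next; averaging over many bootstrap samples shows whether an apparent lean toward one political stance is a stable pattern or just noise.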
Read more at the Washington Post here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan