
China, Iran-based threat actors have found new ways to use American AI models for covert influence: Report

The new threat report from OpenAI shows how adversaries are using AI models, such as ChatGPT, to influence global politics


Threat actors, some likely based in China and Iran, are formulating new ways to hijack and utilize American artificial intelligence (AI) models for malicious intent, including covert influence operations, according to a new report from OpenAI.

The February report includes two disruptions involving threat actors that appear to have originated from China. According to the report, these actors have used, or at least attempted to use, models built by OpenAI and Meta.

In one example, OpenAI banned a ChatGPT account that generated comments critical of Chinese dissident Cai Xia. The comments were posted on social media by accounts that claimed to be people based in India and the U.S. However, these posts did not appear to attract substantial online engagement.

That same actor also used the ChatGPT service to generate long-form Spanish news articles that "denigrated" the U.S. and were subsequently published by mainstream news outlets in Latin America. The bylines of these stories were attributed to an individual and, in some cases, a Chinese company.


Threat actors across the globe, including those based in China and Iran, are finding new ways to utilize American AI models for malicious intent. (Bill Hinton/PHILIP FONG/AFP/Maksim Konstantinov/SOPA Images/LightRocket via Getty Images)

During a recent press briefing that included Fox News Digital, Ben Nimmo, Principal Investigator on OpenAI’s Intelligence and Investigations team, said that a translation was listed as sponsored content on at least one occasion, suggesting that someone had paid for it.

OpenAI says this is the first instance in which a Chinese actor successfully planted long-form articles in mainstream media to target Latin American audiences with anti-U.S. narratives.

"Without a view of that use of AI, we would not have been able to make the connection between the tweets and the web articles," Nimmo said.

He added that threat actors sometimes give OpenAI a glimpse of what they’re doing in other parts of the internet because of how they use their models.

"This is a pretty troubling glimpse into the way one non-democratic actor tried to use democratic or U.S.-based AI for non-democratic purposes, according to the materials they were generating themselves," he continued.


The flag of China is flown behind a pair of surveillance cameras outside the Central Government Offices in Hong Kong, China, on Tuesday, July 7, 2020. Hong Kong leader Carrie Lam defended national security legislation imposed on the city by China last week, hours after her government asserted broad new police powers, including warrant-less searches, online surveillance and property seizures. (Roy Liu/Bloomberg via Getty Images)

The company also banned a ChatGPT account that generated tweets and articles that were then posted on third-party assets publicly linked to known Iranian influence operations (IOs).

These two operations have been reported as separate efforts.

"The discovery of a potential overlap between these operations - albeit small and isolated - raises a question about whether there is a nexus of cooperation amongst these Iranian IOs, where one operator may work on behalf of what appear to be distinct networks," the threat report states.

In another example, OpenAI banned a set of ChatGPT accounts that were using OpenAI models to translate and generate comments for a romance baiting network, also known as "pig butchering," across platforms like X, Facebook and Instagram. After OpenAI reported these findings, Meta indicated that the activity appeared to originate from a "newly stood up scam compound in Cambodia."


The OpenAI ChatGPT logo is seen on a mobile phone in this photo illustration on May 30, 2023 in Warsaw, Poland. (Photo by Jaap Arriens/NurPhoto via Getty Images)

Last year, OpenAI became the first AI research lab to publish reports on its efforts to prevent abuse by adversaries and other malicious actors, in support of the U.S., allied governments, industry partners and other stakeholders.

OpenAI says it has greatly expanded its investigative capabilities and understanding of new types of abuse since its first report was published and has disrupted a wide range of malicious uses.

The company believes that, among other disruption techniques, AI companies can glean substantial insights into threat actors when findings are shared with upstream providers, such as hosting and software companies, as well as downstream distribution platforms, such as social media companies and open-source researchers.

OpenAI stresses that its investigations also benefit greatly from the work shared by industry peers.

"We know that threat actors will keep testing our defenses. We’re determined to keep identifying, preventing, disrupting and exposing attempts to abuse our models for harmful ends," OpenAI stated in the report. 

Nikolas Lanum is a reporter for Fox News Digital.

Authored by Nikolas Lanum via Fox News, February 21, 2025