Researchers at security firm SentinelOne have uncovered a large-scale spam campaign that leveraged an OpenAI chatbot to generate unique messages, bypassing spam filters and inundating more than 80,000 websites over a four-month period. Once again, crooks are at the vanguard of exploiting AI for illicit gains.
Ars Technica reports that researchers at SentinelOne’s SentinelLabs have revealed that spammers exploited OpenAI’s chatbot to launch a massive spam campaign targeting over 80,000 websites. The findings, published in a blog post on Wednesday, shed light on how the same capabilities that make large language models (LLMs) valuable for legitimate purposes can also be harnessed for malicious activities with equal ease.
The spam campaign, orchestrated by a framework called AkiraBot, aimed to promote dubious search engine optimization (SEO) services to small and medium-sized websites. By leveraging OpenAI’s chat API tied to the gpt-4o-mini model, AkiraBot generated unique messages tailored to each targeted website, effectively circumventing spam detection filters that typically block identical content sent en masse.
To achieve this, AkiraBot instructed OpenAI’s chat API to act as a “helpful assistant that generates marketing messages.” It then supplied prompt templates whose variables were replaced with the target site’s name at runtime. As a result, each message body included the recipient website’s name and a concise description of its services, creating the illusion of a hand-crafted message.
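The templating pattern described above can be sketched in a few lines. The report does not publish AkiraBot’s actual code, so the function name, template wording, and variable names below are illustrative assumptions; the sketch only shows why per-site substitution yields a unique message body for every target, defeating filters that match identical content:

```python
# Hypothetical sketch of the per-site templating pattern SentinelLabs describes.
# A fixed system prompt assigns the "marketing assistant" role; a user-prompt
# template gets the target site's name swapped in at runtime, so every request
# (and thus every generated message) differs per recipient.

SYSTEM_PROMPT = "You are a helpful assistant that generates marketing messages."

# Illustrative template; {site_name} is filled in per target at runtime.
TEMPLATE = (
    "Write a short outreach message for the website {site_name}, "
    "briefly describing its services."
)

def build_chat_payload(site_name: str) -> dict:
    """Build a chat-completion request body for one target site."""
    return {
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": TEMPLATE.format(site_name=site_name)},
        ],
    }

# Two targets produce two distinct prompts, hence two distinct message bodies.
print(build_chat_payload("example-bakery.com")["messages"][1]["content"])
print(build_chat_payload("example-plumber.net")["messages"][1]["content"])
```

Because only the domain list stayed constant while every message body varied, content-based spam signatures had nothing stable to match on.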
SentinelLabs researchers Alex Delamotte and Jim Walter emphasized the emerging challenges AI poses for defending against spam attacks. They noted that the rotating set of domains used to promote the SEO offerings was the easiest indicator to block, since, unlike in previous campaigns, the spam message contents no longer followed a consistent template.
The scale of the campaign was revealed through log files left by AkiraBot on a server, which tracked success and failure rates. The data showed that unique messages were successfully delivered to more than 80,000 websites between September 2024 and January 2025. In contrast, messages targeting approximately 11,000 domains failed.
OpenAI acknowledged the researchers’ findings and reiterated that such use of its chatbots violates the company’s terms of service. It revoked the spammers’ account upon receiving the disclosure from SentinelLabs. However, the fact that the activity went unnoticed for four months highlights the reactive nature of enforcement rather than proactive measures to prevent abuse.
Read more at Ars Technica here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.