Pedophiles are using AI to generate “astoundingly realistic” child sexual abuse images that many people may find “indistinguishable” from real pictures, an online safety group has warned.
The Internet Watch Foundation (IWF) says it has already found very realistic AI-generated images depicting child sexual abuse, and that the technology could be used to generate “unprecedented quantities” of such content, according to a report by Sky News.
Moreover, the images are so realistic that it may become more difficult to determine when real children are in danger, warned the IWF, which finds and removes child abuse content on the internet.
The online sites the IWF investigated, some of which were reported by the public, reportedly featured images depicting children as young as three. The IWF said it even found an online “manual” to help perverts use AI to create more realistic child abuse images.
While it is illegal to generate these types of images in the UK, the online safety group says AI technology is advancing so rapidly and becoming so accessible that the law may soon struggle to keep up with the problem.
The UK’s National Crime Agency (NCA) said the risk is “increasing” and being taken “extremely seriously,” Sky News reported.
“There is a very real possibility that if the volume of AI-generated material increases, this could greatly impact on law enforcement resources, increasing the time it takes for us to identify real children in need of protection,” Chris Farrimond, the NCA’s director of threat leadership, said.
United Kingdom Prime Minister Rishi Sunak said the upcoming global AI summit will debate regulatory “guardrails” that could reduce future risks posed by AI.
A government spokesperson told Sky News the following:
AI-generated child sexual exploitation and abuse content is illegal, regardless of whether it depicts a real child or not, meaning tech companies will be required to proactively identify content and remove it under the Online Safety Bill, which is designed to keep pace with emerging technologies like AI.
“The Online Safety Bill will require companies to take proactive action in tackling all forms of online child sexual abuse including grooming, live-streaming, child sexual abuse material and prohibited images of children — or face huge fines,” the spokesperson added.
Breitbart News previously reported that AI-generated sexual content is a growing problem, as Facebook and TikTok allow sexual ads generated by AI to flood their platforms:
NBC News reports that social media platforms known for their stringent policies against sexualized content are now facing questions about their moderation systems. Facebook’s Instagram and TikTok, platforms that have historically banned ads for prostitution, pornography, and even educators discussing sexual health, are now home to a new breed of sexualized content — ads for AI-generated sexual images and “companionship.”
These ads, often explicit in nature, promise users “NSFW pics” and uncensored chats. They feature digitally created characters, often scantily clad and in provocative poses. What’s even more alarming is the use of popular children’s TV characters like SpongeBob SquarePants and Cookie Monster in some of these promotional materials.
Breitbart News will continue to report on the misapplication of cutting-edge AI technology.
You can follow Alana Mastrangelo on Facebook and X/Twitter at @ARmastrangelo, and on Instagram.