A recent report has uncovered a disturbing problem in the development of artificial intelligence image generators: the presence of explicit photos of children in their training datasets.
AP News reports that the Stanford Internet Observatory, in collaboration with the Canadian Centre for Child Protection and other anti-abuse charities, conducted a study that found more than 3,200 images of suspected child sexual abuse in the AI database LAION. LAION, an index of online images and captions, has been instrumental in training leading AI image generators such as Stable Diffusion.
Teenagers look at a phone (NICHOLAS KAMM/Getty)
This discovery has raised alarms across various sectors, including schools and law enforcement. The abusive images have enabled AI systems to produce explicit, realistic imagery of fake children and to transform social media photos of real teens into deepfake nudes. Researchers had previously believed that AI tools produced abusive imagery by combining adult pornography with benign photos of kids; the direct inclusion of explicit images of children in training datasets reveals a more direct and disturbing reality.
The issue is compounded by the competitive rush in the generative AI market, which has led to AI tools being released hastily without sufficient safety measures. Although LAION responded to the report by immediately and temporarily removing its datasets, concern remains about the lasting impact and widespread accessibility of these tools.
Stability AI, a notable user of LAION's dataset, has implemented stricter controls in newer versions of its Stable Diffusion models. However, older versions that lack these safeguards continue to circulate and are used to generate explicit content. The study emphasizes how difficult the problem is to rectify, given the open-source nature of many AI models and the ease with which they can be distributed.
Breitbart News reported in November on how AI-generated deepfake pornography of minors can impact their lives after a deepfake scandal rocked a New Jersey high school:
One concerned parent, Dorota Mani, expressed her fears for her daughter’s future, stating, “I am terrified by how this is going to surface and when. My daughter has a bright future and no one can guarantee this won’t impact her professionally, academically or socially.” Mani’s daughter, Francesca, was among those whose images were used to generate deepfake pornography.
The technology behind these images is alarmingly accessible, with numerous free AI-powered image generators available online. The sophistication of these tools means the deepfakes are increasingly realistic, making them difficult to distinguish from authentic images.
Read more at AP News here.
AP contributed to this report.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.