Dirty Business: Google Profits from Ads for ‘Nudify’ AI Apps that Produce Deepfake Porn

[Photo: young woman crying. Karolina Kaboompics/Pexels]

Despite recent policy updates, Google Search continues to display promoted results for “nudify” AI apps that generate nonconsensual deepfake porn, raising concerns about the tech giant’s ability to combat harmful AI-powered content.

404 Media reports that a recent investigation by Alexios Mantzarlis of the Faked Up newsletter revealed that searching Google for terms such as “undress apps” and “best deepfake nudes” yields promoted results for AI apps that produce nonconsensual nude images. The discovery comes on the heels of Google’s recent ad policy update, which explicitly prohibits this type of content, as well as an effort to derank search results for apps that generate nonconsensual material.

The promoted results highlight the ongoing struggle of major internet platforms to contain the proliferation of AI-powered apps that create deepfake porn, particularly targeting young women and celebrities. In this instance, Google Search not only directed users to these harmful apps but also profited from the apps’ paid placement against specific search terms.

In response to the findings, a Google spokesperson stated, “We reviewed the ad in question and permanently suspended the advertiser for violating our policies. Services that offer to create synthetic sexual or nude content are prohibited from advertising through any of our platforms or generating revenue through Google Ads.” The spokesperson confirmed that Google prohibits both sexually explicit content and content containing nonconsensual sexual themes in ads, and referenced a May ad-policy update that bans ads for these services even when the ads themselves are not explicit.

However, the spokesperson did not address why Google was selling promoted search results against queries like “undress app,” instead stating that its teams are actively investigating the issue and will permanently suspend advertisers who violate the policy, removing all of their ads from Google’s platforms.

One of the promoted search results, which Google removed after being contacted for comment, directed users to a website offering an “NSFW AI Image Generator” and linked to a Telegram bot providing the same service. Another promoted result led to an AI image generator that, when tested, produced deepfake sexual images from text prompts and listed the names of female celebrities among its “popular tags” for prompts. The same service also features an “image to image” model that generated an explicit AI image when given a photograph of a real person.

This news comes just days after Google announced an update to its Search ranking system intended to derank explicit fake content in results. In addition to the promoted results, many of the tested search queries also returned organic results leading to other nudify apps. Google stated that the update is part of an ongoing effort, with more changes planned in the coming months, and that the recently announced update focused on search queries containing people’s names, as those carry the highest potential for harm. The company added that if it receives a high volume of removal requests targeting a specific site under this policy, it will treat that as a signal to demote the site in search results.

Read more at 404 Media here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.

Authored by Lucas Nolan via Breitbart, August 7th, 2024