Google’s recently introduced AI search feature called “Search Generative Experience” (SGE) has been found to recommend malicious websites that redirect users to scams, fake giveaways, and unwanted browser extensions.
BleepingComputer reports that earlier this month, Google began rolling out its new AI-powered search feature, SGE, which provides quick summaries and site recommendations related to users’ search queries. However, the new system appears to have some significant flaws that cybersecurity experts are now bringing to light.
SEO consultant Lily Ray was among the first to notice that Google’s SGE was recommending spammy and malicious sites within its AI-generated responses. BleepingComputer’s follow-up investigation found that the suspicious sites share the same TLD (.online), near-identical HTML templates, and the same redirect practices, suggesting they are part of a coordinated SEO poisoning campaign.
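To give a sense of the kind of fingerprinting that report describes, the short sketch below groups a batch of suspect domains by TLD and a crude page-template hash. It is a minimal illustration, not BleepingComputer’s actual tooling, and the domain names and the template_fingerprint helper are placeholders invented for the example.

import hashlib
from collections import defaultdict

import requests

# Placeholder domains standing in for the .online sites described in the report.
SUSPECT_DOMAINS = [
    "example-spam-one.online",
    "example-spam-two.online",
    "example-spam-three.online",
]

def template_fingerprint(html: str) -> str:
    # Strip the visible text and hash only the markup "skeleton" so that
    # near-identical templates collapse to the same fingerprint.
    skeleton = "".join(ch for ch in html if ch in "<>/=\"' ")
    return hashlib.sha256(skeleton.encode()).hexdigest()[:12]

groups = defaultdict(list)
for domain in SUSPECT_DOMAINS:
    tld = domain.rsplit(".", 1)[-1]
    try:
        resp = requests.get(f"http://{domain}", timeout=10, allow_redirects=False)
        fingerprint = template_fingerprint(resp.text)
    except requests.RequestException:
        fingerprint = "unreachable"
    groups[(tld, fingerprint)].append(domain)

# Many domains sharing one (TLD, fingerprint) pair points to a single campaign.
for key, members in groups.items():
    print(key, members)

Seeing many unrelated-looking domains collapse onto the same TLD and template fingerprint is the sort of signal that points to one operator running the whole network.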
When users click on these Google AI-recommended sites, they are taken through a series of redirects leading to various scams. Common destinations include fake captchas, YouTube-mimicking pages that trick visitors into subscribing to browser notifications, tech support scams, and fake giveaways. Browser notification scams are particularly problematic, as they allow the scammers to send a barrage of unwanted ads directly to the user’s desktop.
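For readers curious how such redirect chains can be traced, here is a minimal sketch using Python’s requests library to log every hop from a suspect page to its final landing page. The starting URL is a placeholder rather than one of the sites from the report, and any real investigation of this kind would be run in an isolated environment.

import requests

# Placeholder URL; a researcher would substitute a page surfaced by SGE.
START_URL = "http://example-spam-one.online/promo"

try:
    resp = requests.get(START_URL, timeout=10, allow_redirects=True)
except requests.RequestException as exc:
    print("request failed:", exc)
else:
    # requests keeps every intermediate response in resp.history.
    for hop in resp.history:
        print(hop.status_code, hop.url)
    print("final destination:", resp.status_code, resp.url)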
Some of the malicious redirects even attempt to push unwanted browser extensions that perform search hijacking and other potentially harmful actions. Meanwhile, the fake giveaway sites, such as those claiming to offer free iPhone 15 Pros, are designed to harvest personal information that can be sold to other scammers and marketers.
The conversational nature of Google’s AI-generated answers can make these malicious site recommendations seem more trustworthy to unsuspecting users. As AI becomes increasingly integrated into our online search experiences, it is clear that the information provided by these algorithms cannot be trusted blindly, and users must exercise caution before visiting any recommended sites.
Google has acknowledged the issue and says it is continuously updating its systems and ranking algorithms to combat spam. However, as scammers evolve their techniques to evade detection, keeping these results clean remains an ongoing challenge. Users are advised to be vigilant when clicking on AI-recommended sites and to unsubscribe from any unwanted browser notifications through their browser’s notification settings.
This is just the latest in a long-running trend of AI screwups by Google, once considered the mightiest AI company on earth before it was eclipsed by rivals including startup OpenAI, which launched ChatGPT well in advance of Google’s own generative AI products. Most recently, Google launched its Gemini AI, which promptly attempted to erase white people from history.
Read more at BleepingComputer here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.