Google has egg on its face after rushing to edit an advertisement for its Gemini AI tool before broadcast during the Super Bowl. The ad was found to contain false information about cheese consumption, likely “hallucinated” by the bumbling tech giant’s woke AI.
The Guardian reports that the tech giant’s Super Bowl commercial, which showcases how businesses can leverage “AI for every business,” depicts Gemini assisting a Wisconsin cheesemaker in crafting a product description. However, the ad included an erroneous statistic claiming that gouda accounts for “50% to 60% of global cheese consumption.”
The error was brought to light by blogger Nate Hake, who posted on X that the stat was an “AI hallucination” and “unequivocally false.” Hake pointed out that more reliable data suggests gouda is likely less popular than cheddar or mozzarella globally. He added, “I found the above AI slop example in 20 minutes, and on the first Super Bowl ad I tried fact-checking.”
“The Verge has an interactive screenshot showing how Google erased Gemini’s AI hallucination from its Super Bowl ad. Go to the article & hover your mouse to see for yourself! 👇 https://t.co/RJBUAAFEnq pic.twitter.com/grOEcgl8LL”
— Nate Hake (@natejhake) February 5, 2025
In response, Google executive Jerry Dischler claimed that the inaccuracy was not a “hallucination” – a phenomenon where AI systems invent untrue information – but rather a reflection of the false information present in the websites Gemini scrapes for data. Dischler stated, “Gemini is grounded in the web – and users can always check the results and references. In this case, multiple sites across the web include the 50-60% stat.”
Google subsequently issued a statement explaining that it remade the ad to remove the error after consulting the featured cheesemonger about how he preferred to handle the situation. “Following his suggestion to have Gemini rewrite the product description without the stat, we updated the user interface to reflect what the business would do,” the statement read.
This is not the first time Google’s AI tools have faced criticism for containing errors or providing unhelpful advice. In May of last year, the company’s AI Overviews search feature came under fire for suggesting the use of “non-toxic glue” when users searched for ways to make cheese stick better to pizza. AI-generated responses also claimed that geologists recommended humans eat one rock per day.
Google’s Gemini tool also encountered issues last year when it was “paused” after the company admitted it had “definitely messed up.” The tool generated images depicting historical figures, including popes, U.S. founding fathers, and German World War II soldiers, as racial minorities. Furthermore, the Gemini chatbot provided inconsistent responses when comparing the harm caused by libertarians and Stalin. These incidents prompted negative commentary from conservatives, including Elon Musk.
Read more at the Guardian here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.