Google CEO Sundar Pichai has finally responded to the recent uproar over racially inaccurate and biased images and text responses generated by the company’s new ultra-woke Gemini AI system.
The Verge reports that in an internal memo to employees, Google CEO Sundar Pichai acknowledged that the images and text produced by the company’s Gemini AI have “offended our users and shown bias.” He stated plainly that this is “completely unacceptable” and that Google “got it wrong.”
The controversy began last week when users discovered that Gemini was generating historically inaccurate images. For example, it depicted Nazi-era German soldiers as racially diverse, portrayed the Founding Fathers as non-white, and even rendered Google’s own co-founders Larry Page and Sergey Brin with incorrect races. Google’s market value plunged by roughly $90 billion following the scandal.
Wow, very educational pic.twitter.com/5mkzuEVtzn
— Michael Tracey (@mtracey) February 21, 2024
I asked Google Gemini to generate images of the Founding Fathers. It seems to think George Washington was black. pic.twitter.com/CsSrNlpXKF
— Patrick Ganley (@Patworx) February 21, 2024
In response, Google temporarily disabled Gemini’s image generation capabilities. Pichai’s memo marks the CEO’s first direct comments on the matter since it came to light. Pichai wrote that Google’s teams “have been working around the clock to address these issues.”
He admitted that “no AI is perfect, especially at this emerging stage of the industry’s development.” However, he said that the company knows “the bar is high” and will continue improving Gemini for “however long it takes.”
The memo outlined some of the steps Google is taking, including “structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations.” Pichai said the company will review what went wrong and “make the necessary changes.”
At the same time, Pichai emphasized Google’s “mission to organize the world’s information and make it universally accessible and useful.” He said this mission requires Google to provide “helpful, accurate, and unbiased information” in all its products, including AI ones.
Pichai also highlighted some of Google’s recent AI advancements. He said the foundational improvements to the company’s underlying models and the release of its open models have been “well received.” It remains to be seen whether Google can fully restore faith in Gemini and its other AI efforts after this very public failure. For now, the company is clearly still grappling with the complexity of developing unbiased AI systems.
Read the full memo below:
Hi everyone
I want to address the recent issues with problematic text and image responses in the Gemini app (formerly Bard). I know that some of its responses have offended our users and shown bias — to be clear, that’s completely unacceptable and we got it wrong.
Our teams have been working around the clock to address these issues. We’re already seeing a substantial improvement on a wide range of prompts. No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.
Our mission to organize the world’s information and make it universally accessible and useful is sacrosanct. We’ve always sought to give users helpful, accurate, and unbiased information in our products. That’s why people trust them. This has to be our approach for all our products, including our emerging AI products.
We’ll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes.
Even as we learn from what went wrong here, we should also build on the product and technical announcements we’ve made in AI over the last several weeks. That includes some foundational advances in our underlying models e.g. our 1 million long-context window breakthrough and our open models, both of which have been well received.
We know what it takes to create great products that are used and beloved by billions of people and businesses, and with our infrastructure and research expertise we have an incredible springboard for the AI wave. Let’s focus on what matters most: building helpful products that are deserving of our users’ trust.
Read more at the Verge here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.