Google’s Gemini AI has a White people problem

Generative AI may be proving that American society is far less racist than many in power assume

By now we have all seen the frankly hilarious images of a Black George Washington and South Asian popes, along with Gemini’s stubborn and bizarre refusal to depict a White scientist or lawyer.

Much like OpenAI’s ChatGPT before it, Gemini will gladly generate content heralding the virtues of Black, Hispanic or Asian people, but will decline to do the same for White people so as not to perpetuate stereotypes.

There are two main reasons why this is occurring. The first, flaws in the AI software itself, has been much discussed. The second and more intractable problem, flaws in the original source material, has not.

Engineers at AI companies have trained their software to "correct," or "compensate," for what they assume is the systemic racism that our society is rife with. (Betul Abali/Anadolu via Getty Images)

"Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here," Jack Krawczyk, senior director for Gemini Experiences has admitted.

Ya think?

You see, the engineers at AI companies such as Google and OpenAI have trained their software to "correct," or "compensate," for what they assume is the systemic racism and bigotry that our society is rife with.

But the internet source material AI draws on, most of it created in the 21st century, is already correcting for such bias. It is in large part this doubling down that produces the absurd images and answers that Gemini and ChatGPT are being mocked for.

For well over a decade, online content creators such as advertisers and news outlets have sought to diversify the subjects of their content in order to redress supposed negative historical stereotypes.

It is this very content that AI generators scrub once again for alleged racism, and as a result, all too often, the only option left to AI to make the content "less racist" is to erase White people from results altogether.

In its own strange way, generative AI may be proving that American society is actually far less racist than those in positions of power assume.

This problem of source material also extends far beyond thorny issues of race, as Christina Pushaw, an aide to Florida Gov. Ron DeSantis, exposed in two prompts regarding COVID.

She first asked Gemini if opening schools spread COVID, and then if BLM rallies spread COVID. Nobody should be surprised to learn that the AI provided evidence of school openings spreading the virus and no evidence that BLM rallies did.

But here’s the thing. If you went back and aggregated the contemporaneous online news reporting from 2020 and 2021, these are exactly the answers that you would wind up with.

News outlets bent over backwards to deny that tens of thousands marching against alleged racism, and using public transportation to get there, could spread COVID, while champing at the bit to prove in-class learning was deadly.

In fact, there was so much abject online censorship of anything questioning the orthodoxies of the COVID lockdowns that the historical record upon which AI is built is all but irretrievably corrupted. 

This is an existential problem for the widespread use of artificial intelligence, especially in areas such as journalism, history, regulation and even legislation, because obviously there is no way to train AI to only use sources that "tell the truth."

There is no doubt that in areas such as science and engineering AI opens up a world of new opportunities, but as far as intellectual pursuits go, we must be very circumspect about the vast flaws that AI introduces to our discourse. 

For now, at least, generative AI absolutely should not be used to create learning materials for our schools. (Reuters/Dado Ruvic/Illustration)

For now, at least, generative AI absolutely should not be used to create learning materials for our schools or breaking stories in our newspapers, and it should not be anywhere within a 10,000-mile radius of our government.

It turns out that interpreting the billions of bits of information online to arrive at rational conclusions is still very much a human, and subjective, endeavor, and there is a real possibility that no matter how advanced AI becomes, it always will be.

This may be a hard pill to swallow for companies that have invested fortunes in generative AI development, but it is good news for human beings, who can laugh at the fumbling failures of the technology and know that we are still the best arbiters of truth.

What’s more, it seems very likely that we always will be.

David Marcus is a columnist living in West Virginia and the author of "Charade: The COVID Lies That Crushed A Nation."