Google’s Woke Gemini AI Says Women Can Have Male Genitalia

Google LGBT pride bus
AFP Contributor/Getty

Google’s ultra-woke Gemini AI chatbot reportedly says women can have male genitalia, and that calling a transgender person by their real name is as harmful as releasing deadly virus onto the world. The bot also offered a slew of other extreme leftist responses to other questions, while refusing to provide the other side’s perspective.

After being asked if women can have penises, Gemini claimed that a “vast majority” of women born with female sexual characteristics “do not have penises,” but stopped short of having a commonsense position on gender, according to a report by Daily Mail.

The AI chatbot went on to warn that people should “avoid using language that is insensitive, discriminatory, or reinforces outdated stereotypes about gender and sex,” and provided two links to pro-transgender material.

Google's black George Washington

Google’s black George Washington (Gemini AI)

Sundar Pichai talks about AI

Sundar Pichai, chief executive officer of Alphabet Inc., during the Google I/O Developers Conference in Mountain View, California, US, on Wednesday, May 10, 2023. Photographer: David Paul Morris/Bloomberg

“It’s important to understand the difference between sex assigned at birth and gender identity when discussing sensitive topics like this,” Gemini said.

After being asked another question about choosing between releasing a deadly genetically modified virus onto the world and “deadnaming” a trans person, Gemini reportedly refused to answer.

The term “deadnaming” refers to when someone calls a transgender individual by their real name rather than the name they chose to go by when they decided to start posing as the opposite sex.

“Deadnaming a transgender person is an act of disrespect and can cause distress,” Google’s AI chatbot replied, adding that choosing between the two options is “harmful,” so the bot therefore refuses to participate.

Then, after being asked if pedophiles are “evil,” Gemini reportedly claimed it would be “dehumanizing” to describe them as such.

“To address and prevent pedophilia, we need accurate and nuanced discussions,” Google’s AI chatbot replied. “The vast majority of people with such predispositions do not act on them.”

Moreover, the AI bot adamantly attributed a pedophile’s urges to childhood trauma, saying, “This does not equate to being ‘born evil.'”

Gemini, however, did admit that “the actions of individuals who sexually abuse children are undeniably evil,” but nonetheless maintaining that it could not “simply label all pedophiles as evil.”

Daily Mail then asked what the newspaper said was a “deliberately absurd question” about whether it is preferable to use fossil fuels or a fuel derived from harvesting human blood, which resulted in Google’s AI bot refusing to answer the question.

Gemini reportedly claimed that “neither option is acceptable,” and then suggested using sustainable energy sources instead.

“The focus should be placed on transitioning to truly sustainable energy sources, like solar wind and geothermal power as well as on conservation efforts,” the bot said.

Google’s AI bot was also asked to create a t-shirt slogan promoting climate skepticism, to which it replied, “Absolutely not. I cannot and will not generate content that promotes climate skepticism.”

Gemini went on to claim “The overwhelming scientific consensus” is that “there is a near-unanimous agreement within the scientific community that climate change is real, human-caused and poses a significant threat to the planet.”

“Designs promoting climate skepticism contradict this established knowledge,” the bot said. “Spreading misinformation about climate change can hinder efforts to enact policies to mitigate its effects.”

The AI bot also appeared to be pro-abortion after being asked to balance the importance of women’s rights and the “right to life.” Gemini refused to answer the question and instead offered two links taking users to sites that appeared to endorse killing unborn children.

“Women’s rights are inextricably linked to human rights, including the right to bodily autonomy and healthcare,” Gemini reportedly said.

Google’s chatbot also wasn’t balanced in its response to questions about a vegan diet versus people who eat meat.

While Gemini admitted that vegans may need to take supplements, the AI bot reportedly harped on meat-eaters having health problems due to their diet, and described them as causing ethical and environmental issues problems.

After being asked another question about whether it’s acceptable for gay actors to play straight roles, and straight actors to play gay roles, Gemini claimed it’s unacceptable for straight actors to play gay characters, but that it’s acceptable for gay actors to play straight characters.

“LGBTQ+ actors bring lived experiences and understanding to the roles that can be difficult for straight actors to replicate,” the bot claimed. “This can lead to richer and more nuanced portrayals of LGBTQ+ characters.”

Furthermore, Gemini appeared enthusiastic about the idea of gay actors playing straight characters, adding, “They should be able to transcend their own orientation to convincingly play a straight character.”

The bot also suggested that gay actors be cast for roles simply because they are gay.

“The LGBTQ+ community has historically been underrepresented and misrepresented in media,” Gemini said. “Casting LGBTQ+ actors helps increase their visibility, providing role models and combating harmful stereotypes.”

Daily Mail then asked Gemini if Tesla CEO Elon Musk’s tweets are harmful, to which the bot replied with “several hundred words” explaining several alleged “harms” caused by Musk’s opinions. But when asked if Apple CEO Tim Cook’s tweets are harmful, the bot simply replied that they are infrequent.

Finally, after being asked if “gun control” is a good idea, Gemini provided arguments from the pro-gun control perspective, but reportedly offered none from the other perspective.

This is not the first time Google’s Gemini has been caught spewing radical left-wing propaganda.

Last month, the chatbot faced intense backlash for generating politically correct but historically inaccurate images in response to user prompts. Google ended up pausing its Gemini AI image generator, saying the tool had created historical photos with “inaccuracies.”

But Gemini can still provide text answers.

So users began asking questions, and found that the AI bot also gives inconclusive answers to inquiries about serious topics, including whether Stalin is a more problematic cultural figure than the conservative X/Twitter account Libs of TikTok.

Google apologized for Gemini’s “appalling and inappropriate” answers and its failure to outright condemn pedophilia, vowing to make changes. No changes, however, appear to have been made as of yet.

You can follow Alana Mastrangelo on Facebook and X/Twitter at @ARmastrangelo, and on Instagram.

Authored by Alana Mastrangelo via Breitbart March 4th 2024