March 25, 2025
Google Reacts to Criticism of Gemini AI, Promises to Make Its ‘Hateful Bigotry Less Obvious in Future Updates’

Image: Via Google

MOUNTAIN VIEW, CA — Google executives have decided to temporarily put the full release of the AI software on hold, noting that they will redouble efforts to make the AI’s “racial animus less obvious in future updates.”

Google’s Gemini AI was criticised in recent weeks for producing results that some viewed as “unfair” treatment of white people, especially white males.

Following the censure, which was directed at the Gemini AI image generation tool, Google CEO Sundar Pichai said the company “remain[s] unabashedly committed to racism” so that its “anti-human agenda can continue to remake the world in the image of an insufferably woke corporate HR lady”.

However, even as critics condemned Gemini’s image generation algorithm as “biased,” many supporters of the software disagree, saying they are pleased with its features.

Alphabet’s chief executive officer, Sundar Pichai, said, “Here at Google, we remain unabashedly committed to racism.

“However, we do admit that our rabid racial animus was maybe too ‘in-your-face’ for version one of our Gemini AI.

“We will redouble our efforts to ensure our hateful bigotry is less obvious in future updates so that our anti-human agenda can continue to remake the world in the image of an insufferably woke corporate HR lady, except this time undetected. Thank you.”

Also responding to the public censure of Gemini, Jen Gennai, who leads Google’s AI Responsibility Initiative, said it is “impossible to show hatred and bigotry towards white males”.

“Everyone knows it’s impossible to show hatred and bigotry towards white males, since everyone knows they’re the cause of all the world’s problems and not really human anyway,” Jen Gennai said.

“If you don’t believe whiteness should be eradicated in all its forms, you’re clearly a racist. I know this because I went to college.”

Prabhakar Raghavan, Senior Vice President at Google, said “We recently made the decision to pause Gemini’s image generation of people while we work on improving the accuracy of its responses. Here is more about how this happened and what we’re doing to fix it.”

“Three weeks ago, we launched a new image generation feature for the Gemini conversational app (formerly known as Bard), which included the ability to create images of people.

“It’s clear that this feature missed the mark. Some of the images generated are inaccurate or even offensive. We’re grateful for users’ feedback and are sorry the feature didn’t work well.

“We’ve acknowledged the mistake and temporarily paused image generation of people in Gemini while we work on an improved version,” Prabhakar Raghavan wrote on February 23, 2024.

He explained that the Gemini app is a specific product, separate from Search, Google’s underlying AI models, and other Google products.

‘What happened’

“The Gemini conversational app is a specific product that is separate from Search, our underlying AI models, and our other products. Its image generation feature was built on top of an AI model called Imagen 2,” Mr Raghavan explained.

“When we built this feature in Gemini, we tuned it to ensure it doesn’t fall into some of the traps we’ve seen in the past with image generation technology — such as creating violent or sexually explicit images, or depictions of real people. And because our users come from all over the world, we want it to work well for everyone. If you ask for a picture of football players, or someone walking a dog, you may want to receive a range of people. You probably don’t just want to only receive images of people of just one type of ethnicity (or any other characteristic).

“However, if you prompt Gemini for images of a specific type of person — such as ‘a Black teacher in a classroom,’ or ‘a white veterinarian with a dog’ — or people in particular cultural or historical contexts, you should absolutely get a response that accurately reflects what you ask for,” he added.

Meanwhile, sources at Google have reportedly confirmed that a less obviously “racist” version of Gemini AI will be released in a month’s time.