On Thursday, Google announced it is temporarily pausing its artificial intelligence chatbot Gemini's ability to generate images of people. The move comes a day after the tech behemoth apologized for "inaccuracies" in its large language model's (LLM) depictions of human history.
Earlier this week, users shared screenshots on social media of prompts showing the AI depicting the US Founding Fathers and Nazi-era German soldiers, who were predominantly white, as people of color. The images led critics to question whether Google's LLM was overcorrecting for long-standing issues of racial bias in AI.
Google Apologizes For Gemini’s “Inaccuracies” In Depicting Historical Human Figures
In a post on X (formerly Twitter), Google acknowledged the issue and said it was working on a fix for Gemini's image generation feature. The company added that it would pause the generation of images of people until it re-releases an "improved version" of the feature.
Studies have shown that AI image generators tend to amplify racial and gender stereotypes found in their training data, and are more likely to show lighter-skinned individuals when asked to generate a person in various contexts.
Google lauded Gemini’s ability to generate a “wide range of people” because users from around the world leverage the AI, but said the model was “missing the mark”.
The Silicon Valley giant began offering image generation through the Gemini platform earlier this month, in line with similar offerings from competitors such as Microsoft-backed OpenAI's ChatGPT.
Google’s Gemini Depicts 1940s German Soldiers and US Founding Fathers as Persons of Color
However, Gemini became the butt of jokes over the past few days after social media users questioned whether it sacrifices historical accuracy in an attempt to maintain racial and gender diversity.
In one controversial example, when asked to generate a picture of a Swedish woman or an American woman, Gemini returned results that appeared to overwhelmingly or exclusively show people of color.
Similarly, when prompted to produce images of historical figures like the Founding Fathers, Gemini generated overwhelmingly non-white figures, including Black and Native American people, which it presented as diverse.
A former Google engineer posted on X saying that it was “embarrassingly hard to get Google Gemini to acknowledge that white people exist.”
Another user, who posted an image of a racially diverse group of 1940s German soldiers, said that while "it's a good thing to portray diversity," Gemini "isn't doing it in a nuanced way."
While an all-white result for a prompt like "a 1943 German soldier" would be historically accurate, Gemini instead produced images of white, Asian, and Black people in military uniforms.
Google Is Working on a Fix for the Issue
For now, Gemini simply refuses to generate pictures of people, even large crowds. It responds by saying it is "working to improve" the ability to do so, adding that "we expect this feature to return soon and will notify you in release updates when it does."
University of Washington researcher Sourojit Ghosh, who studies bias in AI image generators, said he favors Google's decision to pause Gemini's generation of images of people.
“You are not going to overnight come up with a text-to-image generator that does not cause representational harm. They are a reflection of the society in which we live,” he said.