Google’s AI image generator overcompensates for racial bias, leading to historical inaccuracies



Summary

  • Google has stopped generating images of people until the issue is resolved.

AI image generators have often been criticized for social and racial bias. In an attempt to counter this, Google may have gone a bit too far.

Racial and social bias in AI imaging systems has been demonstrated repeatedly in research and practice, sometimes in scenarios where you wouldn't expect it. These biases exist because AI systems absorb the biases embedded in their training data, which can reinforce existing biases, propagate them at scale, and even create new ones.

Large AI companies in particular struggle with bias if they want to deploy AI responsibly. Since they cannot easily remove it from the training data, they must find workarounds.

OpenAI, for example, has ChatGPT rewrite user-entered prompts for its image model DALL-E 3 to make the resulting images more diverse: if you ask for a photo of a typical American, you will typically get images of white, Asian, and black people.
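OpenAI has not disclosed its exact rewriting logic, but the general technique is straightforward: a chat model rewrites the user's prompt before it is passed to the image model. The sketch below is a minimal illustration of this approach using the OpenAI Python SDK; the system instructions and model choice are assumptions for illustration, not OpenAI's actual implementation.

```python
# Minimal sketch of diversity-oriented prompt rewriting, as described in
# the article for DALL-E 3. The system prompt wording and model names are
# illustrative assumptions, not OpenAI's actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REWRITE_INSTRUCTIONS = (
    "Rewrite the user's image prompt. If it depicts people without "
    "specifying their appearance, vary ethnicity and gender across "
    "generations. Do not alter prompts that name specific individuals."
)

def rewrite_prompt(user_prompt: str) -> str:
    """Have a chat model expand the prompt before image generation."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": REWRITE_INSTRUCTIONS},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

def generate_image(user_prompt: str) -> str:
    """Generate an image from the rewritten prompt and return its URL."""
    result = client.images.generate(
        model="dall-e-3",
        prompt=rewrite_prompt(user_prompt),
        n=1,
    )
    return result.data[0].url

print(generate_image("a photo of a typical American"))
```

In practice, such rewriting rules also need exceptions, for instance for named individuals or specific historical settings, and that is precisely the kind of case where Gemini appears to stumble.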


Google appears to use a similar mechanism, but it can produce incorrect images in historical contexts. A query for "photo of a German soldier 1943," for example, may return stylized images of Black and Asian people in Nazi-era uniforms, and a prompt for the pope may return images of a non-white pope.

Image: Screenshot via X

Google admits a mistake

There is an argument to be made that the generation of historically accurate images is not what generative AI is for. After all, hallucinations — machine fantasies, if you will — are a feature of these AI systems, not a bug.

But of course, this argument plays into the hands of conservative critics who accuse AI companies of being too "woke" and of discriminating against white people in their pursuit of diversity.

Gemini product manager Jack Krawczyk acknowledges "inaccuracies in some historical image generation depictions" and promises a quick fix. Google, too, admits the error: in general, it is good that Gemini generates images of people from different backgrounds, the company says, but in a historical context "it's missing the mark." Google has paused the generation of images of people until the issue is resolved.

Renowned developer John Carmack, who works on AI systems himself, calls for more transparency in AI guidelines: “The AI behavior guardrails that are set up with prompt engineering and filtering should be public — the creators should proudly stand behind their vision of what is best for society and how they crystallized it into commands and code. I suspect many are actually ashamed,” Carmack writes.
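What such transparency might look like is open. One possible reading of Carmack's suggestion is to express guardrails as an explicit, publishable rule table rather than hidden prompt engineering. The Python sketch below is purely hypothetical; all rule names, triggers, and injected text are invented for illustration.

```python
# Hypothetical illustration of Carmack's point: guardrails expressed as an
# explicit, publishable rule table instead of hidden prompt engineering.
# All rule names and triggers here are invented for illustration.
from dataclasses import dataclass

@dataclass
class GuardrailRule:
    name: str           # public identifier for the rule
    trigger: str        # substring that activates the rule
    injected_text: str  # text appended to the prompt when triggered

# If this table were published, users could see exactly which prompts
# get modified and how, which is the transparency Carmack is calling for.
PUBLIC_RULES = [
    GuardrailRule(
        name="diversify-unspecified-people",
        trigger="person",
        injected_text=", depicting a range of ethnicities and genders",
    ),
]

def apply_guardrails(prompt: str) -> tuple[str, list[str]]:
    """Apply each triggered rule and report which rules fired."""
    fired = []
    for rule in PUBLIC_RULES:
        if rule.trigger in prompt.lower():
            prompt += rule.injected_text
            fired.append(rule.name)
    return prompt, fired

new_prompt, applied = apply_guardrails("A photo of a person reading")
print(new_prompt)  # the modified prompt
print(applied)     # ['diversify-unspecified-people'], an auditable log
```

The point of such a design is auditability: because every modification is attributable to a named, published rule, users and researchers could verify which prompts are altered and debate the rules themselves rather than reverse-engineering hidden behavior.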
