News

Google's AI Technology Says Depicting White People Is "Harmful"

Countless people on X are sharing examples from Google's AI image-generating technology, and the results suggest a strong bias against white people.

By Carmen Schober
(Image: Pexels/Noureddine Dahmani)

In one case, after being prompted to create images of a white man, Google's AI claimed it couldn't create images of people because doing so could promote "harmful stereotypes." The program had no such objection, however, to creating images of people of other ethnicities.

In another instance, an X user shared screenshots suggesting that the program would readily create images of a "proud African family" but refused to create a comparable image of a "proud European family." Since there's nothing inherently "harmful" about images of white people or Europeans, Google's seemingly arbitrary guidelines indicate that the program was trained to favor an extremely divisive, left-wing approach to race and representation.

As AI continues to develop at warp speed, users should be aware of algorithms' built-in biases, particularly the left-wing ideologies often aggressively pushed by legacy media groups and technocrats. This also raises the broader question of how machine learning models are trained and which data sets they are exposed to during development. If an AI program is trained to push a political ideology, its results will almost always be inaccurate and exclusionary, as users are now seeing with Google's AI.

The next challenge for Google is developing AI that can understand and appropriately apply ethical considerations and historical and cultural context to the user's creative process without pushing a political agenda. Let's see whether it can correct course and deliver.
