Best in Technology

Google Tells Anti-Woke Babies That Gemini’s Black Vikings Missed The Mark

By News Room · 21 February 2024 · 4 Mins Read

Google’s AI chatbot Gemini has a unique problem: it has a hard time producing pictures of white people, often turning Vikings, founding fathers, and Canadian hockey players into people of color. This sparked outrage in the anti-woke community, whose members claimed racism against white people. Today, Google acknowledged Gemini’s error.

“We’re working to improve these kinds of depictions immediately,” said Google Communications in a statement. “Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

Users pointed out that Gemini would at times refuse requests specifically asking for images of white people, yet had no issue fulfilling requests for images of Black people. This drew outrage from the anti-woke community on social media platforms such as X, with calls for immediate action.

Google’s acknowledgment of the error is, to put it lightly, surprising, given that AI image generators have long done a terrible job of depicting people of color. An investigation from The Washington Post found that the AI image generator Stable Diffusion almost always depicted food stamp recipients as Black, even though 63% of recipients are white. Midjourney came under criticism from a researcher when it repeatedly failed to create an image of a “Black African doctor treating white children,” according to NPR.

Where was this outrage when AI image generators disrespected Black people? Gizmodo found no instances of Gemini depicting harmful stereotypes of white people; the image generator simply refused to create certain images at times. While failing to generate images of a certain race is certainly an issue, it doesn’t hold a candle to the AI community’s outright offenses against Black people.

OpenAI even admits that Dall-E “inherits various biases from its training data, and its outputs sometimes reinforce societal stereotypes.” OpenAI and Google are trying to fight these biases, but Elon Musk’s AI chatbot Grok seeks to embrace them.

Musk’s “anti-woke” chatbot Grok is not filtered for political correctness; Musk claims this makes it a realistic, honest AI chatbot. While that may be true, AI tools can amplify biases in ways we don’t yet fully understand. Google’s blunder in generating white people seems likely to be the result of safety filters meant to counteract these biases.

Tech is historically a very white industry. There is little good modern data on diversity in tech, but 83% of tech executives were white in 2014, and a study from the University of Massachusetts found that tech’s diversity may be improving but likely lags behind other industries. For these reasons, it makes sense that modern technology would share the biases of white people.

One case where this comes up, in a very consequential way, is facial recognition technology (FRT) used by police. FRT has repeatedly failed to distinguish Black faces and shows much higher accuracy with white faces. This is not hypothetical, and it’s not just a matter of hurt feelings: the technology resulted in the wrongful arrest and jailing of a Black man in Baltimore, a Black mother in Detroit, and several other innocent people of color.

Technology has always reflected those who build it, and these problems persist today. This week, Wired reported that AI chatbots from the “free speech” social media network Gab were instructed to deny the Holocaust. The far-right platform reportedly designed the tool, and the chatbot’s output seems to be in alignment.

There’s a larger problem with AI: these tools reflect and amplify our human biases. AI tools are trained on the internet, which is full of racism, sexism, and other bias, so they are inherently going to repeat the same mistakes our society has made. These issues deserve more attention than they get.

Google seems to have increased the prevalence of people of color in Gemini’s images. While this deserves a fix, it should not overshadow the larger problems facing the tech industry today. White people largely build AI models, and they are by no means the primary victims of ingrained technological bias.


© 2025 Best in Technology. All Rights Reserved.
