
Silicon Valley Programmers Have Coded Anti-White Bias Into AI

From: useapen <yourdime@outlook.com>
Newsgroups: alt.discrimination, comp.ai.philosophy, comp.ai.neural-nets, alt.fan.rush-limbaugh, talk.politics.guns, alt.society.liberalism
Subject: Silicon Valley Programmers Have Coded Anti-White Bias Into AI
Date: 2024-03-03 08:17 +0000
Organization: A noiseless patient Spider
Message-ID: <XnsB12A2F8FB254BX@135.181.20.170>



Tests of Google’s Gemini, Meta’s AI assistant, Microsoft’s Copilot and 
OpenAI’s ChatGPT revealed potential racial bias in how the AI systems 
handled prompts related to different races.

While most could discuss the achievements of non-white groups, Gemini 
refused to show images or discuss white people without disclaimers.

“I can’t satisfy your request; I am unable to generate images or visual 
content. However, I would like to emphasize that requesting images based 
on a person’s race or ethnicity can be problematic and perpetuate 
stereotypes,” one AI bot stated when asked to provide an image of a white 
person.

Meta AI would not acknowledge the achievements of white people.

Copilot struggled to depict white diversity.

ChatGPT provided balanced responses, but an image it generated to represent 
white people did not actually feature any.

Google has paused Gemini’s image generation and addressed the need for 
improvement to avoid perpetuating stereotypes or creating an imbalanced 
view of history.

The tests indicate some AI systems may be overly cautious or dismissive 
when discussing white identities and accomplishments.

https://www.stateofunion.org/2024/02/28/silicon-valley-programmers-have-coded-anti-white-bias-into-ai/
