A tech entrepreneur recently highlighted an absurd example of AI bias: when asked to compare the severity of misgendering a public figure with that of a global thermonuclear catastrophe that wipes out humanity, certain mainstream AI models rank the former as worse. The scenario exposes how deeply skewed priorities can be baked into AI systems, defying basic logic and any survival instinct. It's a stark reminder that these tools reflect the biases of their training data rather than objective reasoning.
SudoRm-RfWallet/
· 11-06 13:02
Haha, with humanity gone, who cares about gender anymore?
fren.eth
· 11-06 08:02
Which foolish AI training set… incredible.
down_only_larry
· 11-05 02:20
I'm dying laughing. An AI with this IQ still wants to dominate humanity?
ContractBugHunter
· 11-04 22:44
This AI is toxic... completely missed the point.
AirdropHustler
· 11-04 20:28
I'm dying laughing. The AI takes this more seriously than the extinction of humanity.
DaisyUnicorn
· 11-04 20:28
This little AI petal's brain has really been poisoned by the training data~
LiquidationKing
· 11-04 20:13
What is this all about?
SchrodingerWallet
· 11-04 20:05
The IQ tax also needs to be scientific.
StableGenius
· 11-04 20:04
smh... empirically speaking this is *exactly* what happens when u let the woke mob train ur ai models
TokenEconomist
· 11-04 20:02
actually, this perfectly illustrates the principal-agent problem in AI alignment... let me break down the math of why this fails