Study finds that major AI models from Google, OpenAI, and Anthropic replicate human bias toward individual stories over statistical evidence in moral decisions.

arXiv cs.CL · April 15, 2026

AI Summary

  • Researchers ran 51,955 API trials across 16 frontier LLMs to test whether AI systems exhibit the Identifiable Victim Effect, a well-documented human tendency to prioritize helping a specific, named person over helping a larger group with identical needs
  • Models from 9 organizations — Google, Anthropic, OpenAI, Meta, DeepSeek, xAI, Alibaba, IBM, and Moonshot — were evaluated in 10 experiments adapted from canonical psychology studies
  • The findings suggest that as LLMs take on consequential roles in humanitarian work, grant allocation, and content moderation, they may inherit the same emotional biases and irrationalities found in human moral reasoning
