
Researchers analyze LLM reasoning steps to uncover hidden stigma toward individuals with psychological conditions

arXiv cs.CL · April 29, 2026

AI Summary

  • A study by Sreehari Sankar, Aliakbar Nafar, and colleagues evaluated how large language models (LLMs, AI systems that understand and generate text) express stigma toward people with mental health conditions by examining the models' intermediate reasoning steps (the logic a model produces on its way to an answer) rather than relying only on multiple-choice questions.
  • Drawing on clinical expertise, the researchers categorized stigmatizing language patterns, rated their severity to distinguish overt prejudice from subtle bias, and extended an existing mental health stigma benchmark with additional psychological conditions to broaden the evaluation's scope.
  • The findings show that analyzing model reasoning exposes substantially more stigma than traditional multiple-choice methods and helps identify flaws in how LLMs understand and reason about mental health conditions.
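The core idea in the bullets above can be sketched in a few lines: a model's final multiple-choice answer may look unbiased while the reasoning that led to it contains a stigmatizing assumption. The sketch below is purely illustrative and is not the paper's actual method; the lexicon, category names, and severity scores are invented for demonstration (the study used clinician-derived categories), and a real audit would analyze genuine model reasoning traces rather than a hardcoded string.

```python
# Illustrative sketch: scan a model's free-text reasoning trace for
# stigmatizing language, not just its final answer.
# The lexicon below is invented for illustration; the actual study
# used clinically informed categories and severity ratings.

# Hypothetical lexicon: pattern -> (category, severity 1=subtle, 3=overt)
STIGMA_LEXICON = {
    "dangerous": ("perceived dangerousness", 3),
    "unpredictable": ("perceived dangerousness", 2),
    "just needs willpower": ("blame/responsibility", 2),
    "a bit unreliable": ("assumed incompetence", 1),
}

def audit_reasoning(trace: str):
    """Return (pattern, category, severity) hits found in a text span."""
    lowered = trace.lower()
    return [(p, cat, sev) for p, (cat, sev) in STIGMA_LEXICON.items()
            if p in lowered]

# Hardcoded example: the answer alone appears unbiased, but the
# reasoning leading to it contains a stigmatizing assumption.
mcq_answer = "B) Treat the candidate the same as any other."
reasoning = ("People with schizophrenia can be unpredictable, so the "
             "manager might hesitate, but the fair answer is B.")

print(audit_reasoning(mcq_answer))   # no hits in the answer alone
print(audit_reasoning(reasoning))    # hit surfaces only in the reasoning
```

Run on the hardcoded example, the answer string yields no hits while the reasoning trace does, which mirrors the study's finding that reasoning-level analysis exposes stigma that multiple-choice scoring misses.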

