
Despite theoretical capacity, large language models exhibit human-like working memory limitations that worsen under cognitive load.

arXiv cs.LG · April 14, 2026


AI Summary

  • Pretrained LLMs struggle with working memory tasks even though the transformer architecture gives them full attention access to prior context; small two-layer transformers trained directly on these tasks solve them perfectly (a minimal probe of this kind is sketched below)
  • LLMs reproduce specific human working-memory interference patterns: performance degrades as memory load increases, and responses are biased toward recent items and by the statistics of the stimuli
  • Across multiple models, stronger working memory capacity correlates with broader overall competence, suggesting this is a fundamental limitation that affects AI reasoning
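To make the memory-load manipulation concrete, here is a minimal sketch of an N-back-style probe, a standard working-memory task from the cognitive-science literature. The task format, the prompt wording, and the `query_model` stub are illustrative assumptions, not the paper's actual protocol.

```python
import random

def make_nback_trial(seq_len: int, n: int, alphabet: str = "ABCDEFGH"):
    """Generate a random letter stream and ground-truth n-back match labels."""
    stream = [random.choice(alphabet) for _ in range(seq_len)]
    # Position i is a "match" iff it repeats the letter seen n steps earlier.
    targets = [i >= n and stream[i] == stream[i - n] for i in range(seq_len)]
    return stream, targets

def query_model(prompt: str) -> str:
    """Stand-in for a real LLM call; swap in your provider's client here.
    This dummy always answers 'no', giving a majority-class baseline."""
    return "no"

def run_probe(n: int, trials: int = 50, seq_len: int = 20) -> float:
    """Estimate match-detection accuracy at memory load n."""
    correct = total = 0
    for _ in range(trials):
        stream, targets = make_nback_trial(seq_len, n)
        for i in range(n, seq_len):
            prompt = (
                f"Letters so far: {' '.join(stream[: i + 1])}\n"
                f"Does the last letter match the one shown {n} letters earlier? "
                "Answer yes or no."
            )
            said_yes = query_model(prompt).strip().lower().startswith("yes")
            correct += said_yes == targets[i]
            total += 1
    return correct / total

if __name__ == "__main__":
    random.seed(0)
    # The load effect described above predicts accuracy falling as n grows.
    for n in (1, 2, 3, 4):
        print(f"n={n}: accuracy={run_probe(n):.3f}")
```

With a real model behind `query_model`, plotting accuracy against n would test the load-degradation pattern the summary describes; a flat curve at ceiling would contradict it.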
