Despite theoretical capacity, large language models exhibit human-like working memory limitations that worsen under cognitive load.
arXiv cs.LG · April 14, 2026
AI Summary
• Pretrained LLMs struggle with working memory tasks even though transformers have full attention access to prior context, whereas simple two-layer transformers master the same tasks perfectly
• LLMs reproduce specific human working memory interference patterns: performance degrades as memory load increases and is biased by recency and stimulus statistics
• Across multiple models, stronger working memory capacity correlates with broader overall competence, suggesting working memory is a fundamental constraint on AI reasoning; a sketch of one standard probe follows
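
To make the "working memory tasks" in the first bullet concrete, here is a minimal sketch of one standard probe from the human literature, the N-back task, where increasing n raises memory load. The summary does not specify the paper's exact protocol, so the prompt wording and the `query_model` hook are hypothetical placeholders, not the authors' method.

```python
# Minimal sketch of an N-back working-memory probe for an LLM.
# Assumptions (not from the paper): the task format, the prompt wording,
# and the `query_model` callable are hypothetical placeholders.
import random
from typing import Callable, List

def make_stream(length: int, alphabet: str = "ABCDEF") -> List[str]:
    """Generate a random letter stream for an N-back task."""
    return [random.choice(alphabet) for _ in range(length)]

def run_n_back(stream: List[str], n: int,
               query_model: Callable[[str], str]) -> float:
    """Present the stream one item at a time; after each item beyond the
    first n, ask whether it matches the item n steps back. Returns accuracy."""
    correct, total = 0, 0
    shown: List[str] = []
    for letter in stream:
        shown.append(letter)
        if len(shown) <= n:
            continue  # not enough history yet to pose an n-back question
        prompt = (
            f"Letters so far: {' '.join(shown)}\n"
            f"Does the last letter match the letter {n} positions before it? "
            "Answer yes or no."
        )
        answer = query_model(prompt).strip().lower()
        truth = "yes" if shown[-1] == shown[-1 - n] else "no"
        correct += int(answer.startswith(truth))
        total += 1
    return correct / total if total else 0.0

if __name__ == "__main__":
    random.seed(0)
    stream = make_stream(length=20)
    # Stub model that always answers "no" gives a chance-level baseline.
    acc = run_n_back(stream, n=2, query_model=lambda p: "no")
    print(f"2-back accuracy (stub model): {acc:.2f}")
```

Sweeping n upward while holding the stream fixed is the natural way to look for the load-dependent degradation described in the second bullet, assuming the paper uses a comparable load manipulation.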