Developers discover AI coding assistants struggle with messy legacy codebases — context limits force skilled engineers to supervise every suggestion

Hacker News · April 24, 2026

AI Summary

  • A software developer with 20+ years of experience at a medical-device company tested Claude (an AI coding assistant) on their 20-year-old codebase and found that it fails repeatedly: because the assistant cannot retain context for the entire system across work sessions, it generates bloated code and contradicts existing patterns without understanding the bigger picture.
  • The core limitation: AI assistants like Claude are trained on clean, well-documented code examples, but real-world systems at established companies mix multiple coding styles, outdated design patterns, and undocumented decisions accumulated over decades. When the AI sees only a fragment of code in isolation, its suggestions conflict with the system's hidden rules.
  • For developers in regulated industries (healthcare, finance, manufacturing), this means AI cannot yet work unsupervised: a human expert must review every suggestion to catch errors that could break critical systems. This inverts the productivity promise. Instead of AI reducing work, it currently adds review overhead unless the developer already knows the entire codebase by heart.
