
Developers discover AI coding assistants struggle with messy legacy codebases — context limits force skilled engineers to supervise every suggestion

Hacker News · April 24, 2026

AI Summary

  • A software developer with 20+ years of experience at a medical-device company tested Claude (an AI coding assistant) on their 20-year-old codebase and found it fails repeatedly because the AI cannot retain context of the entire system across work sessions — it generates code bloat and contradicts existing patterns without understanding the bigger picture.
  • The core limitation: AI assistants like Claude are trained on clean, well-documented code examples, but real-world systems at established companies mix multiple coding styles, old design patterns, and undocumented decisions accumulated over decades. When the AI sees only a fragment of code in isolation, it makes suggestions that conflict with the system's hidden rules.
  • For developers working in regulated industries (healthcare, finance, manufacturing), this means AI cannot yet be a solo tool — a human expert must review every suggestion to catch errors that could break critical systems. This flips the productivity promise: instead of AI reducing work, it currently adds review overhead unless the developer already knows the entire codebase by heart.
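As a hypothetical illustration of the "hidden rules" problem described above (the function names and conventions here are invented for this sketch, not taken from the article): a legacy codebase may signal errors with sentinel return values, while an assistant that sees only one function in isolation might rewrite it to raise exceptions, which is idiomatic on its own but silently breaks every caller written against the old convention.

```python
# Invented example: a legacy convention where errors are signaled by
# returning None, and call sites check the result explicitly.

def read_sensor_legacy(raw: str):
    """Legacy style: return None on bad input; callers test for it."""
    try:
        return float(raw)
    except ValueError:
        return None

def read_sensor_ai_suggestion(raw: str) -> float:
    """A rewrite an assistant might propose when shown only this function:
    let bad input raise. Correct in isolation, but it violates the
    codebase-wide sentinel convention the assistant cannot see."""
    return float(raw)  # raises ValueError on bad input

def caller(reader, raw: str) -> str:
    # Existing call site written against the legacy convention.
    value = reader(raw)
    if value is None:  # hidden rule: None means "sensor fault"
        return "fault"
    return f"ok: {value}"

print(caller(read_sensor_legacy, "bad"))  # handled gracefully: "fault"

try:
    print(caller(read_sensor_ai_suggestion, "bad"))
except ValueError:
    # The "improved" version crashes legacy call sites instead of
    # returning "fault" — exactly the kind of conflict a reviewer
    # who knows the whole codebase must catch.
    print("unhandled ValueError")
```

The point is not that either style is wrong, but that the correct choice depends on context spread across the whole system, which is what a fragment-at-a-time assistant lacks.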
