Developer finds smaller 27B-parameter model outperforms GPT-4-class 405B model at creating immersive tabletop RPG narratives

r/LocalLLaMA · April 19, 2026

AI Summary

  • Creator built an open-source, model-agnostic tabletop GM tool designed to run D&D sessions on a variety of LLMs with tool-calling support
  • Tested 8 different language models using a custom 'narrative quality probe' to evaluate atmospheric storytelling rather than just tool-call compliance
  • A 27-billion-parameter model demonstrated superior narrative quality compared to a 405-billion-parameter model, suggesting that parameter count alone doesn't guarantee better creative writing
  • The tool successfully introduced the creator's family to fantasy RPGs, proving practical value despite criticism from D&D purists about AI integration
