Developer reports Qwen3.6-35B model running locally on MacBook Pro M5 Max matches Claude's performance with 64K context window

r/LocalLLaMA · April 19, 2026

AI Summary

  • User successfully deployed Qwen3.6-35B with 8-bit quantization on MacBook Pro M5 Max (128GB RAM) via LM Studio and OpenCode
  • Model demonstrates strong performance on complex coding tasks including multi-step debugging of Android serialization issues with multiple tool calls
  • Response speed is notably fast and the model handles long research contexts efficiently, making it a viable daily-driver replacement for the poster's previous Kimi K2.5 setup
  • Local deployment eliminates privacy concerns about sending proprietary codebases to external AI service providers
  • The poster reports it compares favorably against other locally tested models, including Gemma4s, Qwen3 Coder Next, and Nemotron variants
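Since the setup serves the model through LM Studio, which exposes an OpenAI-compatible HTTP API on localhost, a minimal client sketch might look like the following. The model name, port, and generation parameters here are illustrative assumptions, not details from the post:

```python
import json
import urllib.request

# LM Studio's local server speaks the OpenAI chat-completions protocol.
# Default port is 1234; the model identifier below is an assumption.
BASE_URL = "http://localhost:1234/v1/chat/completions"

def build_request(prompt: str, model: str = "qwen3.6-35b",
                  max_tokens: int = 1024) -> dict:
    """Build an OpenAI-style chat-completion payload for the local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.2,  # low temperature suits coding/debugging tasks
    }

def send(payload: dict) -> dict:
    """POST the payload to the local endpoint (requires LM Studio running)."""
    req = urllib.request.Request(
        BASE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_request("Explain this Android serialization stack trace.")
print(payload["model"])
```

Because the endpoint runs entirely on localhost, no code or prompts leave the machine, which is the privacy property the post highlights.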
