
Developer reports Qwen3.6-35B model running locally on MacBook Pro M5 Max matches Claude's performance with 64K context window

r/LocalLLaMA · April 19, 2026

AI Summary

  • User successfully deployed Qwen3.6-35B with 8-bit quantization on MacBook Pro M5 Max (128GB RAM) via LM Studio and OpenCode
  • Model demonstrates strong performance on complex coding tasks including multi-step debugging of Android serialization issues with multiple tool calls
  • Response speed is notably fast and long research contexts are handled efficiently, making the model a viable daily-driver replacement for the poster's previous Kimi K2.5 setup
  • Local deployment eliminates privacy concerns about sending proprietary codebases to external AI service providers
  • The model compared favorably against other models the poster tested, including Gemma4s, Qwen3 Coder Next, and Nemotron variants
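The LM Studio setup described above can be driven programmatically: LM Studio exposes an OpenAI-compatible HTTP server (by default at `http://localhost:1234/v1`), so a local model can be queried like any hosted API. The sketch below is a minimal example under that assumption; the model identifier `qwen3.6-35b` is hypothetical and should be replaced with whatever name LM Studio shows for the loaded model.

```python
import json
import urllib.request

# Default address of LM Studio's local OpenAI-compatible server.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"
# Hypothetical model identifier; use the name shown in LM Studio's UI.
MODEL = "qwen3.6-35b"

def build_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build an OpenAI-style chat-completion payload for the local server."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.2,
    }

def ask(prompt: str) -> str:
    """POST the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        LMSTUDIO_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because everything runs on localhost, no part of a proprietary codebase leaves the machine, which is the privacy benefit the post highlights.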
