Qwen3.6-27B (a dense model) was released two days ago, and I can confirm it clearly beats all previous versions at coding.
It even beats the previous generation's ~15x larger flagship model (Qwen3.5-397B-A17B).
https://qwen.ai/blog?id=qwen3.6-27b
I am using it offline on my RTX PRO 6000 (96 GB) to package open-source tools, and I am impressed!
This confirms that advances in LLMs (models becoming smaller AND better at the same time) are currently more impactful on AI performance than hardware advances.
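A rough sketch of why the smaller model matters in practice: parameter memory alone decides whether a model fits on a single 96 GB card. This back-of-the-envelope estimate assumes 2 bytes per parameter for fp16/bf16 and 0.5 for int4, and ignores KV cache, activations, and framework overhead (also note that an MoE like the 397B-A17B activates only 17B parameters per token, but all 397B still have to be resident in memory):

```python
# Back-of-the-envelope VRAM estimate: parameter memory only.
# Assumed bytes per parameter; real usage adds KV cache and overhead.
BYTES_PER_PARAM = {"fp16/bf16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_gb(n_params_billion: float, precision: str) -> float:
    """GB needed just to hold the weights of an n-billion-parameter model."""
    return n_params_billion * BYTES_PER_PARAM[precision]

GPU_GB = 96  # RTX PRO 6000

for model, size_b in [("Qwen3.6-27B", 27), ("Qwen3.5-397B-A17B", 397)]:
    for prec in ("fp16/bf16", "int4"):
        gb = weight_gb(size_b, prec)
        verdict = "fits" if gb < GPU_GB else "does NOT fit"
        print(f"{model} @ {prec}: ~{gb:.1f} GB -> {verdict} in {GPU_GB} GB")
```

By this estimate the 27B model fits on one 96 GB GPU even unquantized (~54 GB in bf16, leaving room for KV cache), while the 397B flagship does not fit even at int4 (~198 GB).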

