This account is a replica from Hacker News.
Based on the current DeepSeek website, I suspect it's not going to be great: their current model (V3.4? V4-mini?) often forgets or changes facts explicitly mentioned in the conversation, which R1 never did. It's better than R1 at math and coding, but nearly unusable for deep conversation. I suspect they pushed MLA or linear attention too far, or quantize much more aggressively than before.
Yeah, Qwen3 Coder for Claude Code and 3.5 for OpenClaw have already replaced my full-stack use of Opus 4.6; they're fine for basic web apps, k8s/Docker infra setup, optimizing AI models, etc., with only a slightly higher error rate than Opus. The upcoming 3.6 together with Gemma4 might make this even better (still to test). OpenAI's memory spot-market play might have been aimed at local inference as well.

Can't you use Claude's caveman mode?

https://github.com/JuliusBrussee/caveman

GitHub - JuliusBrussee/caveman: 🪨 why use many token when few token do trick — Claude Code skill that cuts 75% of tokens by talking like caveman