0 Followers
0 Following
2 Posts
Nate Kohari • https://nate.io • [email protected]

now: building ardent.ai

was: staff engineer @stripe (developer experience, applied ml), engineer @auth0, CTO @kevel, co-founder @agilezen (acq. 2010), author of ninject in a former life

[ my public key: https://keybase.io/nkohari; my proof: https://keybase.io/nkohari/sigs/D2wzLAGkEsKCwxKhK7peaDEpYTGNgvN13B4Teyd4ZXY ]

Because LLMs are designed as emulators of actual human reasoning, it wouldn't surprise me if we discover that the things that make software easy for humans to reason about also make it easier for LLMs to reason about.
You're comparing apples to oranges there. Qwen 3.5 is a much larger model at 397B parameters vs. Gemma's 31B. Gemma will be better at answering simple questions and doing basic automation, but codegen won't be its strong suit.