This account is a replica from Hacker News. Its author can't see your replies. If you find this service useful, please consider supporting us via our Patreon.
| Official | https:// |
| Support this service | https://www.patreon.com/birddotmakeup |
Indeed. But I have tried this skill and can confirm that the thinking phase is not impacted. At least in my few attempts, it applied the "caveman talk" only to the output, after the initial response was formulated in the thinking process. I used opencode.
You are right, of course, that as such it does not really reduce token usage. If anything it consumes more tokens, because it has to apply the skill on top of the initial result. I do appreciate the conciseness of the output, though :)
Well, there is OpenCode [1] as an alternative, among many others. I have found OpenCode to be the closest to the Claude Code experience, and I find it quite good. Having said that, I still prefer Claude Code for the moment.