This account is a replica from Hacker News. Its author can't see your replies. If you find this service useful, please consider supporting us via our Patreon.
| Support this service | https://www.patreon.com/birddotmakeup |
A jump we'll never be able to use, since we're not part of the seemingly mandatory $100 billion company club required to be allowed access to it.
I get the security aspect, but if we've hit that point, any reasonably sophisticated model from here on will be able to do the damage they claim it can. They might as well be telling us they're closing up shop for consumer models.
They should just say outright that they'll never release a model of this caliber to the public, and that we'll only ever get gimped versions.
Not my experience. GPT 5.4 walks all over Claude in my work, and it's Claude that's willing to just go do unnecessary stuff that was never asked for, or implement the hackier solutions without a care for maintainability/readability.
But I don't use extra-high thinking unless it's for code review. I sit at GPT 5.4 high 95% of the time.
Yes, it's genuinely infuriating that this is the case for a company that was for so long praised for its superior UX.
This, along with the many other paper cuts they've let slide, is tarnishing their brand.
I mean, it's a skill issue in the sense that Claude Code gives you the tools to deterministically prevent this from ever happening, without relying on the model's unpredictability at all.
Just set up a hook that blocks any git commands you never want it to run, and this will never happen again.
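As a sketch of what that looks like: Claude Code supports a `PreToolUse` hook that receives the pending tool call as JSON on stdin and can veto it by exiting with code 2 (stderr is fed back to the model). The deny-list below is illustrative, not exhaustive; adjust it to whatever commands you consider destructive.

```python
#!/usr/bin/env python3
"""PreToolUse hook sketch: deny destructive git commands.

Assumes Claude Code's hook protocol: the tool call arrives as JSON on
stdin, and exiting with code 2 blocks the call. The pattern list here
is an example, not a recommendation.
"""
import json
import re
import sys

# Illustrative deny-list of git commands to block unconditionally.
BLOCKED_PATTERNS = [
    r"\bgit\s+push\s+.*--force",   # force pushes
    r"\bgit\s+push\s+-f\b",        # short-flag force pushes
    r"\bgit\s+reset\s+--hard",     # hard resets
    r"\bgit\s+checkout\s+\.",      # discarding the working tree
    r"\bgit\s+clean\b",            # deleting untracked files
]


def is_blocked(command: str) -> bool:
    """Return True if the shell command matches any denied git pattern."""
    return any(re.search(p, command) for p in BLOCKED_PATTERNS)


def main() -> int:
    event = json.load(sys.stdin)
    command = event.get("tool_input", {}).get("command", "")
    if is_blocked(command):
        # stderr is shown to the model as feedback on why it was denied.
        print(f"Blocked by hook: {command!r}", file=sys.stderr)
        return 2  # exit code 2 denies the tool call
    return 0  # exit code 0 lets the command through


# When installed as a hook script, finish with:
#     if __name__ == "__main__":
#         sys.exit(main())
```

You'd then register the script in your settings under the `PreToolUse` event with a matcher for the `Bash` tool, so it runs before every shell command the agent attempts. The point is that the block is enforced outside the model entirely, so it holds no matter what the model decides to try.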
Whenever I see stuff like this, I just wonder whether any of these people were engineers before AI, because for decades the entire point of software engineering was to make processes as deterministic and repeatable as possible.