This account is a replica from Hacker News. Its author can't see your replies. If you find this service useful, please consider supporting us via our Patreon.
| Official | https:// |
| Support this service | https://www.patreon.com/birddotmakeup |
It hasn't always been true; it started with rapid application development tools in the late 90s, I believe.
And some people thought they were building "disposable" code, only to see their hacks being used for decades. I'm thinking about VB but also behemoth Excel files.
All the good practices around strong typing, as in Scala or Rust, also work great for AI.
If you make sure the compiler catches most issues, the AI will run the build, see that it fails, and fix what needs to be fixed.
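A minimal Rust sketch of that idea (the `parse_port` function is a hypothetical example, not from the original comment): encoding "this can fail" in the type means any caller that forgets the error path is a compile error, which gives an AI agent an immediate, actionable message to fix rather than a runtime surprise.

```rust
// Hypothetical helper: the Result type forces every caller to
// acknowledge that parsing can fail.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    // The compiler requires both arms here; deleting the Err arm
    // turns a latent bug into a build failure the AI can react to.
    match parse_port("8080") {
        Ok(p) => println!("port {p}"),
        Err(e) => println!("bad port: {e}"),
    }
}
```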
So I agree that a lot of the things that make code good, including comments and documentation, are beneficial for AI.
Still, talk about "good" code exists for a reason. When the code is really bad, you end up paying the price by spending more and more time to develop new features, with a greater risk of introducing bugs. I've seen that in companies in the past, where bad code meant less stability and more time to ship the features we needed to retain customers or win new ones.
Now, whether this is still true with AI, or whether vibe coding means bad code no longer carries this long-term stability and velocity cost because AIs are better than humans at working with bad code... We don't know yet.
I tried it on my Mac for coding, and I wasn't really impressed compared to Qwen.
I guess there are things it's better at?
There is no reliable way to detect AI writing. A detector probably trains on texts known to be AI-generated and on texts known to be written by humans, then classifies new text according to that training.
The problem is that it has a pretty high false positive rate. Maybe it thinks a text is AI because there are absolutely no spelling mistakes. Or maybe you're French and you use Latin-root words in English that are considered "too smart" for the average writer.
And the other problem is that people run those tools, see "80% chance of being written by AI", and instead of treating the remaining 20% as high enough to mean they don't know, assume it's definitely written by AI.
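A quick sketch of why a score like that shouldn't be read as certainty (all the rates and the prior below are made-up numbers for illustration, not measurements of any real detector): even a detector with a high true positive rate and a modest false positive rate leaves a lot of room for flagged text to be human-written.

```rust
// Bayes' rule with illustrative numbers: given a prior share of
// AI-written texts, how likely is a flagged text to actually be AI?
fn posterior_ai(prior: f64, true_positive_rate: f64, false_positive_rate: f64) -> f64 {
    let flagged_ai = true_positive_rate * prior;          // AI texts correctly flagged
    let flagged_human = false_positive_rate * (1.0 - prior); // human texts wrongly flagged
    flagged_ai / (flagged_ai + flagged_human)
}

fn main() {
    // Made-up example: 30% of texts are AI, detector catches 90% of
    // them, but also flags 10% of human texts.
    let p = posterior_ai(0.30, 0.90, 0.10);
    println!("P(AI | flagged) = {:.2}", p); // ~0.79: flagged, yet far from certain
}
```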
Have you used a state-of-the-art tool (e.g. Claude Code) in the past 6 months? If you've only tried free tools, or last tried a year ago, you really need to check again.
AI tools can absolutely contribute usefully; I've lost count of the times an AI pointed out an edge case I hadn't thought about, then helped me write the fix and the test for the issue.
I'm not vibe coding, since I review the code, but saying these tools can't be useful means you haven't taken the time to look at their current state.