Whoever thinks it's a good idea to get software developers (or anyone else, really) to feed natural language queries into a non-deterministic model to generate software is quite insane.
@whyrl The big problem I have with things like Claude Code (aside from the environmental and ethical ones) is that it usually works - which is terrifying to anyone who is experienced with programming or who just cares about correctness. Tech that usually works is tech that occasionally doesn't. If you had a phone that usually worked, you'd throw it out and get one that always works at the earliest opportunity.

LLM-generated code is seductive because it really does work well enough to be functional, but secure, maintainable code is in the details. A huge portion of CVEs stem from tiny mistakes that escape notice. How many of those tiny mistakes does Claude output? These risks can be mitigated by thorough, careful human review, but the nature of LLMs encourages moving very fast, leaving correctness in the dust.

I have a feeling there's going to be a lot of job security for white hat hackers in the coming years.