Just read a bit of a thread about using AI for code generation, and it rang true for me: when programming, you aren't just implementing a spec (ok, sure, some people are); you're testing the theory that the spec describes and, through that process, identifying falsehoods, corner cases, or omissions. If you leave it all to the LLM, none of that happens, so what gets implemented is, at best, exactly the spec. Turn over testing to an LLM as well, and you lose even more chances to test and challenge the spec. Eventually, with specs written by LLMs too, what you actually get may be completely unrelated to the problem you want to solve, and there's no way you can know until it's much too late.