I’m sorry, how does that work, exactly?
The source code is right there. It doesn’t matter how it was produced. If the LLM assistant isn’t available to generate new code, you can still edit it, fix it, etc.
Or are you talking about LLM-enabled features as part of the program? Summarization or something? I guess those features might have to change (though there are lots and lots of models available to use instead, so it’s not like anyone’s going to have a monopoly on those).
I think the logic is more: once you allow slopcode into your codebase, there is no turning back.
The code very quickly becomes impossible to maintain without the machine, because now nobody actually understands the logic or how the overall design of the program works. Now you need your tokens to keep things running, otherwise the whole thing collapses.
Thanks, that makes sense. The issue then becomes standard software engineering practice: don’t approve a pull request you don’t understand. (The ease of prompt-injection attacks already precludes giving the generative-AI tools the privilege of committing to master.)
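Concretely, here’s a minimal sketch of such a merge gate (the org, repo, and bot login are hypothetical; it only assumes GitHub’s standard pull-request reviews endpoint): the CI check fails unless at least one approving review comes from a human account, so an agent can open pull requests but never merge its own work.

    import os
    import requests

    # Assumption: the coding agent pushes under this bot account.
    BOT_LOGINS = {"llm-agent[bot]"}

    def has_human_approval(owner, repo, pr_number, token):
        """Return True if at least one APPROVED review came from a human."""
        resp = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/pulls/{pr_number}/reviews",
            headers={"Authorization": f"Bearer {token}"},
            timeout=10,
        )
        resp.raise_for_status()
        return any(
            review["state"] == "APPROVED"
            and review["user"]["login"] not in BOT_LOGINS
            for review in resp.json()
        )

    if __name__ == "__main__":
        # Hypothetical org/repo; the PR number and token come from the CI environment.
        ok = has_human_approval("example-org", "example-repo",
                                int(os.environ["PR_NUMBER"]),
                                os.environ["GITHUB_TOKEN"])
        raise SystemExit(0 if ok else "refusing: no human approval on this PR")

Combine that with branch protection that requires the check to pass, and the agent can propose anything it likes while a human still signs off on every merge.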
On the other hand, one thing the AI tools make easy is refactoring and prototyping — the old rule of thumb of “throw the first one away” or, I guess in more modern “agile” parlance, “iterate quickly”. Less ambitiously, you look at that pull request and respond to it with criticisms and requests for clarification (“do it better in the following ways” instead of “do it over”).
@Rycochet in case you're interested, a related genai bailout analysis, though in a different context:
https://buttondown.com/maiht3k/archive/let-s-be-evil-anthropic-openai-and-the-department/
we should see the relationship between the [military and Big Tech] as a “bailout” of the AI industry’s unworkable business model. Companies are hoping they will be rescued with government contracts.