Lutris dev says he's cool with AI generated bugs because his code is already full of bugs
More specifically, what he’s saying is that in his experience so far, AI-generated code is just as buggy as human-generated code. Which seems perfectly reasonable to me. The point is that all code needs to be carefully reviewed.
The problem with vibe coding is that people who know nothing about coding get an AI to output code that they can’t even read, let alone debug. Then they throw it straight into production. Using AI to quickly output blocks of code for software that is being designed, assembled and reviewed by an experienced programmer, using proper review and testing processes, is a very different beast.
Obviously if you regard any use of LLMs as immoral then it doesn’t matter. But if that’s why people are unhappy about this then they need to say so. If their concern is actually with the results, not some broader immorality of the process, then the dev is absolutely right; they need to actually look at the results.
Sure, but the thing is that bad human code also looks plausible and correct if you’re not taking the time to carefully analyze it. Bad code can be something as small as a missed comma. It can be writing the correct statements, but putting them in the wrong order. It can be an incorrect indent. It can be 100% correct code that doesn’t work because your project is using an older or newer version of a library.
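To illustrate, here is a small hypothetical Python example (not from Lutris) where the buggy version reads plausibly at a glance; the only difference is one level of indentation:

```python
# Correct version: sums all positive numbers in the list.
def sum_positive(numbers):
    total = 0
    for n in numbers:
        if n > 0:
            total += n
    return total

# Buggy version: the return is indented one level too deep,
# so it returns after the first iteration instead of after the loop
# (and returns None for an empty list). It still "looks" fine.
def sum_positive_buggy(numbers):
    total = 0
    for n in numbers:
        if n > 0:
            total += n
        return total
```

Neither a human reviewer skimming the diff nor a test that only passes in a one-element list would necessarily catch the second version, which is exactly why careful review matters regardless of who or what wrote the code.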
The problem, a lot of the time, with LLMs is that they introduce the necessity of a review step - the ubiquitous “Always double check the output” - that is so time consuming, or so thoroughly invalidates the need for the original output, that you might as well just skip the LLM and go straight to the double checking stage. If you’re asking an LLM for information about Brazilian visa policies, but you can’t trust that information unless you check against the Brazilian government website, then you should just check the website and not bother asking the LLM.
But with coding, the review stage is already baked in. All code, human or machine, requires careful review. And all bad code can look like good code if you don’t know what you’re looking for (and a lot of the time it looks like good code even if you do know what you’re looking for. That’s why a second set of eyes is so important). So as long as the LLM isn’t producing significantly more issues than a human coder would, there’s no real downside.
There are still dangers to be aware of, of course. But it’s a very different scenario from, say, dispensing medical advice.
There’s also the question of how LLMs are used in a project. There’s a big difference between firing up Claude and saying “Write a program that will make Windows games run on Linux” vs saying “Write a function that checks if an instance of BattleNet is already running.” Both the scope and the completeness of your prompt matter. If you are an experienced coder, you will know what information to supply to the LLM so it can correctly construct the output. If you don’t, it’ll just fill in the blanks.
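To make the contrast concrete, the narrower prompt maps to something like this rough Python sketch. It uses the third-party psutil library, and the process-name matching is an assumption for illustration; it is not how Lutris actually implements this check.

```python
import psutil  # third-party dependency, assumed available

def is_battlenet_running() -> bool:
    """Return True if a process whose name contains 'battle.net' is running."""
    for proc in psutil.process_iter(["name"]):
        name = (proc.info.get("name") or "").lower()
        if "battle.net" in name:
            return True
    return False

if __name__ == "__main__":
    print("Battle.net running:", is_battlenet_running())
```

A function this small is easy for a maintainer to review and test in isolation, which is the whole point of keeping the prompt narrow: the reviewer stays in control of the design and only delegates a well-bounded block of code.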
People with more substantial coding knowledge have the ability to be more specific about both what they want and how they want it done, so they will get much more consistent results back. And, of course, they have the skills needed to identify bad results for themselves, as long as they are taking the time to do so.