@hummussapiens @FakeScrumStats
Unsupervised coding is the main problem; the higher the human involvement, the lower the structural error rate tends to be.
Humans are more prone to making careless mistakes, which are easier to spot. LLMs are more likely to make fundamental errors in their approach, which means that major security risks are more likely to occur.
As is always the case, blind or excessive trust in technology is the problem.
@hummussapiens @FakeScrumStats
Imagine writing an operating system like Windows using an LLM, without supervision, of course. Fundamental security practices would probably be missing, some or most parts would be structured at a beginner's level, and so much more.
Depending on the goal (what do I want to do, and for what purpose?), you would employ people with different levels of experience. Technology cannot replace human experience, not to mention the completely new problems the technology itself introduces.
@hummussapiens @FakeScrumStats
Those who know how to use the technology properly can save a lot of time, but it is not, as the companies' PR departments claim, a miracle worker that can simply replace humans.
The technology must be operated by humans, and humans must define the vision/objectives, as well as take on monitoring and other tasks.
@sam4000 @hummussapiens @FakeScrumStats
I really agree! Vibe coding is a very useful tool that needs some structure (and knowledge, of course).
There is a book on this, Software Engineering for Vibe Coders: https://summonthejson.com/products/software-engineering-for-vibe-coders-ebook. I sincerely recommend it.
Management will like it.
"And you said we couldn't do double the amount of walls! Thanks to AI, we now know that we can!"