Today, I'm tasked with reviewing #AISlop code which our database administrator generated.

I don't know how much time they spent "prompting" these results into existence, but I've already found a ton of issues just by glancing over the code, and it will probably take me the whole day, if not longer, to list every problem with it.

Can anybody please explain to me how this is a "good thing"?

People with LIMITED #software #development expertise spend little time generating loads of questionable-quality source code, at the expense of the people WITH development expertise, who then struggle to maintain consistent quality standards in the code base.

Is this fair? Is this a smart thing to do?

I'm not seeing it. Am I blind? Am I old-fashioned?

Is "embracing mediocrity" just the way things are done in 2026? Should I just give up and go live in a cave somewhere?

@RandomHost
No, it isn't. It's also the completely wrong approach. These tools must be operated by experienced personnel who know what they're doing. The LLMs are replacing the junior developers here. A sensor developer can complete an entire project, including backend, frontend, etc., in just three weeks if they program the LLM correctly. How do I know this? I'm experiencing it firsthand because I live with a senior consultant who has made 25 junior developer jobs
redundant. He says that at first, it felt like he was guiding a junior developer. He contributed his experience, provided guidance, and wrote a quality management system that the AI followed. Now he tells the AI what to implement, and it generates, tests, and validates the project on its own. It seemed to me like Captain Picard giving instructions to the computer. Very futuristic and impressive. @RandomHost