I 100% understand and support an anti-AI coding stance, but I'm seeing more and more people assert that everyone hates it and it never works. Unlike gen-art, unlike generated legal opinions, generated code is actually starting to produce good results, and more and more of my colleagues are using it, and as I review the code they produce, I can't just dismiss it as slop.

I'm not asking anyone to change their opinion or abandon the fight against AI. I'm just warning that asserting that "everyone hates it and it doesn't work" is ... increasingly incorrect. Effective arguments need to speak to the reality of the situation.

@huxley I also see more and more of it.
At work I try to push at least for European hosting, but I also have to admit that you have a point: well-conducted LLM-generated code might work.
@huxley If it really is getting "better" at generating functional code, then its legal & ethical issues are only getting worse.
@jackemled You've got it, those are the issues to concentrate on. Legal, ethical, environmental. But not baseline functionality. (This is in contrast to blockchain, which never developed an actual use outside of crime.)

@huxley Blockchain is decent at its only actual application (keeping a ledger in sequence & preventing double-spending of money), but it's simultaneously really bad at it; just look at "block reorganization attacks" & "selfish mining". It's really sad but also funny, because that's literally the only thing it has any use in at all.

I'm sure procedural code generation would be way more reliable than the random code generation LLMs do, & way more ethical, & on much more concrete legal ground, but it would also probably be way more difficult to build. I guess that would basically be a natural-language-to-programming-language cross-compiler, though.

@huxley I've been using self-hosted LLMs lately as idea and organization tools - something that can pull together RAG data from different sources and synthesize something "new". I can hand Qwen 3.5 a folder of text files from a story I'm drafting and have a full RP session in that world with its characters, locations, and events. I refuse to use its output directly in my writing, but it's wild how good it is for something running on consumer hardware.

That said, I still feel dirty using it. I'm trying to use fully open models like Apertus, never use commercial LLM/AI services, and self-host everything, but I'm still engaging with a shitty, destructive industry. I'm crossing my fingers that the hype cycle dies back enough for major companies to stop throwing money at it, but not so much that open-source/self-hosted models stop evolving.
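(For anyone curious what "pull together RAG data from a folder of text files" looks like in practice, here's a minimal sketch. The function names and the keyword-overlap retrieval are illustrative assumptions, not any particular tool's API; real setups use embedding search, but the shape of the pipeline is the same: load notes, retrieve the relevant ones, ground the prompt in them.)

```python
from pathlib import Path

def load_corpus(folder):
    """Read every .txt file in the folder into a {name: text} dict."""
    return {p.stem: p.read_text(encoding="utf-8")
            for p in Path(folder).glob("*.txt")}

def retrieve(corpus, query, k=2):
    """Naive keyword-overlap retrieval: score each document by how many
    query words appear in it, return the top-k texts. (A real RAG stack
    would use embeddings here -- this is just the idea.)"""
    words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: sum(w in kv[1].lower() for w in words),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(corpus, question):
    """Assemble a prompt that grounds the model in the retrieved notes."""
    context = "\n---\n".join(retrieve(corpus, question))
    return f"Use only these notes:\n{context}\n\nQuestion: {question}"
```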

@huxley
"Everyone hates it and it never works" are two statements that could be separated.
I do not care that it works; I hate the tech from the concentration-camp maximalists the same way I hate Hugo Boss and Mercedes Benz.
@wachoperro I agree they are separate questions, but they are often stated together in people's posts. Many people still hate AI (with good reason!), but the circle of people who like it is growing far beyond the circle of the shittiest people.

@huxley
Yep, I think the marketing is working. People only take their personal experience with the "you are absolutely right" machine and forget about the African intelligence that makes AI work, or the giant subsidies the industry has been given since day one just to become too big to fail.

People seem to still want iPhones even with all the slave labor they require. Would they keep liking them if they cost 10x and were their only way to communicate?

@huxley
I'd also like to add, in a more tinfoily way but still /srs:

If the shittiest people (the ones with the worst standards) like it, then the next people to like it will be the ones with _slightly_ better standards, which is a low bar to pass, so I personally don't think that is a good thing.

AI's reception got better, and now the "purity culture" label gets thrown around everywhere to dismiss anti-AI sentiment, which tells me the bar has risen a little bit, just because some people are OK with _some_ of the problems with AI.

So "not caring about the slave labor behind AI" can be an acceptable view, and so can "not caring about the climate effects", "not caring about the labor effects", and "not caring about the energy consumption". If someone doesn't care about even one of those, they can join the "purity culture" camp, and the people who don't care about several (or most) of those things will have their back every day.

@huxley my mental sketch of coding agents is that they're a semirandom walk through "plausible sentences" space, which can work! especially if there's something like a testsuite that can "independently" evaluate the agent's result, which lets us automatically retry the agent until it "succeeds"

what makes me wary is when agents generate code with uncanny, unhuman errors that sneak past testsuites and code review, because the code is *plausible*, but subtly wrong
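(The retry-until-the-testsuite-passes loop above can be sketched in a few lines. The `generate_patch` and `run_testsuite` stand-ins below are hypothetical placeholders: a real agent would call an LLM, and a real verifier would run an actual test suite. The point is the structure of the outer loop, and that the verifier only catches what it tests for.)

```python
def generate_patch(task, attempt):
    """Stand-in for a coding agent: deterministically walks a pool of
    candidate patches. A real agent would sample an LLM here."""
    candidates = ["broken patch", "also broken", "working patch"]
    return candidates[attempt % len(candidates)]

def run_testsuite(patch):
    """Stand-in for an independent test suite: accepts only the one
    candidate that 'passes'. Note it says nothing about plausible but
    subtly wrong code the tests don't cover."""
    return patch == "working patch"

def agent_loop(task, max_attempts=10):
    """Semirandom walk with an outer verifier: keep resampling the
    agent until the test suite passes or the budget runs out."""
    for attempt in range(max_attempts):
        patch = generate_patch(task, attempt)
        if run_testsuite(patch):
            return patch, attempt + 1
    return None, max_attempts
```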

@gray17 100%. The research I've seen shows that if you let them go off on their own, the code gets worse and worse. They need regular, careful oversight.