I 100% understand and support an anti-AI-coding stance, but I'm seeing more and more people assert that everyone hates it and it never works. Unlike gen-art, unlike generated legal opinions, generated code is actually starting to produce good results. More and more of my colleagues are using it, and as I review the code they produce, I can't just dismiss it as slop.

I'm not asking anyone to change their opinion or abandon the fight against AI. I'm just warning that asserting that "everyone hates it and it doesn't work" is ... increasingly incorrect. Effective arguments need to speak to the reality of the situation.

@huxley I've been using self-hosted LLMs lately as idea and organization tools - something that can pull together RAG data from different sources and synthesize something "new". I can hand Qwen 3.5 a folder of text files from a story I'm drafting and have a full RP session in that world with its characters, locations, and events. I refuse to use its output directly in my writing, but it's wild how good it is for something running on consumer hardware.
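For anyone curious what "pull together RAG data from a folder of text files" can look like at its simplest, here's a toy sketch of just the retrieval half. This is a hypothetical minimal version: it uses naive keyword overlap instead of real embeddings, and the function names and chunk size are made up for illustration. The retrieved chunks would then be prepended to the prompt you send to your self-hosted model.

```python
from pathlib import Path

def load_chunks(folder, chunk_size=500):
    """Read every .txt file in the folder and split it into
    fixed-size character chunks. (A real setup would split on
    paragraphs or sentences and attach source filenames.)"""
    chunks = []
    for path in sorted(Path(folder).glob("*.txt")):
        text = path.read_text(encoding="utf-8")
        for i in range(0, len(text), chunk_size):
            chunks.append(text[i:i + chunk_size])
    return chunks

def retrieve(query, chunks, k=3):
    """Return the k chunks sharing the most words with the query.
    Stand-in for embedding similarity search; good enough to show
    the shape of the pipeline."""
    qwords = set(query.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: -len(qwords & set(c.lower().split())),
    )
    return scored[:k]
```

In practice you'd swap the keyword overlap for a local embedding model and hand the top chunks to the LLM as context, but the overall flow (load, chunk, rank, stuff into the prompt) is the same.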

That said, I still feel dirty using it. I'm trying to use fully open models like Apertus, never use commercial LLM/AI services, and self-host everything, but I'm still engaging with a shitty, destructive industry. I'm crossing my fingers that the hype cycle dies back enough for major companies to stop throwing money at it, but not so much that open-source/self-hosted models stop evolving.