Lutris now being built with Claude AI, developer decides to hide it after backlash

https://pawb.social/post/41039903

> A user asked on the official Lutris GitHub two weeks ago "is lutris slop now" and noted an increasing amount of "LLM generated commits". To which the Lutris creator replied:
>
> > It's only slop if you don't know what you're doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn't able to do last year because of health issues / depression.
> >
> > There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn't have been implemented in a worse way, but it wasn't AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn't AI that laid off thousands of employees, it's deluded executives who don't understand that this tool is an augmentation, not a replacement for humans.
> >
> > I'm not a big fan of having to pay a monthly sub to Anthropic, I don't like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I'm not paying Google, Facebook, OpenAI or some company that cooperates with the US army.
> >
> > Anyway, I was suspecting that this "issue" might come up, so I removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what's generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.

AI is actively destroying the environment and harming people. Data centers have been caught using methane burner generators (which are banned for this use by the EPA), which significantly increase health risks for residents who live nearby; cancer and asthma rates in those areas have already risen significantly. Then you have the ridiculous effects it is having on computer hardware markets and on energy and water infrastructure and prices.

Then, after all of that, the AIs themselves hallucinate somewhere in the neighborhood of 25% of the time, and multiple studies have found that people who use them regularly are losing their own skills.

I can't figure out why people would choose to use them. I can't figure out why programming is the one place where people who might otherwise have been considered experts in the field are excited to use them. Writers, artists, lawyers, doctors: in basically every other professional field that AI companies have suggested these tools would be good for, they get trashed by experts in the field for making garbage. I have a hard time believing the only thing AI can do well is write code when it sucks so badly at everything else it does. Does development suck this much? Do developers have so little idea what they are doing that this seems like a good idea?

Thank you. Another issue that sort of overlaps with the hallucination problem is the fact that it is basically referring to a snapshot in time. Based on my past attempts, no amount of searching the web will improve the results, because it has no way to account for future outcomes the way actual programmers can. Meaning it isn't very flexible and can't adapt to new, breaking, or quality-of-life changes.

Programming is a hobby for me and my preferred language is C#. I work on the bleeding edge for fun and so I can benefit from .NET’s recent quality of life changes. Naturally, I’m Microsoft’s target audience. And yet for the reasons stated above, these chatbots can’t work for me in the long run.

For the reasons you are stating, the snapshot is actually a boon. More often than I'd like to admit, I've had to write something that has been done many times before, with some slight structural differences, and of course there isn't a library flexible enough, nor enough time to write that library. Instead of mindlessly doing the busywork of writing something that should already exist, you can slop it out quickly, then spend the time it would have taken to write it on refining it into something maintainable, with all the new changes that are actually interesting and useful improvements. I see it as raising the bar of the starting point.

That said, I just license my own stuff as MIT because I want to raise the bar for everyone, though I know the AI companies likely haven't respected the wishes of those who don't want that.