Mathieu Comandon Explains His Use of AI in Lutris Development

https://lemmy.world/post/44771947

I love Lutris, but man he really screwed himself with his immature approach to this.

I was also suspicious that those Claude co-authorship lines would raise some issues in the open source community

So he knew it would cause a fuss when it came to light he was using AI, so what does he do?

I configured Claude code to skip the co-authorship line in git commits.

Rather than be fully transparent upfront he hides it. And his response when called out on it was doubly childish.
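For context, skipping the trailer is (to the best of my understanding) a single documented Claude Code setting rather than some elaborate hack; a sketch, assuming the user-level settings file at `~/.claude/settings.json`:

```json
{
  "includeCoAuthoredBy": false
}
```

With the default (`true`), Claude Code appends a `Co-Authored-By: Claude <noreply@anthropic.com>` trailer to commit messages it writes, which is exactly the audit trail people are upset about losing.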

A lot of people didn’t like how I initially worded my response, something like “good luck figuring out what committed by me or by Claude now that the co-authorship is gone”.

His justifications for the use of it are irrelevant.

I still considered the Claude generated code as something I could have written, just slower.

I also like using Claude to commit code I’ve written myself because it just writes good commit messages…

And his reasoning for why he thinks people are upset at his use of AI shows he doesn’t understand the issue.

I think a lot of the critics think that AI generated code should be flawless.

This doesn’t invalidate the technology as a whole…

This ignores the many issues with AI that do invalidate it, one of which is that it's inherently anti-FOSS. He apparently doesn't mind that, except insofar as he might be sued for copying someone's code.

Also, there is enough open source code available that I would hope Anthropic doesn’t feel the need to train their models on potentially litigious code base.

Lmao. Sure.

In his original comments on GitHub he shifts the blame onto capitalism as a whole, but doesn't see continuing to pay into AI and further normalizing its use as problematic.

So he doesn’t seem to get it I guess.

The rest of the interview is mostly just pro-AI.

Why open source may not survive the rise of generative AI

Generative AI may be eroding the foundation of open source software. Provenance, licensing, and reciprocity are breaking down.

ZDNET

Humans love to be lazy when they can be, and AI is a tempting fruit on the tree. People get sold on this vague promise of a shortcut that will allow you to prioritize other things, while whichever plagiarism machine arbitrarily inserts whatever code does the job.

His response is a good example of a developer of a large project, one who probably doesn't get enough help, or has simply decided that lazy is better than doing the hard work, shutting off his brain and relying on a computer that's incapable of following coding standards that benefit the project.

As a senior FOSS dev of nearly 25 years: AI is going to poison our industry, with these types of devs enabling its possible demise.

it's inherently anti-FOSS

My dream is that we get some court to rule that code created by AI is specifically created by a machine, not the prompter, so it’s in the public domain. I seriously doubt we’ll see that, but I can hope.

(This is not a rebuttal, just discussion. I am not saying people should be pro-AI.)

I was ready to agree with a pragmatic use of LLMs until he mentioned that he is working in an agentic style and is scared to touch the code manually because it will confuse Claude. 🤦🏻‍♂️

I’m all for using the AI as a pair programmer, but unleashing it on a gigantic Python codebase is a comically foolish move, IMO. Python is not safe for this method of working. In fact, I’d call anything short of Unison (which hashes the entire AST) or something tailor-made for LLMs like Aver a fool’s errand. In my experience, the LLM needs a strict set of guardrails (like a static type system or similar) to get feedback from, lest it turn things to spaghetti.
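To illustrate the guardrails point with a hypothetical snippet (the function names here are invented, not from Lutris): in untyped Python, an agent can quietly change a function's return type and nothing fails until some distant caller breaks; annotations plus a checker like mypy surface that drift immediately.

```python
from typing import List

def top_scores(scores: List[int], n: int) -> List[int]:
    """Return the n highest scores, descending. A checker like mypy
    would reject any rewrite that stops returning List[int]."""
    return sorted(scores, reverse=True)[:n]

def top_scores_untyped(scores, n):
    # Without annotations, a "refactor" that silently switches the
    # return type to a comma-joined string raises no complaint here;
    # the failure only shows up wherever a caller expects a list.
    return ",".join(str(s) for s in sorted(scores, reverse=True)[:n])

print(top_scores([3, 9, 1], 2))          # [9, 3]
print(top_scores_untyped([3, 9, 1], 2))  # 9,3  <- type drift a checker would flag
```

The point is not that typed Python is bulletproof, just that the agent gets mechanical feedback instead of the human discovering the spaghetti later.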

GitHub - jasisz/aver: Aver is a programming language for auditable AI-written code: verify in source, deploy with Rust, prove with Lean/Dafny

IMO, Lutris was already an overrated pile of junk before genAI became popular. It tracks that they’re vibecoding it now.

yeah that was fucking wild. although I am anti-AI, I do make a distinction between vibeshitting and using the thing as a boilerplate helper; what he said there should make anyone squeamish about letting this thing loose on your home folder.

because it will confuse Claude. 🤦🏻‍♂️

You’re kidding? Or did he really write that?

Do you still review and rewrite the code that Claude produces before it lands in Lutris?

Mathieu: I do review the code but I usually don’t rewrite something Claude is actively working on. If you don’t tell Claude about any changes you’ve made, there’s a big chance it will revert those and switch back to its original implementation! Instead, I will prompt Claude for the fixes I want made until I’m happy with the result.

I read it as “while Claude is actively working on this particular file, I do not touch that file”.

Yeah and Claude is actively working on all of the files in the repo so he, at best, would have to work on a separate branch in a separate directory (if he were working on it). I’m betting that that’s less common for him as time goes on.

How often do you reach for a calculator?

Wait, that is completely different from what you said earlier.

“if you don’t tell Claude, it will override your change”

Agentic tools do this today.

imagine this: you set a goal for a robot to build a 9-storey building, and later you step in midway and build it up to 15 storeys yourself. When the robot resumes its task, of course it is going to tear down the higher floors to meet its original goal of 9 storeys.

what else should we expect here?

the right thing to do before hitting the resume button is to “communicate” this new intention again.

it is not about being afraid of things

Regardless of AI, Heroic is a much better alternative unless you need some niche integration that Lutris provides. The Lutris developer is super stubborn about the strangest stuff, like absolutely refusing to upgrade the Flatpak runtime because of some obscure controller regression. He’d rather ship poor performance with super old Mesa libraries and poor security for the majority.

But yeah, the heavy AI usage just makes me even more hesitant to use or recommend Lutris.

A lot of people didn’t like how I initially worded my response, something like “good luck figuring out what committed by me or by Claude now that the co-authorship is gone”. But really, that’s the point. To this day, I haven’t had a report pointing to a specific piece of code saying “this is AI generated nonsense hallucination”, all the concerns have been about the broader use of AI. It doesn’t mean the code is perfect, some bugs were found. But those bugs were pretty much on the same level as some found in human submitted patches.

Seems reasonable to not want to give extra ammunition to people who just want to harass the project about any use of AI as opposed to contributing anything constructive.
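For anyone who does want that audit trail, commits carrying the trailer are trivially listable while it still exists; a sketch in a throwaway repo, assuming the standard `Co-Authored-By: Claude <noreply@anthropic.com>` trailer (the commit subjects here are made up):

```shell
# Build a throwaway repo with one AI-co-authored commit and one plain commit
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty \
  -m "Add runner cache" -m "Co-Authored-By: Claude <noreply@anthropic.com>"
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty \
  -m "Fix typo in README"
# --grep matches the full commit message, trailers included, so only
# the co-authored commit is listed
git log --grep="Co-Authored-By: Claude" --oneline
```

Once the trailer is configured away, this kind of one-liner audit is gone, which is precisely the complaint.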

Oh well, thank god I never got around to actually using it

I like this trend of trying to actually talk to people about things they are working on instead of making assumptions in a vacuum. We could use a little more of that.

The dude really looks unsure about AI ngl.

Try Heroic (though afaik it doesn’t support Steam yet)

But other than that it’s really an upgrade over lutris

Maybe, but since it’s an Electron app I think imma use the Steam Client.
If anything goes wrong, I will use Heroic.