Ughhhhh someone submitted an app to AppCenter that is obviously vibe coded, not only because Claude is in the repo but also because there are a lot of really big mistakes.
Honestly I don’t like this at all. If you want to learn to write apps, please ask for mentorship and join our community; there are lots of people willing to help. But I can’t help but feel like this is a waste of my time, because you’re not learning anything this way. It feels gross
I also feel like we need something in appstream so that people in the App Store can know an app was made with an LLM and avoid it
Feel free to throw your acks here: https://github.com/ximion/appstream/issues/744
Tag for applications that have used LLMs in development · Issue #744 · ximion/appstream

The same way we have the ability to make choices about licenses that we're okay accepting, I think we need a way to mark apps which have been developed using LLMs like Claude Code so that folks can...


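The issue is still working out what the marker would actually look like. Purely as an illustrative sketch (no such tag exists in the appstream spec today, and the key name here is invented), it could piggyback on the existing `<custom>` key/value mechanism in a MetaInfo file:

```xml
<!-- Hypothetical sketch only: the key name "X-LLMGenerated" is invented
     for illustration; the real tag or key would be whatever issue #744
     settles on. -->
<component type="desktop-application">
  <id>com.example.SomeApp</id>
  <name>Some App</name>
  <!-- appstream already supports vendor-specific <custom> key/value
       pairs, which a software center could read and surface as a filter -->
  <custom>
    <value key="X-LLMGenerated">true</value>
  </custom>
</component>
```

A software center could then let users hide or flag components carrying the key, the same way people already make choices about which licenses they'll accept.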
We've also now started a discussion for the AppCenter submission process about whether we'll allow submissions to our Flatpak remote that are known to contain LLM-generated code.

https://github.com/elementary/appcenter-reviews/discussions/647

@danirabbit I'd bet some of the discussion will drift toward "it doesn't matter if it's LLM-generated or not, it matters if it's quality code".

I disagree with that, though. I mean - by all means, reject crappy apps if they're crappy apps, but IMO there are other reasons to reject LLM-heavy projects.

One - if it's vibe-coded/LLM-heavy crap, I am not confident that the maintainer really knows what's in their own app. That has long-term implications.

Two - if it's LLM-heavy, what happens when the price of tokens shoots up in a few years / the maintainer's favorite plagiarism machine shuts down?

Three - why crowd out human-written apps / code with slop? Let people who want LLM crud distribute it themselves.

Sorry that you have to spend some of your limited time on Earth wrestling with all that.

@jzb @danirabbit
I largely agree with point one, with the caveat that someone who is competent with AI tools can also understand good code review practices, even if that's not going to be obvious to a non-developer.

On point 2, if "developers" who rely on AI can no longer maintain their code due to enshittification, the end result is no different than legit developers who abandon their code in other ways.

As for point 3, slop is slop. It's better to aim policies to reduce slop vs a blanket ban on AI.

I am not holding a firm view on AI use, as it's too early to assess its long-term impact. I'm only commenting because it is a pressing issue, and it's easy to think emotionally rather than logically on this subject, so counterarguments are a necessity.

@vinnyboiler @danirabbit

"On point 2, if "developers" who rely on AI can no longer maintain their code due to enshittification, the end result is no different than legit developers who abandon their code in other ways."

Not exactly. Right now there's a constant stream of new maintainers coming in and old maintainers going out / burning out. We've had that for a long time.

If I'm right, we're going to see a bubble burst where the tools stop effectively being available either due to pricing or simply disappearing. That's going to be kind of a meteor strike if we have a slew of maintainers depending on those tools - which is different than the usual rates of attrition.

"As for point 3, slop is slop. It's better to aim policies to reduce slop vs a blanket ban on AI."

So that's precisely the argument that I said people would make to begin with. And I agree that policies should allow blocking lousy code no matter how it is generated -- but there are additional concerns around AI/LLM stuff and its rapid expansion that need tending to.

As for the assertion that we need people commenting because this is an emotional topic, etc. No, we really don't. There's already hype and pressure to adopt these tools far, far, far beyond what we have seen before and a lot of the motivations are not good. Seriously - I grew up in the 70s and 80s and the peer pressure to use AI is worse than the mythical peer pressure to use drugs you'd see in all the "just say no" PSAs. I never had anybody try to persuade me to use drugs the way that AI-pushers are out here trying to shove AI slop into everything.

As they say, the devil does not need advocates. It's OK to sit it out. If AI is all that and a bag of chips, it will be adopted. There are reasons that the pro-LLM crowd and businesses are trying to overwhelm everybody and make it seem urgent to adopt this stuff. None of them good.

@jzb @danirabbit The biggest critique I can offer, then: what happens when AI becomes good enough to show no visible signs of LLM-generated code?

Will newer developers be heavily scrutinized if Codex or Claude is in their repo? What if they claim only light usage? What gets tagged, the project or the developer? What if that same developer tries to publish a new app and claim no AI usage? Will more experienced developers get held to the same standard and scrutiny? How do you deal with bad actors who feign legit code?

Right now there is enough inexperience for code to show obvious signs of AI use, but when that becomes harder to tell it just becomes an avenue for gatekeeping.

@vinnyboiler personally I’m extremely happy to gatekeep usage of the orphan crushing machine. I don’t care if it’s really good at crushing orphans into source code. It’s the orphan crushing part that I hate most
