Vim's lead maintainer has fully lost his goddamn mind

https://programming.dev/post/47279277

I spent literally all day yesterday working on this:

sciactive.com/human-contribution-policy/

I’ve started adding it to my projects; eventually it will be on all of them. I made it so that any project can adopt it or modify it to their needs. It’s got a thorough and clear definition of what is banned, too, so it should help settle any arguments over pull requests.

Hopefully more projects will outright ban AI generated code (and other AI generated material).

Human Contribution Policy – SciActive Inc

I like this approach, but how can it be enforced? Would you have to read every line and listen to a gut feeling?
No, it’s a prejudiced hot take that’s completely and utterly unenforceable which will be seen as some Luddite behavior in 10 years when everyone is using the tooling.
Tell us how you really feel.
I did. And you’re worried about clankers being able to comprehend as well as a human 🤣, good Lord the bar is low.
Urban Dictionary: tell me how you really feel

The expression "tell me how you really feel" is said in sarcasm and irony after someone has said an angry or hate-filled statement, drawing attention to the anger and hatred (and implicitly mocking it).


Ok that’s really funny and I do agree with you, but I think you might be coming at this a little… unhinged. The issue with this is that it is unenforceable and honestly somewhat pointless. If AI tools are not up to scratch, that will always be reflected in the quality of the code. Bad code is bad code; it doesn’t matter what made it. A lot of people seem to think AI is synonymous with bad code, and if that is the case, simply ban bad code.

The issue they are going to run into is twofold:

Firstly, what qualifies as “using AI”? Admittedly I haven’t actually read their licensing, but I’m just going to take a guess and say that it bans all forms of AI used anywhere in production. Almost every editor I use these days has autocomplete. It’s rarely useful, but if it does happen to guess the rest of the code I was already going to type, and I accept that, did I use AI to assist my coding? Back in the day, before it was an LLM, autocomplete was actually decent, so not all of them use AI. How would you even know whether yours is AI or not?

The second issue is one of foresight. When the AI tools do become up to scratch, that will be reflected in the quality of their code. Suddenly AI-generated code is faster, more efficient, and easier to understand, all simultaneously. Anyone using this license is effectively admitting that theirs is the inferior option.

It’s always hilarious to me when people ask whether something is AI slop. I dunno man, has your ability to detect whether something is good been reduced to asking whether it’s AI slop? If it’s good, it’s good. If it’s not, it’s not. Either you like it or you don’t. Feels very similar to transphobes saying they can always tell. If that’s true, and AI really is always going to be worse, you should never have to ask whether something is AI slop; you should just be able to tell. Otherwise it’s just slop, no AI necessary.

Firstly, what qualifies as “using AI”? Admittedly I haven’t actually read their licensing, but I’m just going to take a guess and say that it bans all forms of AI used anywhere in production. Almost every editor I use these days has autocomplete. It’s rarely useful, but if it does happen to guess the rest of the code I was already going to type, and I accept that, did I use AI to assist my coding? Back in the day, before it was an LLM, autocomplete was actually decent, so not all of them use AI. How would you even know whether yours is AI or not?

So, two things. First, it’s a policy, not a license. Second, the definition of AI-generated is very clear in the policy.

I don’t know why you would criticize it without reading it, but the main problems with AI-generated code are legal, not quality-related, and they are also clearly laid out in the policy.