The AI hype-cyclone is bad, but so is the anti-AI witch hunt. Commits co-authored by Claude do not mean that a project has "abandoned engineering as a serious endeavor."

Would we say that accepting contributions from new developers means we've "abandoned engineering as a serious endeavor"? No.

Claude can write wrong code. New contributors can write wrong code. What matters is what you do with that code after it's been written.

@nedbat If this is in response to my toot about the CPython repo on GitHub, may I take the opportunity to respond?

If not, I apologize for interrupting.

@xgranade You may always respond. The quote was from a reply to you.

@nedbat Appreciated, thank you. And yes, I realize that was a reply; I read that as coming from a place of frustration (a place I share, for what it's worth) rather than a literal statement.

That said, and to the direct point that you made, I don't think that calling out the use of AI products is a witch-hunt. AI is an effort to undermine labor and enclose common infrastructure, and I believe there is a proactive duty to resist the adoption of AI products.

@nedbat For clarity, where I agree with the quote that you included is that I do not think that the use of AI is consistent with good engineering practice — to the extent that a project adopts AI, that is necessarily a compromise of engineering principles.

In the case of the CPython interpreter, that seems to have been a fairly small number of well-isolated commits so far, but absent any mechanism to reject AI-generated code, I don't know how to uphold Python's engineering standards.

@xgranade The important thing is to reject bad code. There are mechanisms for that. I've assumed that the concern was the possibility that AI code is bad code. New contributors can also contribute bad code. Yet welcoming contribution policies are not considered incompatible with "serious engineering."

@nedbat I agree that's *an* important thing, but I don't agree that it's the only important thing. Ethics matter, for instance, and on that basis alone, we have a strong moral imperative to reject AI products.

Setting that aside, though, even from the limited perspective of rejecting bad code, we similarly have a strong imperative to reject AI products.

I think the comparison to new contributors is somewhat misleading in trying to get at why.

@nedbat New contributors still have some understanding of the code they write, even if imperfect. They can be tutored and mentored into offering more valuable and useful contributions. They can be taught how a codebase works and grow to become maintainers.

AI products do none of that. LLMs do not "understand" anything, and cannot by construction do so. There is no process by which bad AI-extruded code can become good AI-extruded code, and so it's on us to reject it.

@nedbat I set aside ethics earlier, but I do think the ethical dimension is important here — to that end, and at the risk of oversimplifying, there are four main reasons to reject AI products on an ethical basis:

• They are founded on eugenicist philosophies. This is not hyperbole, but is a well-established fact.
• Financially, they largely benefit fascist movements.
• The environmental cost is untenable.
• AI products work by devaluing and exploiting labor.

@nedbat I think "serious engineering" demands a few things of us. We must use best available theory to understand our designs, we must act responsibly with respect to our professional communities, and we must act ethically with respect to broader society.

All of those duties, everything that taking software engineering seriously as an engineering discipline demands of us, are completely incompatible with the adoption of AI products.

@nedbat I recognize this is getting long, and I apologize for that — you raised something that has a lot of moving pieces, and it would be dishonest of me to respond with anything less than the full picture.

The last point I'll make, then, is with respect to "welcoming." We already have strong evidence that we cannot *and should not* be welcoming to all potential contributors, as per the Paradox of Tolerance. That's why we have codes of conduct.

AI products should be rejected following the same logic.

@nedbat I do not believe you can be welcoming of both laborers and AI products, given that AI products stand directly opposed to labor interests. I do not believe you can be welcoming of both trans people and AI products, given how AI vendors act in global politics. And I do not believe you can be welcoming of both AI products and the younger people who will disproportionately bear the costs of climate change, given AI's environmental impact.