What I'm hearing, I think, is roughly this: there are several reasons AI-assisted software is bad (provenance, skill atrophy, the politics of AI companies, and environmental impact among those listed, but there are more).
These are severe enough that AI-assisted software is materially different from other kinds of undesirable software, like proprietary software. One doesn't want such software to exist at all, on or off the Fediverse.
Is that about right?
One thing we do on open networks is have a kind of live-and-let-live attitude towards software we don't like. Like, one might not like proprietary software, but one doesn't bar the people who use it from the Internet or the Web.
In this case, I think you're saying, there's a combination of two factors. First, the Fediverse is not that kind of network -- we have shared values that we want to enforce. And second, AI-assisted software is too problematic to allow.
@evan @mcc @arthfach There's a lot in this thread to unpack, but the very first thing I want to address is your "live and let live" point.
That's very much a "take my ball and go home" kind of argument — the very reason things like open source and the fediverse exist is because enough people believe that there is a moral or ethical argument for them, not only a practical one. We rightly tend to reject the idea that proprietary software and open-source software are morally equivalent.
@evan @mcc @arthfach So, presuming that having morals, acting on them, and expressing those actions through software engineering is something worth doing (a point I suspect you'd agree with, given your other actions in the world), the question then becomes what the moral effects of using AI are, and how those effects might be mitigated or exacerbated in the context of open source software and federated networks.
That analysis doesn't look good for AI, to put it mildly.
@evan @mcc @arthfach So. The ethical dimension to AI *in general*, not specific to the fediverse, OSS, or social networking:
• It's predominantly developed by and profits fascists.
• It's founded on eugenicist thought.
• The environmental cost is untenable.
• Large models cannot exist without compromising consent and labor rights.
• AI is used to attack labor rights further (think: an automated scab).
• The risk of mental health effects is poorly understood so far.
(cont'd)
• Artists, software engineers, etc. can say no to unethical projects; AI cannot (see the Trump admin using AI to generate white supremacist images, or Claude being used in the "kill chain").
To OSS *in particular*:
• AI introduces untenable dependence on proprietary services.
• AI code cannot be adequately reviewed for defects (see @glyph's excellent post on the subject).
• AI introduces unknown legal risk; it is probably mild, but IANAL and can't assert that with any confidence.
@evan @mcc @arthfach All of the above are some flavor of "AI encloses the common good for the benefit of the worst people on the planet, and imposes untenable externalities along the way."
With that in mind, for the fediverse and social networking in particular, the fediverse is a particularly vulnerable common good that bad actors have already tried to disrupt — AI can and should be viewed as one more such attack, and I do not see any reason I or anyone else should help the attackers out.