Is it OK to run software written with the help of AI/LLMs on the Fediverse?

#EvanPoll #poll

Yes
20.7%
Yes, but...
19.1%
No, but...
9.4%
No
50.8%
@evan Obviously absolutely not. And you know that.
@evan (For the purposes of this argument, "AI" is defined as "generative and agentic AI" specifically.)

* AI is trained on stolen creative works, including FOSS projects, which discourages contributors who still retain actual knowledge. Instead, it encourages vibe coders and people who don't care about quality, security, or reliability to take over. This puts an ever-increasing strain on maintainers until they exit the project, causing a downward spiral.

* AI encourages an atrophy in skills as more and more work is outsourced to the AI, causing individual users to become less and less capable of policing its outputs. This harms the entire software engineering ecosystem: lower and lower quality code is put out, the AI is retrained on it, and (because of how AI works) the next generation of code is even lower quality - which is then ingested, and so on, and so forth. You can already see this happening when AI outputs code using libraries and APIs that are obsolete or just not valid at all.

* The use of AI, including open source models, contributes to the coffers of companies that contribute resources (models, money, etc.) to institutions built to suppress human rights and freedoms. Even with open source models, the use of them normalizes the use of AI generally, which causes paid use to increase, which gives more money to human rights abusers.

* The environmental impact of AI is strongly negative. FOSS, of all places, should not contribute to the construction of power-and-water-gulping datacenters. These datacenters cause a significant amount of local environmental destruction and harm the communities they are built in (through brownouts, pulling so much groundwater that residents can no longer use any or are HIGHLY restricted, etc.). This does not even account for the resource costs of each new training set, which takes a ton of compute for each new revision - not just as it's processed, but the compute required to scrape every publicly available piece of information - driving up the CPU and network use of every single server that's constantly being hit by AI.

I could list out more but I'm sure you know the arguments. Essentially, people who want to shove AI projects on the world are going, "I don't care about social or environmental responsibility, nor do I care about my fellow humans; I just want to do what I want and damn everyone else."

And that's not something that the fediverse - or anyone - should support.

(Hey @xgranade and @davidgerard and @mcc if y'all have anything to add to this, please do. I tried to make it as clear as I could but I know sometimes I don't speak well about topics like this.)

@arthfach

What I'm hearing, I think, is roughly this: there are several reasons AI-assisted software is bad (provenance, skill atrophy, the politics of AI companies, and environmental impact are the ones listed, but there are more).

These are severe enough that AI-assisted software is materially different from other kinds of undesirable software, like proprietary software. One doesn't want such software to exist at all, on or off the Fediverse.

Is that about right?

@xgranade @mcc

@arthfach @xgranade @mcc

One thing we do on open networks is have kind of a live-and-let-live attitude towards software we don't like. Like, one might not like proprietary software, but one doesn't keep people using it from being on the Internet or on the Web.

In this case, I think you're saying, there's a combination of two factors. First, the Fediverse is not that kind of network -- we have shared values that we want to enforce. And second, AI-assisted software is too problematic to allow.

@evan @arthfach @xgranade Most software is not based on stealing the commons and polluting the earth. Most software can be coexisted with because it does not destroy things around it by existing (scraper DDoS, people withholding things from publication because licenses can no longer be used to limit commercial exploitation, elimination of non-model-based software, mandatory infusion of "AI" into major platforms). "AI" does in fact knock things that are not "AI" off the web.

@evan @mcc @arthfach There's a lot in this thread to unpack, but the very first thing I want to address is your "live and let live" point.

That's very much a "take my ball and go home" kind of argument — the very reason things like open source and the fediverse exist is because enough people believe that there is a moral or ethical argument for them, not only a practical one. We rightly tend to reject the idea that proprietary software and open-source software are morally equivalent.

@evan @mcc @arthfach So, presuming that having morals, acting on them, and expressing those actions through software engineering is something worth doing — a point I suspect you'd likely agree with, given your other actions in the world — the question then becomes one of what the moral effects of using AI at all are, and how those effects might be mitigated or exacerbated in the context of open source software and federated networks.

That analysis doesn't look good for AI, to put it mildly.

@evan @mcc @arthfach So. The ethical dimension to AI *in general*, not specific to the fediverse, OSS, or social networking:

• It's predominantly developed by and profits fascists.
• It's founded on eugenicist thought.
• The environmental cost is untenable.
• Large models cannot exist without compromising consent and labor rights.
• AI is used to attack labor rights further (think automated scab).
• The risk of mental health effects is poorly understood so far.

(cont'd)

@evan @mcc @arthfach

• Artists, software engineers, etc. can say no to unethical projects; AI cannot (see the Trump admin using AI to generate white supremacist images, or Claude being used in the "kill chain").

To OSS *in particular*:

• AI introduces untenable dependence on proprietary services.
• AI code cannot be adequately reviewed for defects (see @glyph's excellent post on the subject).
• AI introduces unknown legal risk; it is probably mild, but IANAL and don't know how to assert that.

@evan @mcc @arthfach All of the above are some flavor of "AI encloses the common good for the benefit of the worst people on the planet, and imposes untenable externalities along the way."

With that in mind, for the fediverse and social networking in particular, the fediverse is a particularly vulnerable common good that bad actors have already tried to disrupt — AI can and should be viewed as one more such attack, and I do not see any reason I or anyone else should help the attackers out.