https://smallsheds.garden/blog/2026/on-the-acceptance-of-genai/
None of these are true if you run your own LLMs on your own hardware, using FLOSS models.
But the #MastodonHOA has deemed all AI to be abhorrent as a blanket decision.
And frankly, if you exist in a capitalist society and you're not an owner, there is a 100% chance you are exploited. The capitalist system requires it.
"Trained on stolen data". Its at best a copyright violation. And I view things like Anna's Archive and Libgen to be internationally renowned Public Libraries.
"Massaged by people in global majority countries" - yes, people work in capitalism. And guess what... You're exploited.
"Trained in environmentally harmful data centers". This assumes that training is always needed, and its not. You can train once, and run X times. Again, you're stretching to make local LLM look horrible.
And really, the rest of these are poor excuses. I won't use poop smear (Anthropic), or OpenAI, or other SaaS token companies. I run local, and my setup does not have those problems you claim.
Except for the copyright issue. But again, I don't have that much respect for current US copyright.
"It's at best a copyright violation"
This may be true for published and public data... but that's not the only data that goes into these things. Any data that comes from breaches, users' private cameras, or anything else stored with an expectation of privacy is much worse than a copyright violation.
@Epic_Null @crankylinuxuser @tante
Data wants to be free. This argument simply doesn't work for those of us who have always been pro-open-data and anti-copyright.
@Epic_Null @crankylinuxuser @tante
yup, so you better e2e encrypt that sort of thing
I don't care about LLMs being trained on things I want everyone to have access to, because in order for everyone to have access to those things, they have to be available in a way that LLMs can access too.
I'd prefer the frontier LLM companies collapse into a black hole of capitalism, but that's just because I hate corpos, not LLMs.
@komali_2 @crankylinuxuser @tante I will admit that I reserve the right to be interested in AI once the bubble bursts and it's no longer being shoved into literally everything and forcefed to everyone.
Until then though, I am a hard out.