@tante

None of these are true if you run your own LLMs on your own hardware, using FLOSS models.

But the #MastodonHOA has deemed all AI to be abhorrent as a blanket decision.

And frankly, if you exist in a capitalist society and you're not an owner, there is a 100% chance you are exploited. The capitalist system requires it.

@crankylinuxuser FLOSS models (which are really just freeware) tick most of those boxes. Trained on stolen data, massaged by people in global-majority countries, trained in environmentally harmful data centers, outsourcing my skills to a freeware product a company dumped on me, using a tool imbued with and trained on how big tech wants to see the world, when the effort could have gone to something meaningful. So yeah, nope.

@tante

"Trained on stolen data". Its at best a copyright violation. And I view things like Anna's Archive and Libgen to be internationally renowned Public Libraries.

"Massaged by people in global majority countries" - yes, people work in capitalism. And guess what... You're exploited.

"Trained in environmentally harmful data centers". This assumes that training is always needed, and its not. You can train once, and run X times. Again, you're stretching to make local LLM look horrible.

And really, the rest of these are poor excuses. I won't use poop smear (Anthropic), OpenAI, or the other SaaS token companies. I run locally, and that doesn't have the problems you claim.

Except for the copyright issue. But again, I don't have that much respect for current US copyright.

@crankylinuxuser @tante

"It's at best a copyright violation"

This may be true for published and public data... but that's not the only data that goes into these things. Any data that comes from breaches, users' private cameras, and anything else stored with an expectation of privacy is much worse than a copyright violation.

@Epic_Null @crankylinuxuser @tante

Data wants to be free. This argument simply doesn't work for those of us who have always been open-data, anti-copyright.

@komali_2 @crankylinuxuser @tante Every message between you and your doctor or you and your loved ones is data.

@Epic_Null @crankylinuxuser @tante

yup, so you'd better e2e encrypt that sort of thing

I don't care about LLMs being trained on things I want everyone to have access to, because for everyone to have access to those things, they have to be available in a way that LLMs can access too.

I'd prefer the frontier LLM companies collapse into a black hole of capitalism, but that's just because I hate corpos, not LLMs.

@komali_2 @crankylinuxuser @tante I will admit that I reserve the right to be interested in AI once the bubble bursts and it's no longer being shoved into literally everything and force-fed to everyone.

Until then though, I am a hard out.

@Epic_Null @crankylinuxuser @tante I strongly recommend trying the PRC models then, since they're built from tech-on-tech violence lol. They're distilled from the frontier models, so using them represents a capitalistic harm to OpenAI etc.
@komali_2 @crankylinuxuser @tante Until the bubble bursts and it's no longer being shoved into literally everything and force-fed to everyone, I will not be taking an interest in AI from any model.