@ilja @elena
No, I'm talking about this in good faith. And I do think there's a LOT of nuance, and also a LOT of bad that can come from the backlash against LLMs.
The biggest argument against thinking machines is how capitalism works, and who it fails. AI is being pushed so it can "solve" wages. If the public owned the machines of this production (communism), then we would gain from experimentation and potential usage. At this time, though, AI only supplants real workers with token slot machine mechanics. That is, until the AI companies raise rates 10x or 100x.
There are a lot of other issues as well, and frankly a lot of it is bad in whichever direction you go.
Copyright is just fucked up, but for a lot of creators, that's how they live. And LLM plundering and commercialization destroys their livelihood. But at minimum, if you're using looted data, you should NOT be able to sell the LLMs or token access.
Data centers and power are another huge fucking stupid area. It seems like it'd be easy to say "you will set aside solar and battery for your DC at 200% of your usage". That's what China does. But the US solution is fucking Musk's answer of "20 propane engines running constantly". In other words, DCs here are propping up oil/coal/LNG/propane.
I run local LLMs. As in, the LLM is sitting in my RAM and runs when I ask it questions. For me, this goes back to Marx, to controlling the means of production. It is a new tool at my disposal, and I want to learn it. Nobody can rug-pull my own hardware from me.
I also use open source models, primarily Qwen (Alibaba). Again, this is a data sovereignty issue as well. I do not want to ship what I say to an LLM off to a 3rd-party (US) provider who'll data-mine, advertise, and rat me out, all the while profiteering on piracy.
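For anyone who wants to try the same setup, here's a minimal sketch (assuming llama.cpp or ollama is installed and you've already downloaded a quantized Qwen model; the exact model names and file paths are illustrative, not a recommendation):

```shell
# Run a local model entirely on your own hardware with llama.cpp.
# The weights stay on disk, inference happens in your own RAM,
# and nothing you type leaves the machine.
llama-cli -m ./qwen2.5-7b-instruct-q4_k_m.gguf \
  -p "Summarize this BT packet capture:" \
  --ctx-size 8192

# Or, with ollama as a simpler frontend (pulls the model on first run):
ollama run qwen2.5 "Summarize this BT packet capture:"
```

Quantized (GGUF) builds are what make this practical on consumer hardware; a 7B model at 4-bit quantization fits in roughly 5 GB of RAM.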
I'm also not an extremist on either end, "AI does everything" or "slop machine clanker POS". We've all seen incredibly dumb AI shit. I've also seen my local LLM work with me on reverse engineering a BT protocol, and we successfully made a FLOSS middle layer. There are also parallels between LLM neuroanatomy and human brain organizational patterns.
I also have an inkling in the back of my head. Humans learn. LLMs, when in training, learn. What I don't want is some sort of bullshit copyright perversion where a copyright owner claims they own the knowledge of someone (or something) that studied or read their content. You know, like what already happened with genetics and the patenting of genes and plants.
(I wrote this with absolutely no LLM. Barely even did spellcheck... which 30 years ago was AI. Sigh)