As a new, very green, very basic sysadmin, something missing from the “read the manual” vs. “let’s ask an LLM” debate is… the human element.

Aside from ethical considerations (environmental costs, stolen training data, erosion of skills and critical thinking), I will always choose “manual mode.” Why? I love receiving tech advice from people on the Fediverse.

I will forever remember that it’s @ilja who encouraged me to start self-hosting with #YunoHost.

I first heard about #FreeBSD from @stefano.

Earlier this week I learned about #tmux from @teapot_ben and @drfyzziks.

Today @antoine_ali helped me get this GoToSocial instance to federate again by restarting dnsmasq.

I know that if I run into issues, I can just shoot a DM to my tech mentors @stereo and @jan.

Let’s not forget the beauty of human connections 💖

@elena @ilja @stefano @teapot_ben @drfyzziks @antoine_ali @stereo @jan

If I run my own LLM, with completely open, open-source weights, is it really "stolen knowledge"?

How would that not be similar to a human reading the book, and incorporating the book in their mind?

It's the learning function, transformers, and attention that start to dispel the 'magic' of humanity. And then the copyright howls.

@crankylinuxuser assuming your question is in good faith and not just an attempt to push your own views with sophistry:

> How would that not be similar to a human reading the book, and incorporating the book in their mind?

even if you consider it similar: if i get the book from the archive of my good friend anna, companies will also consider it stealing. so yes, if those are the rules of the game, it's "stolen training data". weights being open doesn't really have anything to do with this; it's about the training data and how it was obtained.

don't get me wrong, i personally would rather see copyright be unneeded and abolished, but that's not the context of the OP, nor of your question.

@elena

@ilja @elena

No, I am asking about this in good faith. And I do think there's a LOT of nuance, and also a LOT of bad that can come from the backlash against LLMs.

The biggest argument against thinking machines is how capitalism works, and who it fails. AI is being pushed so it can "solve" wages. If the public owned these machines of production (communism), then we would all gain from the experimentation and potential usage. As it stands, though, AI only supplants real workers with token slot-machine mechanics. That is, until the AI companies raise rates 10x or 100x.

There are a lot of other issues as well, and a lot of it is frankly bad in whichever direction you go.

Copyright's just fucked up, but for a lot of creators, that's how they make a living. And LLM plundering and commercialization destroy their livelihoods. At minimum, if you're using looted data, you should NOT be able to sell the LLMs or token access.

Data centers and power are another huge fucking stupid area. It seems like it'd be easy to say "you will set aside solar and battery capacity for your DC at 200% of your usage". That's what China does. But the US solution is fucking Musk's answer of "20 propane engines running constantly". In other words, DCs here are propping up oil/coal/LNG/propane.

I run local LLMs, as in: the LLM sits in my RAM and runs when I ask questions. For me, this goes back to Marx and controlling the means of production. It's a new tool at my disposal, and I want to learn it. Nobody can rug-pull my own hardware from me.

I also use open-source models, primarily Qwen (Alibaba). Again, this is a data-sovereignty issue as well: I do not want to ship what I say to an LLM off to a third-party (US) provider who'll datamine me, advertise to me, and rat me out, all while profiteering on piracy.

I'm also not an extremist of either the "AI does everything" or the "slop machine clanker POS" camp. We've all seen incredibly dumb AI shit. I've also seen my local LLM work with me on reverse-engineering a BT protocol, and together we made a FLOSS middle layer. There are also parallels between LLM "neuroanatomy" and human brain organizational patterns.

I also have an inkling in the back of my head. Humans learn. LLMs, when in training, learn. What I don't want is some bullshit copyright perversion where a copyright owner claims to own the knowledge of anything (or anyone) that studied or read their content. You know, like what already happened with genetics and the patenting of genes and plants.

(I wrote this with absolutely no LLM. Barely even ran spellcheck… which 30 years ago was "AI". Sigh.)