Red Hat pushing AI
https://piefed.social/c/fuck_ai/p/1562919/red-hat-pushing-ai
That’s exactly why I migrated from Fedora to Arch; I want a distro with little to no corporate influence.
To me, the real Linux experience is Arch.
Now I’m thinking of doing the same, because the entire reason I chose Fedora was to take advantage of their institutional support.
But if those institutions are going to sell the floor right out from under me to AI, then there’s no point.
It’s international, but there are a lot of Europeans: archlinux.org/people/developers/
CachyOS has its roots in the Polish Arch community if I recall correctly. It’s much less daunting, and I’d highly recommend it.
Check out Garuda. It’s a really stable, well-maintained version of Arch. I’ve been running the same install for over 9 months with only one screw-up, which Garuda’s own health tool fixed for me.
It’s been a great way for me to learn Linux and Arch, and it rivals Bazzite in my own gaming benchmarks.
thanks! ill look at this
it’s also the best Final Fantasy Eikon.
IMO there are two main Linux camps, and most users fall somewhere in between.
Rolling-OS lovers who want to tinker (e.g. Arch).
People who want stability over everything (e.g. Debian).
The only truly wrong answer is paying for RHEL.
at first I thought you were a troll. after reviewing your post and comment history I can see you’re just mentally challenged.
my condolences to your employer for hiring someone so inept they actually believe it’s difficult to “make their computer work”.
kudos to them though, the handicapped need to make a living as well.
This comment is in the realm of a personal attack, which is against Lemmy.world TOS.
While I may agree that using an LLM for sysadmin is insane, you’re being a complete dick.
this person is clearly a troll. they came to this community, FuckAI, and proceeded to espouse their positive opinion of AI while disparaging a platform that is popular within the community.
I have zero chill with pieces of shit.
I mean, I guess I could see it as a sort of training wheels to get going, if one comes into this cold… There are certain commands with very complicated arguments, and particularly if they’re only needed very occasionally, I could see an LLM helping folks get whatever specific bit they want out of a very verbose usage output or man page.
But it so famously messes up that they’d have to go look up any suggestion anyway to understand what it will actually do.
If one has no interest in actually learning the command line as a long-term goal, though… just don’t bother. Their usage scenario is almost certainly within the scope of the GUI experience. They might be slower on some tasks and unable to scale their skills out to mass operations or create convenient automation, but not everyone has that need.
After evaluating local models, Claude, Gemini 3, ChatGPT… it’s certainly interesting, but it really messes up a lot, and if it has access, even there, to anything “real”, it’s supremely dangerous to use if you are not qualified to review the results.
*LLVM?
*LVM?
careful everyone, [email protected] will go through your comments and report all of them.
not surprised though, just disappointed that they don’t have the conviction to stand up for their opinions and instead weaponize mods to shut people up.
That’s a little disingenuous.
Red Hat markets support and professional responses to CVEs, in the form of adjusted builds, on an enterprise distro of Linux.
I mean, yeah, they absolutely technically profit, and open source is the root of their classic product, so you’re not wrong there, and I’m not saying otherwise. But you’re framing it as if they do nothing but take the source, schlep it out, collect protection money, and keep it all themselves.
This kind of characterization completely sidesteps an important truth about open-source development: the people involved with projects like the kernel, for instance, largely work at shops like Red Hat and are paid to do the open-source work.
Let me say it again: this corp that is profiting off of open source also happens to be donating skilled, paid labour to open source. Look at the leaders of the kernel dev effort and you’ll see Google, ARM, Red Hat, Intel, and SUSE, all paying for the labour that keeps it going.
Stop characterizing them with language suggesting they’re slimy leeches. They make bad decisions aplenty, but the biggest thing their involvement provides is continuity and livelihood security for the people maintaining what you hold dear.
Condemn them for their stupid decisions - Ansible, systemd, IBM buying Red Hat and accidentally speeding up their enshittification because #ibm, etc. But given how little the foundations support this project (the only one I looked at: I’m only slightly less lazy than the person who didn’t even do this), I suspect their cozy little arrangement, where they make bank and you get free software to play with, is benefiting you more than you’ll admit.
gpt-oss 20B
See all the errors in that rambling wall of slop (which they posted without even checking, for some reason)?
Trying to use a local LLM… could be worse. But in my experience, small ones are just too dumb for stuff beyond fully automated RAG or other really focused cases. They feel like fragile toys until you get to 32B dense or ~120B MoE.
Doubly so behind buggy, possibly vibe coded abstractions.
The other part is that Goose is probably using a primitive CPU-only llama.cpp quantization. I see they name check “Ryzen AI” a couple of times, but it can’t even use the NPU! There’s nothing “AI” about it, and the author probably has no idea.
I’m an unapologetic local LLM advocate in the same way I’d recommend Lemmy/Piefed over Reddit, but honestly, it’s just not ready. People want these one-click agents on their laptops, and (unless you’re an enthusiast/tinkerer) the software’s simply not there yet, no matter how much AMD and such try to gaslight people into thinking it is.
Maybe if they spent 1/10th of their AI marketing budget on helping open source projects, it would be…
Yeah, that is a great application because you can eyeball your bash script and verify its functionality. It’s perfectly checkable. This is a very important distinction.
It also doesn’t require “creativity” or speculation, so (I assume) you can use a very low temperature.
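As a sketch of that workflow: with an OpenAI-compatible local endpoint (e.g. a llama.cpp server; the URL, model name, and task string below are all assumptions), you’d pin the temperature to 0 and print the generated script for human review instead of executing it:

```python
import json

# Sketch only: builds a chat-completion payload for a local OpenAI-compatible
# server (llama.cpp, etc.). The model name and task are placeholders.
def build_script_request(task: str) -> dict:
    return {
        "model": "local-model",   # assumption: whatever your server exposes
        "temperature": 0,         # script generation needs no "creativity"
        "messages": [
            {"role": "system",
             "content": "Reply with a single bash script and nothing else."},
            {"role": "user", "content": task},
        ],
    }

payload = build_script_request("archive /var/log/myapp logs older than 7 days")
# POST this to your local /v1/chat/completions endpoint, then *read* the
# returned script yourself before running it -- that's the checkable part.
print(json.dumps(payload, indent=2))
```

The point is the review gate, not the plumbing: the output is a short script you can eyeball line by line before it touches anything.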
Contrast that with Red Hat’s examples.
They’re feeding it a massive dump of context (basically all the system logs), and asking the LLM to reach into its own knowledge pool for an interpretation.
Its assessment is long and not easily verifiable; see how the blog writer even confessed “I’ll check if it works later.” It requires more “world knowledge,” and long context is hard for low-active-parameter LLMs.
Hence, you really want a model with more active parameters for that… Or, honestly, just reaching out to a free LLM API.
Thing is, Red Hat’s blogger could probably run GLM Air on his laptop and get a correct answer spit out, but it would be extremely finicky and time-consuming.
Seems like everyone’s taking a left turn into Crazyville…
This is a good approach.
It’s an MCP server, a “bridge”: a standard way for LLMs to talk to your system. It’s not an LLM. It doesn’t mandate an LLM. It doesn’t tie you to a specific LLM.
It’s optional. Don’t use it, or don’t install it. No harm done. Even if it’s installed and running, if you don’t use an LLM with local access, no harm done.
Even the increased attack surface is not a big deal, since it is local, optional, and focused on reading statuses rather than executing actions.
It’s an open standard. If you decide to use it with an LLM but don’t like the results, try a different LLM
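To make the “bridge” point concrete: MCP rides on JSON-RPC 2.0, so a read-only status query is just a small, inspectable message. Minimal sketch below; the tool name (`get_service_status`) and its arguments are hypothetical, not taken from Red Hat’s server:

```python
import json

# MCP uses JSON-RPC 2.0; "tools/call" is the standard method for invoking a
# tool on an MCP server. The tool name and arguments here are made up.
def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> dict:
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

req = make_tool_call(1, "get_service_status", {"unit": "sshd.service"})
print(json.dumps(req))
```

Nothing here runs a model: whatever LLM client you use just sends this over stdio or HTTP and the server answers with data, which is why swapping LLMs (or using none at all) leaves the server untouched.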
Yeah, most people, myself included, don’t like AI on principle, but there are valid use cases for it, and not having the capability of integrating with AI tools is going to be a dealbreaker for someone.
That said, I’ve heard MCP is a bit of a shitshow of a standard and is woefully inefficient.
Yeah, I suppose so. I find plenty to hate about the way companies and too many individual apps talk about LLMs. I really hate that it’s one of the metrics my employer looks at. I always hate how wasteful speculative bubbles like this are.
But maybe this isn’t the place for me since I see some good use cases and appreciate the few times someone does it right.