What's the most common complaint I've heard about Linux?

Not the installation process.
Not finding a distro.
Not getting programs to work.
Not troubleshooting.
Not hardware compatibility.

The most common complaint about Linux I've seen is this:
For a normal computer user, asking for help is just about impossible.

They ask a simple question and:
People respond "Did you Google it?"
People complain that the question wasn't asked "correctly".
People respond "RTFM"
People get mad??? at them for making an easy mistake.

We can't expect normal people to know that they'll have to deal with any of that stuff, let alone how.

Search engines these days are awful, manuals are hard to read for most people (especially stuff like ArchWiki), and normal people make mistakes we think are easily avoidable.

The solution to making Linux more popular is not ruthless promotion. The solution is to actually help the people who are trying to use it.  

#Linux

@Linux_in_a_Bit While I agree with all that, it is then again equally annoying when those "noobs" want to go directly into customizing/theming/"ricing" (hate that word) within the first 24 hours of using their distro, and are frustrated when this involves more than double-clicking an *.exe. On the other hand, a lot of people REALLY try hard to make everything as close to Win7/10/11 as possible, which will also fail in the long run.
@Slacker why is that annoying?

@malte @Slacker because you don't buy a car to tweak the engine until you know how the car works first. Then you learn about the engine. Then you tweak it.

Many 'noobs' are mad there isn't a bolt-on upgrade to rice it, i.e. a double-click method, and that it takes some learning.

At least, this is the experience I've had, and so I just don't bother helping anymore.

@Slacker @Kancept who is "you"?
@malte
Generic "you", aka "one"
@Kancept

@malte @Slacker @Kancept On the one hand:

You deserve to be appreciated when offering help to a 'noob', & their frustration does not make it okay for them to be rude. You don't need to put up with abuse.

On the other hand:

"I won't help you b/c you were too frustrated by your problem to adhere to my expectations, & I did not have the patience to tolerate incivility which I knew was not directed at me" doesn't seem like a viable solution.

Thoughts?

@Kancept @GoodNewsGreyShoes idk. to me it sounds like @Slacker is annoyed by people who get excited, which is a bit of a dick move. let people be excited, and work on your own ability to let people be excited 🤷‍♀️

@malte @Kancept @Slacker I fully agree:

"New users shouldn't assume they can easily optimize this operating system that's lauded for its optimizability & being more user-friendly than it's ever been" is unrealistic.

New users aren't going to stop wanting the nice things that Veteran users keep bragging about as reasons they prefer Linux.

@GoodNewsGreyShoes @malte @Slacker @Kancept I totally agree with this. an important aspect of emotional maturity is being able to see someone getting frustrated at something that you like, and not taking that as frustration at you, but rather meeting them where they are and saying “I totally understand why you’re frustrated. would you like some help? this was hard for me too at first but I can share what I know”

I get frustrated at any tech that I don’t immediately understand because it makes me feel incredibly stupid to see others using it (seemingly) so effortlessly. and I try to show others the same understanding and respect that I would like to be shown when I feel that way

@kasdeya

>> “I totally understand why you’re frustrated. Would you like some help? This was hard for me too, at first, but I can share what I know.”

This is a *phenomenal* way to respond to an upset/impolite request.👌💯🏆

- validates their concern & experience, *twice*
- indicates interest in & value of their goals
- sets reasonable expectations for support
- mutually disarming invitation

@Slacker @Linux_in_a_Bit
Ok, here's my latest: Debian Trixie XFCE. I recently relocated it, and now use my TV as the monitor. Now, whenever I switch the TV to the HDMI input the computer is attached to, the Display Settings dialog pops up for a "new monitor" (which it actually misidentifies, but selects the 'correct' default resolution).

IMHO, the dialog should time out and close, but since it won't, I select accept/ok to dismiss it, but it recurs the next time the TV input is selected.

Good luck searching for that, let alone solving it (I do have something to try, but I am often stymied when the solution is several years old and that setting no longer exists, or has been subsumed into systemd, or whatever).
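For what it's worth, XFCE stores the behavior for newly detected displays in an xfconf property, so something like the following might suppress that dialog. This is an untested sketch: the `/Notify` property on the `displays` channel is the usual suggestion, but its type and accepted values vary between XFCE versions, so check what your system reports first.

```shell
# List what the displays channel currently holds (sanity check first):
xfconf-query -c displays -l -v

# Newer XFCE (>= 4.16): /Notify is reportedly an integer, where
# 0 = do nothing when a display is connected, 1 = show the dialog.
xfconf-query -c displays -p /Notify -s 0

# Older XFCE: the same property is a boolean instead:
# xfconf-query -c displays -p /Notify -s false
```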

@Linux_in_a_Bit not true anymore.
With AI integrated into most search engines, you often get the right response from it.
One of the few benefits of AI is that it can basically customise the documentation to make it sensible to you. It becomes a kind of live documentation.

A simple "how to fix … on [distro name]" works 95% of the time in my experience.

@Linux_in_a_Bit @CedC 95% means you might break your system after being curious or frustrated 20 times. you need to be really boring to make it far in these conditions 😱

@malte @Linux_in_a_Bit I might have grown boring with age, but I seldom have problems to fix, and it just works.

I got started on typst this way very fast as well.

Sure, it does not work 100% of the time, but in the few cases where it does not, we can ask experts and provide them with interesting cases.

@CedC @Linux_in_a_Bit Or… consider this: it also often hallucinates complete bullshit. 😊 No, LLMs are not a solution.

@Razemix @Linux_in_a_Bit yes, it does hallucinate, but it's not "often", and most of the time it does, it is because the answer is not documented.

And if it does... Well, it will simply not work.

LLMs are (biased) tools with a _few_ use cases; to me, documentation is one of them.

@CedC Do not peddle AI slop as the savior here. AI is not helpful, it is not useful. It is a prediction engine of what sounds like the right answer. Not what is the right answer, but what will sound plausibly like a correct answer.

That slop is part of the reason why the kindness in the Linux community is so important right now. AI is putting a lot of bad information out there. It is making up urls for people to download packages from that malicious folk then go and register domains for to offer up malware to these trusting people. It makes up names of packages and programs that do not exist, sending users into forums asking for total nonsense because the pedo-bot or the bullshit engine told them that would fix their problem.

@deathkitten
You are going to make me sound like an AI fan, which is not the case, but your statement is incorrect.

Yes, AI is a prediction engine, but so are we.

If you make an LLM play chess, which is not what it has been trained for, we now have proof that it _does_ create an internal representation of the board and its pieces, even if it is not supposed to "know" the rules.

1/2

@deathkitten
Indeed, AI is used to create crappy content and the web is enshittifying at speed because of it, but that does not mean there are no good use cases.

It reminds me of people saying in the '80s that the computer was the tool of the devil.
2/2

@CedC Until AI cleans up its morality problem, there is no good use case for it.

I am tired of tolerating corporations doing horrific things in case they produce something useful
by accident without forcing them to behave with peoples' safety and needs first.

AI is used to undermine labor, to cut benefits and pay for the jobs they're half-ass replacing. AI is giving people unsafe and bad advice when it's used to summarize search results. AI is being deployed early and often without any guardrails to protect the users. AI is literally driving shortages in computer hardware for average consumers, based on promises of future money that doesn't even belong to any of the companies involved yet. AI is tainting water supplies. AI is making it harder for the average user to navigate the web, to find useful information, to use the apps and programs they're required to use for their day jobs, their government benefits, their healthcare.

So, until we can get those scales a little better balanced, I am not going to give it the benefit of the doubt. Especially not to the companies trying to shove it down all our throats against our will (with a "maybe later" instead of a "no" button, might I add). Yeah, there are things where text-prediction LLMs can be used productively, but the way it is being massively deployed, without any care for safety or whether the typical user even wants it, is utterly irresponsible and outweighs all other considerations.

@deathkitten I mean, anything corporations do is questionable at least. We cannot expect morality from them.

AI, sure

Industrialization, pretty much.

Until we change the system, we cannot expect morality from it. But that's not the topic. I was just saying LLMs can help Linux newbies.

And we could probably do it with local fine tuned models. That would be cool and ecological.

@CedC Considering that everyone seems to still be using the meat grinders provided by the corporations to try to build these locally trained fine tuned models, I'm still unsure how we can achieve an ethical and useful option. And given that the open source community does not get the fiscal and other support it needs for the work it already does, I'm sure not going to ask them to reinvent the meat grinder for this project.

I wish you luck on this dream, but as you pointed out yourself, anything corporations do is questionable at least, and unfortunately the state of our society is such that open source is forced to depend too much on corporations for support. I mean, FFS, the most popular place to host your code online these days is owned by Microslop.

I know the system needs to be rebuilt, and that's what I want. I also know that burning it down without rebuilding first is going to leave a lot of people out in the cold. I don't have a good solution, I just have a lot of anger I'm trying to find productive places to put it. And I am exhausted that every CEO and their mother is trying to force feed me AI. So I am also exhausted when people try to remind me that there are some good uses for it.

I've spent too much time in a world that already tells me I'm not enough because I'm a woman and because I have ADHD. I don't also need to be told I'm unreasonable for trying to draw a line in the sand of 'no more AI until corporations are forced to behave better'.

@deathkitten I did not mean to upset you, sorry.
I see you do have lots of anger, and I hope you find a good way to use it.

We need it to change this broken world.

We don't always have all the background of people when discussing online, so I am not sure how much I hurt you, but I want to tell you one last important thing: you are enough.

@CedC You've actually been genuinely polite, unlike the other person I got into a conversation with under this original post. I disagree with the point you're driving for, but at least you aren't being rude.

@deathkitten @CedC

"proof" o_O

@pikesley I just wanted to say I love your display name.
@pikesley @deathkitten yeah, I can dig up a few papers if you want
@deathkitten @CedC go for it, the notion that an LLM has an internal representation of *anything* is, um, crackpot at best tbh
@deathkitten @CedC did you find those papers mate?

@pikesley @deathkitten

This is a good start:

A general-purpose language model is capable of playing at a fairly good level (>1750 Elo) by exploiting its native capabilities, as Mathieu Acher shows on his blog:
https://blog.mathieuacher.com/GPTsChessEloRatingLegalMoves/

Debunking the Chessboard: Confronting GPTs Against Chess Engines to Estimate Elo Ratings and Assess Legal Move Abilities

Can GPTs like ChatGPT-4 play legal moves and finish chess games? What is the actual Elo rating of GPTs? There have been some hypes, (subjective) assessment, and buzz lately from “GPT is capable of beating 99% of players?” to “GPT plays lots of illegal moves” to “here is a magic prompt with Magnus Carlsen in the headers”. There are more or less solid anecdotes here and there, with counter-examples showing impressive failures or magnified stories on how GPTs can play chess well. I’ve resisted for a long time, but I’ve decided to do it seriously! I have synthesized hundreds of games with different variants of GPT, different prompt strategies, against different chess engines (with various skills). This post is here to document the variability space of experiments I have explored so far… and the underlying insights and results. The tldr; is that gpt-3.5-turbo-instruct operates around 1750 Elo and is capable of playing end-to-end legal moves, even with black pieces or when the game starts with strange openings. However, though there are “avoidable” errors, the issue of generating illegal moves is still present in 16% of the games. Furthermore, ChatGPT-3.5-turbo and more surprisingly ChatGPT-4, however, are much more brittle. Hence, we provide first solid evidence that training for chat makes GPT worse on a well-defined problem (chess). Please do not stop to the tldr; and read the entire blog posts: there are subtleties and findings worth discussing!

@pikesley @deathkitten

LLMs can develop internal representations that enable forms of emergent reasoning, even if imperfect:
• Othello-GPT: the model reconstructs the state of the board without explicit supervision, see https://arxiv.org/abs/2210.13382
• Chess & LLMs (2024): GPT-4 achieves ~1700 Elo with structured prompts, see https://arxiv.org/abs/2403.15498

Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task

Language models show a surprising range of capabilities, but the source of their apparent competence is unclear. Do these networks just memorize a collection of surface statistics, or do they rely on internal representations of the process that generates the sequences they see? We investigate this question by applying a variant of the GPT model to the task of predicting legal moves in a simple board game, Othello. Although the network has no a priori knowledge of the game or its rules, we uncover evidence of an emergent nonlinear internal representation of the board state. Interventional experiments indicate this representation can be used to control the output of the network and create "latent saliency maps" that can help explain predictions in human terms.


@pikesley

Did you end up reading "Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task"?
How did you find it?
Quite surprising, is it not?

@CedC there are certainly a lot of words
@CedC not so much "proof" though
@pikesley @CedC @deathkitten LLMs are somewhat essentialization engines; they learn the characteristics of what they must reproduce. Those "summarized" characteristics are embodied in embeddings. It is possible, to a certain extent, to see that as what the LLM "knows".

When you have trained your model, the embeddings alone can be valuable as "knowledge"
LLM Embeddings Explained: A Visual and Intuitive Guide - a Hugging Face Space by hesamation

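
The "geometry as knowledge" idea can be sketched with a toy example (the vectors below are made up for illustration, not taken from any real model): items a model treats as similar end up close together in embedding space, and that closeness is measurable.

```python
import math

# Toy 3-dimensional "embeddings" -- hypothetical values for illustration;
# real models use hundreds or thousands of dimensions.
embeddings = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.8, 0.9, 0.2],
    "car": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, near 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# "cat" sits closer to "dog" than to "car" -- the geometry itself
# encodes what this (toy) model "knows" about the three words.
print(cosine(embeddings["cat"], embeddings["dog"]) >
      cosine(embeddings["cat"], embeddings["car"]))  # True
```

The comparison works the same way at realistic dimensions; this is essentially what "embeddings as knowledge" means in practice.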

@CedC @deathkitten @pikesley (idem, I'm no AI fan here, just a curious dude)
@clovis @deathkitten @pikesley
Agreed, but I find it more impressive for a machine to "think", or get close to it, than to be a know-it-all with a lot of embeddings.

@CedC sounded like an AI fan in your first post. Block.

People need to be able to trust each other to get and give technical help that can affect quality of living.

Trust is broken when someone in the conversation tries to promote chatbots and bullshit coding programs in lieu of the understanding, sympathy, and patience requested in the top of the thread.

@CedC @Linux_in_a_Bit

> how to fix no sound on Ubuntu

I don’t even know how to do that and no AI one-liner is going to save any of us, let alone somebody coming from Windows who’s afraid of a terminal.

Let’s say most things are now easier for most people, but a knowledgeable human is going to have to deal with this question either way.

@CedC @Linux_in_a_Bit I might get hate from my Fedi ingroup for this but I find this to be an extremely good use of AI. I use Perplexity (a really nice AI search engine tool) for quickly learning technical stuff that would take me a ton of work reading scattered, sparse documentation otherwise

the trick is to only ask it for information that you can immediately test/verify

(with this said, I don’t financially support AI companies ever because I’m very worried about the risks posed by AI)

@CedC @Linux_in_a_Bit and the other great thing (/s) about those answers is that they have no responsibility or safeguards to stick to the truth, so it's always a fun little gambling game of "will this work, do nothing, or brick my device?"
So much better than asking real people who actually know about a thing and can give you an accurate and nuanced answer!
@Linux_in_a_Bit Ubuntu understood that in 2004, and that's why they're the default thing to this day

@Linux_in_a_Bit

> What's the most common complaint I've heard about Linux?

idk maybe try googling it first??? Didn't read the rest of the post smh

@Linux_in_a_Bit It might sound simple, and I am aware people often volunteer, but not getting a reply after hours of waiting is very frustrating, even for me as a nerd. At least after a while, have someone say "sorry, it seems we can't help you either. Maybe you can leave a ticket on our tracker/mailing list" or something along those lines. That often would have made me feel better than the feeling of being ignored, or worse, feeling I asked something so stupid nobody wants to talk to me.
@mtrnord @Linux_in_a_Bit as frustrating as that is, it helps to remember that the people who do help are global and probably not in your time zone.
@Linux_in_a_Bit @Kancept sure. But in days where chats are not fire-and-forget like IRC, the chat is asynchronous. So after a day or two, the timezone argument IMHO doesn't work anymore. I am totally fine if a response takes a day or so. Sure, it is frustrating that it takes that long if something breaks on you, but it's reasonable. But beyond that, it quickly turns into feeling like you aren't heard.

@mtrnord @Linux_in_a_Bit no, I get that. It's like yelling into the void. You had said a few hours in your comment. A day or so, I can see the frustration.

For me, it's not so much time, but how so many use Discord these days as a support channel. No history to even search, really.

@Linux_in_a_Bit

The funniest thing is when you google a problem and all the threads that pop up tell you to “just google it.”

Actual clown-shoes behavior. No desire whatsoever to actually understand the user, they think “PEBCAK” is the funniest concept ever.

@Linux_in_a_Bit
Another big issue is the intense use of jargon in replies to questions. Sure, it's a faster way to get information from your brain onto a forum, but a new user to Ubuntu is not going to understand it, and isn't likely to go looking up every third word.

@Bwaz @Linux_in_a_Bit

AVOID JARGON

SPELL 👏
OUT 👏
ACRONYMS 👏
(First time you use them)

@Linux_in_a_Bit You're right, but just saying, Kimi Code 2.5 is currently free to use, and can make a fantastic technical coach/explainer for just the sort of noob you're describing.

@Linux_in_a_Bit Offer to pay for it maybe vOv

I hear you. I've been frustrated too. But you're asking people to share expertise for free when they honestly have already shared a whole crap ton of it.

Maybe people who can't understand that should stick to the proprietary platforms that are willing to monetize your soul as collateral instead.