What's the most common complaint I've heard about Linux?

Not the installation process.
Not finding a distro.
Not getting programs to work.
Not troubleshooting.
Not hardware compatibility.

The most common complaint about Linux I've heard is this:
For a normal computer user, asking for help is just about impossible.

They ask a simple question and:
People respond "Did you Google it?"
People complain that the question wasn't asked "correctly".
People respond "RTFM"
People get mad at them for making an easy mistake???

We can't expect normal people to know how to deal with any of that stuff, or even to know that they're expected to.

Search engines these days are awful, manuals are hard to read for most people (especially stuff like ArchWiki), and normal people make mistakes we think are easily avoidable.

The solution to making Linux more popular is not ruthless promotion. The solution is to actually help the people who are trying to use it.  

#Linux

@Linux_in_a_Bit not true anymore.
With AI integrated into most search engines, you often get the right response from it.
One of the few benefits of AI is that it can basically customise the documentation so it makes sense to you. It becomes a kind of live documentation.

A simple "how to fix … on [distro name]" works 95% of the time in my experience.

@CedC Do not peddle AI slop as the savior here. AI is not helpful, it is not useful. It is a prediction engine of what sounds like the right answer. Not what is the right answer, but what will sound plausibly like a correct answer.

That slop is part of the reason why the kindness in the Linux community is so important right now. AI is putting a lot of bad information out there. It makes up URLs for people to download packages from, and malicious folk then register those domains to serve malware to these trusting people. It makes up names of packages and programs that do not exist, sending users into forums asking for total nonsense because the pedo-bot or the bullshit engine told them that would fix their problem.

@deathkitten
You are going to make me sound like an AI fan, which is not the case, but your statement is incorrect.

Yes, AI is a prediction engine, but so are we.

If you make an LLM play chess, which is not what it was trained for, we now have proof that it _does_ create an internal representation of the board and its pieces, even if it is not supposed to "know" the rules.

1/2

@deathkitten @CedC

"proof" o_O

@pikesley @deathkitten yeah, I can dig up a few papers if you want
@deathkitten @CedC go for it, the notion that an LLM has an internal representation of *anything* is, um, crackpot at best tbh
@deathkitten @CedC did you find those papers mate?

@pikesley @deathkitten

This is a good start:

A general-purpose language model is capable of playing at a fairly good level (>1750 Elo) by exploiting its native capabilities, as Mathieu Acher shows on his blog:
https://blog.mathieuacher.com/GPTsChessEloRatingLegalMoves/

Debunking the Chessboard: Confronting GPTs Against Chess Engines to Estimate Elo Ratings and Assess Legal Move Abilities

Can GPTs like ChatGPT-4 play legal moves and finish chess games? What is the actual Elo rating of GPTs? There have been some hypes, (subjective) assessment, and buzz lately from “GPT is capable of beating 99% of players?” to “GPT plays lots of illegal moves” to “here is a magic prompt with Magnus Carlsen in the headers”. There are more or less solid anecdotes here and there, with counter-examples showing impressive failures or magnified stories on how GPTs can play chess well. I’ve resisted for a long time, but I’ve decided to do it seriously! I have synthesized hundreds of games with different variants of GPT, different prompt strategies, against different chess engines (with various skills). This post is here to document the variability space of experiments I have explored so far… and the underlying insights and results. The tldr; is that gpt-3.5-turbo-instruct operates around 1750 Elo and is capable of playing end-to-end legal moves, even with black pieces or when the game starts with strange openings. However, though there are “avoidable” errors, the issue of generating illegal moves is still present in 16% of the games. Furthermore, ChatGPT-3.5-turbo and more surprisingly ChatGPT-4, however, are much more brittle. Hence, we provide first solid evidence that training for chat makes GPT worse on a well-defined problem (chess). Please do not stop to the tldr; and read the entire blog posts: there are subtleties and findings worth discussing!

@pikesley @deathkitten

LLMs can develop internal representations that enable forms of emergent reasoning, even if imperfect:
• Othello-GPT: the model reconstructs the state of the board without explicit supervision, see https://arxiv.org/abs/2210.13382
• Chess & LLMs (2024): GPT-4 achieves ~1700 Elo with structured prompts, see https://arxiv.org/abs/2403.15498

Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task

Language models show a surprising range of capabilities, but the source of their apparent competence is unclear. Do these networks just memorize a collection of surface statistics, or do they rely on internal representations of the process that generates the sequences they see? We investigate this question by applying a variant of the GPT model to the task of predicting legal moves in a simple board game, Othello. Although the network has no a priori knowledge of the game or its rules, we uncover evidence of an emergent nonlinear internal representation of the board state. Interventional experiments indicate this representation can be used to control the output of the network and create "latent saliency maps" that can help explain predictions in human terms.
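To make the "probing" idea in that abstract concrete: the researchers train a small classifier to read a board square's state off the model's hidden states; if the probe generalises, the state is decodable from the representation. Here is a toy, hypothetical sketch of that technique (this is not the paper's code, and `fake_hidden_state` is an invented stand-in for a transformer's residual-stream vector):

```python
# Toy sketch of linear probing: if a square's state is encoded along some
# direction in the hidden state, a simple linear classifier trained on
# (hidden_state, label) pairs will recover it on held-out data.
import random

random.seed(0)

DIM = 8  # pretend hidden-state dimensionality

def fake_hidden_state(square_occupied: bool) -> list:
    """Stand-in for a model's hidden state: the square's state is encoded
    (noisily) along one direction, which is exactly what probes assume."""
    vec = [random.gauss(0, 1) for _ in range(DIM)]
    vec[3] += 2.0 if square_occupied else -2.0  # the 'feature direction'
    return vec

# Build a labelled dataset of (hidden_state, square_state) pairs.
labels = [random.random() < 0.5 for _ in range(400)]
data = [(fake_hidden_state(lbl), lbl) for lbl in labels]
train, test = data[:300], data[300:]

# Train a linear probe with the perceptron rule.
w = [0.0] * DIM
for _ in range(20):
    for x, y in train:
        pred = sum(wi * xi for wi, xi in zip(w, x)) > 0
        if pred != y:
            sign = 1 if y else -1
            w = [wi + sign * xi for wi, xi in zip(w, x)]

# Evaluate on held-out pairs: well above chance means the state is
# linearly decodable from the representation.
accuracy = sum((sum(wi * xi for wi, xi in zip(w, x)) > 0) == y
               for x, y in test) / len(test)
print(f"probe accuracy: {accuracy:.2f}")
```

The Othello-GPT result is stronger than this sketch suggests: the paper also intervenes on the recovered representation and shows the model's move predictions change accordingly.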


@pikesley

Did you end up reading "Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task"?
How did you find it?
Quite surprising, isn't it?

@CedC there are certainly a lot of words
@CedC not so much "proof" though