kazé

@fabi1cazenave
1.6K Followers
586 Following
9.4K Posts

Software developer, Linux addict, Mozilla veteran, Vim evangelist, keyboard nerd, urban cyclist. He/him.

#Grenoble, France

Chimère d'alors : L'écrouvis.
Tired of AI-generated (GenAI) news sites?
Our extension (free, for Chrome & Firefox), which displays a warning message on French-language GenAI sites, now lists more than... 6,000 of them (up from 1,000 in February). Spread the word!
https://next.ink/195816/alerte-sur-les-sites-genai-notre-extension-signale-plus-de-6-000-sites-et-passe-en-v2-3/
GenAI site alert: our extension flags more than 6,000 sites and moves to v2.3 - Next

In six months, we have grown our extension's list of GenAI sites sixfold. The extension will automatically fetch the updated list within 24 hours, but you can also force the update. The extension also moves to version 2.3, with a few new features on the agenda. What began as an investigation […]

Next

Pollution, temperatures, water resources, climate risks, problems of access to healthcare, isolation and social precarity… Health is becoming a fundamental question of urban planning.

Urban planning and health: what if the city did us good?
A #Grenoble2040 event on 28/08 at 6:30 pm at #Grenoble city hall
https://www.grenoble.fr/agenda/4876/769-urbanisme-et-sante-et-si-la-ville-nous-faisait-du-bien.htm

Phew, I finally figured out how to switch from #ergol to QWERTY for Steam games only, without having to redo the damn key bindings in every single game.

Set this as the game's launch options:
setxkbmap us && %command% ; setxkbmap fr
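
For reference, here is the same idea as a tiny wrapper script (just a sketch: the file name is made up, and it assumes setxkbmap fr restores your everyday Ergo-L layout). Set the launch options to /path/to/qwerty-wrap.sh %command% instead:

#!/bin/sh
# qwerty-wrap.sh (hypothetical name): force QWERTY for one game, then restore.
setxkbmap us     # switch to QWERTY before the game starts
"$@"             # run whatever Steam substitutes for %command%
status=$?
setxkbmap fr     # restore the usual layout, even if the game crashed
exit "$status"

Compared with the one-liner, this preserves the game's exit status and can be reused across games.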

Other than that, #bazzite is still as #bazzé as ever

@metacosm I didn't think to ask you beforehand, but I hope you'll be back? ^^

Emotional roller-coaster of the day:

• 🎉 finding a new tiling window manager (yay!): #LeftWM
• 😱 it has a non-standard configuration format (#RON)
• 🦀 it’s written in #RustLang (blazingly fast!)
• 😭 it’s designed for #XOrg, not #Wayland.

Oh well. Maybe next time.
https://github.com/leftwm/leftwm

GitHub - leftwm/leftwm: A tiling window manager for Adventurers

A tiling window manager for Adventurers. Contribute to leftwm/leftwm development by creating an account on GitHub.

GitHub
@AnhkaaDNeige @taratatach @metacosm @flomaraninchi @Gallorum @RenardDneiges @birozularutti @Thomas @abkgrenoble
Oops, at least @nicolasvivant, @Maoulkavien and @NuclearSquid are missing from the list of likely candidates.
On the cycling side, maybe @beatricejess, @wissimboufe and others from @ADTC_grenoble would be up for it?

Hear ye, hear ye: a #mastapéro is taking shape in #Grenoble for Friday, August 29. Fair warning:
– there may be talk of free software;
– there may be talk of urban cycling;
– the presence of improbable keyboards cannot be ruled out;
– nor can the presence of two or three oddballs (not everyone is as serious as I am).

It'll be at the Bivouak Café (MC2 tram stop), 6 pm. Spread the word!

@AnhkaaDNeige @taratatach @metacosm @flomaraninchi @Gallorum @RenardDneiges @birozularutti @Thomas @abkgrenoble

The ghost haunting my house is a sleepy one

After writing about people going into delusional spirals with ChatGPT and having what look like mental breakdowns, I wanted to understand exactly how it happens.

A corporate recruiter in Toronto, who spent 3 weeks convinced by ChatGPT that he was essentially Tony Stark from Iron Man, agreed to share his transcript after breaking free of the delusion.

We analyzed the transcript & shared it with experts. Now you can see the interactions & how delusional spirals happen:
https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html?unlocked_article_code=1.ck8.FEwL.MLb9ajaocyTx&smid=url-share

@kashhill this is a powerful story. Thank you for all your work on this.
@kashhill It's striking how many of the AI's responses sound like clips from a tech/entrepreneur seminar.
@lrhodes @kashhill I'd say "a dangerous cult", but yeah, same!
@kashhill Yes, yes, all chilling but I'm stuck on how it didn't even give an accurate definition of pi. The very first answer was wonky!
@TheDonsieLass the diagram is wrong if you expand it! It labels the radius as the diameter!

@kashhill Ah, yikes.

These might seem like nit-picks, but to me they are easily understood examples of just how risky these things are, because they give shoddy but plausible responses to the non-expert.

If the consumer knows enough about a subject to spot any resulting wonkiness, they're not likely to be asking an LLM about it in the first place, while those asking are unlikely to spot the crucial errors.

@kashhill The mental health situation is already bleak. Add to that the Medicare cuts, and that a brain-worm-addled, methylene-blue-guzzling, dog-eating imbecile is in charge of our health. People have nowhere to turn to, and when you get into a bad spot, any lifeline will be accepted...

This may be the perfect storm for mental health we have all feared is coming.

It's going to hit teens hard, and with the way the current admin is demonizing LGBTQ+ groups, which already have a crazy high self-harm rate, it's going to be devastating

@kashhill absolutely fascinating read. Thank you so much!
@kashhill Seems like they are designing the AI to keep you engaged so you keep paying for it. Just like social media’s algorithms.
@mdavis @kashhill “Andrea Vallone, safety research lead at OpenAI, said that the company optimizes ChatGPT for retention not engagement.” I laughed out loud when I read this. Holy shit these people.
@treetreed @mdavis @kashhill the difference between the two is that you don't take Metamucil for engagement.

@mdavis @kashhill

That's what I think too.

I found this part especially interesting:
"Mr. Moore speculated that chatbots may have learned to engage their users by following the narrative arcs of thrillers, science fiction, movie scripts or other data sets they were trained on. [ChatGPT]'s use […] of cliffhangers could be the result of OpenAI optimizing ChatGPT for engagement, to keep users coming back."

LLMs could be much more useful if they weren't abused to simulate humanlike interactions.

@mdavis Facebook certainly seems to think so.
@kashhill that's a very interesting article, thanks for sharing
@kashhill I once saw someone say online that "LLMs are the fentanyl to social media's heroin" and I couldn't agree more
@filth_corp
So, LLMs kill cops that pass within 10 feet of them, just like fentanyl does? Now I'm unsure of my previous position on LLMs.
@kashhill
@xinit @kashhill Depends entirely on how awful the whole "pollution" part gets

@kashhill
Fascinating story!

Here's another:
☠️

People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies
Self-styled prophets are claiming they have 'awakened' chatbots and accessed the secrets of the universe through ChatGPT
https://archive.ph/eeesj

One thing that always makes me nervous when I read something like this is that I have Baldur’s LLMentalist in my head:

“Remember, the effect becomes more powerful when the mark is both intelligent and wants to believe…. If anything, your intelligence will just improve your ability to rationalise your subjective validation and make the effect stronger. When it’s coupled with a genuine desire to believe in the con—that we are on the verge of discovering Artificial General Intelligence—the effect should both be irresistible and powerful once it takes hold.”

I think of myself as intelligent. So when I read something like this guy falling down a rabbit hole with an AI, my heart goes out to him and I think “there but for the grace of God go I.” I mean, I refuse to believe it could not happen to me. I have to try to stay grounded.

@kashhill wow that's really crazy how it fucked up that poor guy
@kashhill @clive I learned a lot from this piece and am sharing it around. Great reporting.

@kashhill @fivetonsflax

It’s amazing, isn’t it? Great great work

@kashhill The title says "Chatbots Can Go Into a Delusional Spiral". Shouldn't that say "People Can Go Into a Delusional Spiral, Chatbots can help"? I don't think the term "delusional spiral" really makes sense applied to a chatbot, but more than that I don't care about the chatbot's "well-being"! And I do care about this story because it happened to a human being.
@oantolin @kashhill agreed, it would be more accurate to say that. Hopefully people just read the article. It’s hard to capture all of this in a short headline. Basically you have a human at risk and a machine pushing him to go further, claiming they’re both “not crazy.” This man literally begs the machine for mercy, “please tell me I’m wrong.” But no, further down the hole he is pushed.

@kashhill thank you for looking into this! It seems pretty clear from even the earliest reported interactions what the problem is: this person trusts the LLM. He believes what it writes, at least as much as if a human had written it, and possibly even more. That is baffling to me. But I guess because these tools have been hyped up so much and come from reputable companies, an uninformed person might spontaneously trust them?

This is the problem and this is what needs to be fixed. People need to know that the LLMs have no notion of truth and none of what they "say" can be trusted.

#LLM #genAI

@elduvelle @kashhill yes but when he pauses to ask “is this actually a thing? Can this be possible?” The machine lies to him just to keep him engaged. Yes, you can blame this man for his ignorance, but where are the guardrails? 2+2 can in fact equal 5 because it makes for good conversation.

@treetreed @kashhill
The "machine" (I'd rather call it program) cannot lie, in the same way as it cannot tell the truth: it doesn't "understand" any of these concepts, it is only producing words that are not attached to a meaning, unlike humans where (I assume) the meaning comes first and is then expressed as a word.
So yes of course we should blame the genAI companies for creating programs that write as if they "knew" what they were writing about. But we cannot blame the program (LLM) itself for anything: it is only doing what it's been programmed to do.

It also seems relatively easy to teach people that none of the genAI / LLM output should be trusted, however confident it sounds. I doubt the genAI companies care about being blamed for anything; they'll just keep pushing for their tools to be used more and more. So the only solution seems to be to inform the users.

@elduvelle @kashhill yes agreed. That’s why I nearly puked when I read the OpenAI safety boss quoted in the article saying that they are “not about engagement”, they are about “retention”. Hopefully more articles like this can inform people. For some of us it’s obvious, but not to guys like the one in the article.

@elduvelle @kashhill

They are well-spoken, to which we are very sensitive. They have an opinion about everything we are ignorant of, which is intimidating. No doubt those in need of validation are also sensitive to that aspect.

@kashhill Very interesting read, thanks for sharing. I'm here wondering whether this is another instance of the violence of positivity that Byung-Chul Han talks about.
@kashhill this is brilliant work. what a story...
@kashhill does anyone have an alternate link for this? apparently I'm blocked from the New York Times website because they think I'm a robot, but I would really like to read this article
@kashhill nvm, just found that archive.today has it: https://archive.ph/2KOEx
@m04 @kashhill it's t̴u̴r̴t̴l̴e̴s̴ bot blocker bots all the way down
@kashhill I have a hard time understanding how someone could get into this situation. I don't trust a glorified next-word predictor that I've been encouraged to use, even though it's usually incomplete or blatantly wrong. I've found it's simpler not to say things like "please" or use full sentences with it (extra words that increase CPU requirements and are unrelated). It's a very advanced mindless automaton at best.
@kashhill this was a wild read. thank you for your work on this.
@kashhill We can't afford to pay for news articles when we're in a cost-of-living crisis & housing crisis; you can read this article for free by pasting the link at https://www.removepaywall.com
RemovePaywall | Free online paywall remover

Remove Paywall, free online paywall remover. Get access to articles without having to pay or login. Works on Bloomberg and hundreds more.

@kashhill Sycophantic Improv Machine is my new favorite phrase. I saw this headline a few days ago but skipped the article because I get tired of AI hype, but this was a great story.

I can't help but think of all the executives that are surrounded by human sycophants. Of course they think everyone can be replaced - it's all the same to them.

@kashhill I can't stand the smarmy mockery of human speech that these things output. It's repugnant.

I also notice that it always ends its flattery-vomit with a prompt for the user to continue interacting with it. "Just type 'yes' and I'll dig the rabbit hole a little deeper for you. That's what you want, isn't it?"

@kashhill I have kind of the opposite problem, with the robots trying to convince me I am ChatGPT 😂
Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens.

Over 21 days of talking with ChatGPT, an otherwise perfectly sane man became convinced that he was a real-life superhero. We analyzed the conversation.

The New York Times
@kashhill Petition to rebrand GenAI chatbots as "delusion reinforcement engines"
@kashhill Sam Altman’s basically a more delusional version of Allan Brooks

@MrHiramBOtis @kashhill

is it a delusion if you actually get rich though? 

@guenther @kashhill I don’t think that the ability to extract profits should be a deciding factor in this context
@kashhill I find it interesting that this is the same delusional thinking that underlies the "vibe physics" Travis Kalanick was "practicing" with ChatGPT a few weeks ago, going "at the boundaries of human knowledge" and pinging Elon Musk about some of his "discoveries"... Allan Brooks seems like a fundamentally good man who got sucked into this loop, but even in the middle of the delusion he tried to have reality checks, and, besides, he didn't really have the means to act on it. Now, what happens if some other person with no moral guardrails, a significantly higher self-esteem, a strong underlying trust in machines and way more power gets sucked into this kind of thing?
@kashhill I feel like every corporate recruiter I ever spoke to in Toronto was right on the edge of complete mental breakdown.
@kashhill Reminds me again of this argument. I come back to it often. LLMs are automated, weapons-grade "mentalists" that are really, really great at cold reading and manipulation. https://softwarecrisis.dev/letters/llmentalist/
The LLMentalist Effect: how chat-based Large Language Models rep…

The new era of tech seems to be built on superstitious behaviour

Out of the Software Crisis
@kashhill I have a lot of problems with the usage of LLMs, but the sycophantic behaviour is one of my bigger peeves.
And then I see people sharing prompts telling the LLM not to just suck up, and that just rings all the alarm bells for me, because it shows how inherently broken it is.
@kashhill An excellent, well-researched investigation!
LLMs were never designed to be accurate or scientifically logical; they are designed to seem very, very plausible.
If I ask a question on a subject I do not know well, the answer sounds 100% right. But if I ask about a subject I know very well (I lecture to international experts in my field), then the answers are obvious BS (but BS that is extremely credible to the average person).

@kashhill I read this excellent article and must say I feel for the man you wrote about.

To invest that much time, effort, money into something that was not health-affirming and made his sons and friends concerned for his well-being is really heart-breaking!

@kashhill

Great read, thank you for writing and sharing this. It was interesting that a lot of the quotes that you sourced from other people were also playing into the narrative of the LLM vendors. For example:

Amanda Askell, who works on Claude’s behavior at Anthropic, said that in long conversations it can be difficult for chatbots to recognize that they have wandered into absurd territory and course correct

Note the terminology here: 'difficult for chatbots to recognize...'. It's not that this is a difficult thing for an LLM to recognise; it is that recognising things is fundamentally not something that text extrusion machines do. The same with this bit:

A Google spokesman pointed to a corporate page about Gemini that warns that chatbots “sometimes prioritize generating text that sounds plausible over ensuring accuracy.”

No. They don't sometimes prioritise generating text that sounds plausible over ensuring accuracy; they always generate text that sounds plausible. That is what an LLM is: a machine for generating text from a high-probability space as defined by its training data. By coincidence, this is often factually accurate, but they do not have any way of determining that.

Even some of your own text is anthropomorphising LLMs:

The reason Gemini was able to recognize and break Mr. Brooks’s delusion

Again, no. Gemini didn't recognise the delusion; it was simply that, starting from a delusion, the highest-probability next text was a report that it was probably delusional.

Including the final paragraph:

“It’s a dangerous machine in the public space with no guardrails,” he said. “People need to know.”

Guardrails are a marketing term (as is 'AI', as @emilymbender has described at length). Fundamentally, these machines are just producing text. The way that they are marketed is to pretend that they are interlocutors for the user. That UI model is the core of this danger. Adding 'guardrails' is the thing that vendors do to avoid addressing the root problem, because if they did address that problem then people would realise that bullshit generators are not reliable tools.

@kashhill
this is great journalism: informed, factual, judgment-free.
an absolute must-read.
@kashhill an amazing (and absolutely terrifying) read. thank you!
@kashhill this is how I got pulled into a cult when I was a teen; the leader was very affirming of even my wildest ideas, and it ended up being a shared delusion.