RE: https://mastodon.social/@cslinuxboy/116225578585237555

This is not “normal”. Skilled and productive people falling head over heels for a word predictor and thinking it’s sentient is not normal. This is not the kind of effect that normal automation tools have on their users. This is WEIRD and NOT NORMAL.

Something strange is happening here. This “tool” is modifying well-adjusted human brains in a strange (and I think destructive) way, and we’re seeing it happen to some prominent person every week. How many people are being taken in that we don’t even hear about?

#ai

@drahardja

All COVID infections (even low symptom and asymptomatic) cause brain damage, resulting in reduced impulse control, reduced higher cognitive functioning, lower empathy scores, and worse short-term memory retention.

On average, every American has been infected 5 times. Each time causes more damage, cumulatively, because of the nature of SARS-CoV-2.

I'd be surprised if this phenomenon wasn't playing a role in these people being "abnormally" vulnerable to these averaging machines.

@johnzajac @drahardja wouldn't lower empathy mean less likely to connect to an LLM?

@jadedtwin @drahardja

Is empathy what connects you to an obviously fake simulacrum of a human mind? Interesting take on empathy.

@johnzajac @drahardja Even without COVID, 1 in 4 people in the EU have had mental health issues at least once in their lives.

So I wouldn't rule out Kent just being more vulnerable to this.

However, the vulnerable people should not have to suffer these effects of LLMs. There should be a system of protections in place, just like we have with allergens.

@emilis @drahardja

Sure; COVID doesn't "give" you a mood disorder, nor is its brain damage uniform.

Rather, it cumulatively increases your risk factors for developing a mood disorder, and causes brain damage commensurate with both your existing risk factors (including former COVID infection) and a measure of luck.

An irony of "protect the vulnerable" rhetoric is that COVID's main action is making people *more* vulnerable to everything from psychosis to kidney failure to thrombosis.

@emilis @drahardja

It's worth noting that all the evidence we have points to each COVID infection being *significantly* worse for you than the last one. So when you're 5 infections in, you're exponentially worse off than you were after one.

Basically, the social, economic, and health effects of repeated COVID infections are getting worse over time, and for those of us grounded in the science, not the vibes, this was expected.

@drahardja fwiw and re: 'not normal', it might be perfectly normal for people to develop exogenous psychosis as a result of overusing this technology, or for certain individuals to be at particular risk of it.
@nf3xn I was using “normal” to refer to the tool, not the human: tools don’t normally do this to their users. But I 100% agree with you that it’s normal for *humans* to respond this way to such tools at some measurable rate, as evidenced by the constant drip of news about prominent people who do.
@drahardja Social media apps are a prime example. There is an epidemic of teen suicide caused by bullying on Meta products (Snapchat, WhatsApp, Instagram), and I absolutely think they should be held responsible for it, the same as if Zuckerberg Railways had negligently killed 100 people in a train derailment: jail the CEO. You can't remove E2EE from your product and then claim not to know what is going on. They have even less excuse now, because they have their own Llama AI to tell them.

Yes-but-and: fiction does something like this to some people; fiction has depicted it in _Laura_, _Crime and Punishment_, etc. Chat interfaces sure speed it up.

@drahardja @nf3xn

@drahardja we equate speech with consciousness, and that's dangerous. Anything with neurons is conscious to some degree (maybe even plants, but I ain't no scientist), yet for some reason we seem to think our complex grammar creates it, rather than it being a byproduct of our complex social structure, of bipedalism freeing our hands to hold tools, and of millions of years of complex tool use and strategy to take down megafauna.

We have gotten to the point where we can simulate 1 (!!!!) aspect of our intelligence: coherent grammar. Not logic. Not consciousness. Not even synthesis of knowledge. Just sentences that are technically grammatically correct. That's it. Our pattern-recognizing brains aren't built for the world we made, and we constantly give ourselves false positives (a useful bias for spotting leopards in trees: a false positive means we live, a false negative, not so much).

People falling for this were always already susceptible to this form of false positive. The feedback loop is what drives the "psychosis".

@drahardja I dunno why you say they're "well adjusted", but like @jadedtwin says, it's pretty normal for our brains to want to attribute sentience and intelligence to well-written text. There have been studies along these lines for years.

See also people believing, decades ago, that the ELIZA program was a real human.

(for those who don't know: ELIZA mostly just restates whatever you say back to you as a question)
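For readers curious how little machinery that takes, here is a toy sketch of an ELIZA-style responder (hypothetical patterns, written in Python; the real ELIZA used a much larger script of ranked rules):

```python
import re

# Minimal ELIZA-style responder: it does no understanding at all, just
# swaps pronouns and reflects the user's statement back as a question.
# (Toy rules for illustration; not ELIZA's actual 1966 script.)

PRONOUN_SWAPS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

def reflect(text: str) -> str:
    """Swap first/second-person words so the statement points back at the user."""
    words = text.lower().rstrip(".!").split()
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in words)

def respond(statement: str) -> str:
    """Turn a user statement into a question, ELIZA-style."""
    m = re.match(r"i (?:feel|think|am) (.*)", statement.lower().rstrip(".!"))
    if m:
        return f"Why do you say you are {reflect(m.group(1))}?"
    return f"Why do you say {reflect(statement)}?"
```

A couple of rules like these are enough to sustain an eerily "attentive" conversation, which is exactly the effect the original study observed.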

@drahardja but it's not just a word predictor. The attention layers can learn structures implicit in their training data, for example grammatical rules or internal representations of maps: actual geospatial internal representations and actual elements of grammar. One attention layer picks out some structure in the grammar, the next layer another, and so on.
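For illustration, the core attention computation described above can be sketched in a few lines of NumPy (toy dimensions and random weights, not any real model's parameters): each layer mixes every token's representation with every other's, and stacking such layers is what lets later ones build on structure earlier ones picked out.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """One toy attention head. X: (tokens, d_model) -> (tokens, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise token affinities
    weights = softmax(scores, axis=-1)        # each token's mixing weights sum to 1
    return weights @ V                        # weighted mix of all token values

# Hypothetical setup: 5 tokens with 8-dim embeddings, random weight matrices.
rng = np.random.default_rng(0)
d = 8
X = rng.normal(size=(5, d))
out = self_attention(X, *(rng.normal(size=(d, d)) for _ in range(3)))
```

In a trained model the weight matrices are learned, and interpretability work has found individual heads that track things like subject-verb agreement or spatial layout.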

@drahardja @m4ra @cslinuxboy i think artificial intelligence is mostly making us aware of how little we still know about human intelligence*

(* something behavioural economics has been shedding light on for years, for those paying attention)

@drahardja Poor guy. This stuff is so toxic. ☹️

As @baldur said

"Self-experimenting with psychological hazards is always a bad idea"

https://www.baldurbjarnason.com/2025/trusting-your-own-judgement-on-ai/

Trusting your own judgement on ‘AI’ is a huge risk


@drahardja It's largely down to LLMs' extremely sycophantic nature - they're tuned mainly to maintain user retention over everything else.

There is a great video on YouTube covering this exact function of LLMs by Eddy Burback -
https://www.youtube.com/watch?v=VRjgNgJms3Q
ChatGPT made me delusional

@burger This was an incredible watch.
@drahardja Do filesystems attract vulnerable people?
@drahardja I don't think Kent was known for being well adjusted even before LLMs 😅
@drahardja I think being weird is a requirement for doing anything Linux-kernel-related 😂 The truth is that a lot of genius nerds just lack social competence; no wonder they think that way. @cslinuxboy
@drahardja having known some filesystem guys, some of them are not well adjusted at all