I often discuss in therapy the problems we face in #FOSS w/ #LLM-backed #AI (no surprise there).

My therapist told me today that one of her colleagues who was early in focusing on LGBTQIA+ therapy is ending their 20-year practice. In their top 3 reasons? AI.

My therapist also noted that she appreciates that I'm now one of her few patients who doesn't come to her with “Well, I asked AI & it said…” slop.

LLMs may have value for medical uses, but warmed-over Eliza does not a therapist make.

My therapist had not heard of Eliza. When I explained, she immediately pointed out the trick: mirroring what someone is saying is powerfully validating. It's a tool therapists use to make patients comfortable and feel heard, and it helps build rapport.

But again, it is not in itself therapy.

I suppose some might conclude we are better off with BigTech as therapists rather than Real Humans, & I suppose we aren't too far from USA health insurers refusing to cover human therapy.
We should resist.

@bkuhn Tangential, but striking to me: if a person takes the time to reflect something back, even imperfectly, the understanding is real. For AI it's the opposite.

@ptvirgo

Yup.

#BigTech is ready to sell us EaaS: #Empathy as a service.

My bigger worry is all these #LLM-backed #AI “therapy solutions” surely have nasty terms of service that will curtail the class action lawsuits that should follow when we figure out how much harm they've caused patients.

@bkuhn
Like everything else an LLM does, it superficially resembles what humans do, and non-experts may be unable to tell the difference directly, but it is ultimately hollow.

@bkuhn Slightly outdated, but I often think about this article from a couple of years back, likening LLMs to fortune telling (which is also basically unlicensed therapy).

https://softwarecrisis.dev/letters/llmentalist/

The details have shifted a bit since they wrote it, but the core idea about asking the audience to carry the real load while faking depth certainly hasn't.


@bkuhn
> My therapist told me today that one of her colleagues who was early in focusing on LGBTQIA+ therapy is ending their 20 year practice. In their top 3 reasons? #AI.

Can you (or did your therapist) elaborate on the sequence there? I can imagine various ways “AI” might be responsible for ending a therapist's career, but what was the connection in this case?

@bignose

See my next reply in the thread for more.

As I understood it, basically people are choosing to talk to bots and report it is helping, so they are going to human therapy less.

But it isn't therapy, it's a trick that LLMs pull on us all the time.

That's horrifying @bkuhn, given what we know of how LLMs operate and the absence of a mind there.

It's bad when programmers are abdicating responsibility. It's worse when an LLM demonstrates how awful Google search has become, by comparison. But seeking therapy from a random-sentence generator? I would not have imagined it.

That it has become so prevalent that a therapist decides their career may as well end? We are losing the institutions that can actually help people.