I often discuss in therapy the problems we face in #FOSS w/ #LLM-backed #AI (no surprise there).

My therapist told me today that one of her colleagues who was early in focusing on LGBTQIA+ therapy is ending their 20-year practice. Among their top 3 reasons? AI.

My therapist also noted that she appreciates that I'm now one of her few patients who doesn't come to her with “Well, I asked AI & it said…” slop.

LLMs may have value for medical uses, but warmed-over ELIZA does not a therapist make.

My therapist had not heard of ELIZA. When I explained it, she immediately pointed out the trick: mirroring what someone is saying is powerfully validating. It's a tool therapists use to make patients comfortable and feel heard, and it helps build rapport.

But again, it is not in itself therapy.

I suppose some might conclude we are better off with BigTech as therapists rather than Real Humans, & I suppose we aren't too far from USA health insurers refusing to cover human therapy.
We should resist.

@bkuhn Tangential, but striking to me: if a person takes the time to reflect something back, even imperfectly, the understanding is real. For AI it's the opposite.

@ptvirgo

Yup.

#BigTech is ready to sell us EaaS: #Empathy as a service.

My bigger worry is that all these #LLM-backed #AI “therapy solutions” surely have nasty terms of service that will curtail the class action lawsuits that should follow once we figure out how much harm they've caused patients.