Arghh - more problematic reporting, this time about robo-therapists.

https://www.theguardian.com/lifeandstyle/2024/mar/02/can-ai-chatbot-therapists-do-better-than-the-real-thing

A thread:
/1

‘He checks in on me more than my friends and family’: can AI therapists do better than the real thing?

It’s cheap, quick and available 24/7, but is a chatbot therapist really the right tool to tackle complex emotional needs?

The Guardian

For the first ~1500 words, exactly 0 people with expertise in psychotherapy are quoted.

/2

They talk up the idea that this is effective because people are more willing to open up to a "bot" than to a real person. BUT WHAT IS HAPPENING TO THAT DATA?

(This finally comes up 1000 words further down the article.)

/3

The only studies cited are co-authored by the companies selling this crap.

One of the supposedly positive findings is that people form a "therapeutic alliance" with the bots within "just five days". Not sure how that is measured, but also: what happens when the bot can't follow through on what a therapeutic alliance is supposed to be?

/4

When the author finally gets around to reporting on what **actual psychologists** have to say, it's introduced with "What do old-school psychoanalysts and therapists make of their new 'colleagues'?"

This frames the bots as human-analogous ("colleagues", ugh) and the actual humans with the relevant expertise as behind the times ("old-school").

/5

“What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?” YIKES

This is a completely unrealistic expectation of what goes into verifying that kind of note, and it sounds like a recipe for overburdening the medical workforce and for setting up errors.

/6

What if -- instead of seeing the process of creating clinical documentation as mere busywork -- the tech bros understood it as possibly part of the process of care?

What if -- instead of leading with the 'gee whiz AI' angle -- journalism in this space started with the privacy harms, with how tech companies somehow get away with pretending healthcare regulation doesn't apply to them, and with chatbots urging self-harm?

/fin

@emilymbender
When Joseph Weizenbaum created ELIZA in the 1960s, this nonsense was published:
Colby, K. M., Watt, J. B., & Gilbert, J. P. (1966). A Computer Method of Psychotherapy: Preliminary Communication. The Journal of Nervous and Mental Disease, 142, 148-152.

1/2

@emilymbender

“[…] several hundred patients an hour could be handled by a computer system designed for this purpose. The human therapist […] would become a much more efficient man since his efforts would no longer be limited to the one-to-one patient-therapist ratio as now exists.”

This prompted Weizenbaum to write "Computer Power and Human Reason", a book that is soooo important today.

2/2