Arghh - more problematic reporting, this time about robo-therapists.

https://www.theguardian.com/lifeandstyle/2024/mar/02/can-ai-chatbot-therapists-do-better-than-the-real-thing

A thread:
/1

‘He checks in on me more than my friends and family’: can AI therapists do better than the real thing?

It’s cheap, quick and available 24/7, but is a chatbot therapist really the right tool to tackle complex emotional needs?

The Guardian

For the first ~1500 words, exactly 0 people with expertise in psychotherapy are quoted.

/2

They talk up the idea that this is effective because people are more willing to open up to a "bot" than a real person. BUT WHAT IS HAPPENING TO THAT DATA?

(This finally comes up 1000 words further down the article.)

/3

The only studies cited are co-authored by the companies selling this crap.

One of the supposedly positive findings is that people form a "therapeutic alliance" with the bots within "just five days". Not sure how that is measured, but also: what happens when the bot can't follow through on what a therapeutic alliance is supposed to entail?

/4

When the author finally gets around to reporting on what **actual psychologists** have to say, it's introduced with "What do old-school psychoanalysts and therapists make of their new 'colleagues'?"

This frames the bots as human-analogous ("colleagues", ugh) and the actual humans with the relevant expertise as behind-the-times ("old-school").

/5

“What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?” YIKES

This is a completely unrealistic expectation about what goes into verifying that kind of note, and it sounds like a recipe for overburdening the medical workforce and setting up errors.

/6

What if -- instead of seeing the process of creating clinical documentation as mere busywork -- the tech bros understood it as possibly part of the process of care?

What if -- instead of leading with the 'gee whiz AI' angle -- journalism in this space started with privacy harms, the fact that somehow tech companies get away with pretending healthcare regulation doesn't apply to them, and chatbots urging self-harm?

/fin

p.s. We covered robo-therapy on Mystery AI Hype Theater 3000 back in September, with Hannah Zeavin:

https://www.buzzsprout.com/2126417/13544940-episode-13-beware-the-robo-therapist-feat-hannah-zeavin-june-8-2023

Episode 13: Beware The Robo-Therapist (feat. Hannah Zeavin), June 8 2023 - Mystery AI Hype Theater 3000

Emily and Alex talk to UC Berkeley scholar Hannah Zeavin about the case of the National Eating Disorders Association helpline, which tried to replace human volunteers with a chatbot--and why the datafication and automation of mental health service...

Buzzsprout
@emilymbender what if I just made people pay me to let them hear what they wanted AT SCALE?

@emilymbender
When Joseph Weizenbaum created ELIZA in the 1960s, this nonsense was published:
Colby, K. M., Watt, J. B., & Gilbert, J. P. (1966). A Computer Method of Psychotherapy: Preliminary Communication. The Journal of Nervous and Mental Disease, 142, 148–152.

1/2

@emilymbender

"[…] several hundred patients an hour could be handled by a computer system designed for this purpose. The human therapist […] would become a much more efficient man since his efforts would no longer be limited to the one-to-one patient-therapist ratio as now exists."

This prompted Weizenbaum to write "Computer Power and Human Reason", a book that is soooo important today.

2/2

@emilymbender What if, instead of building whole hospitals with lots of complicated machines and different licensure requirements for different people, we just train up a bunch of medical examiners and buy some freezers?

- a nurse who's worked in software for a long time

@emilymbender I am surprised this isn't illegal. Most jurisdictions regulate the practice of psychotherapy (and similar forms of therapy). It would surprise me if all the engineers on this project are licensed to practice. So... Isn't this practicing without a license?
@oborosaur @emilymbender Since when have tech bros cared about legality? Operating unauthorised taxi services, taking music from The Pirate Bay to seed their streaming service, just to not name two gigantic tech bro corporations.

@emilymbender

As a security guy, I'd say you are definitely asking all the right questions. I suspect you will get no good answers, or at least no positive ones. Any company creating these models will at minimum be using feedback to improve them, and almost all of them are essentially just data mining organizations anyway.

As a person who goes to therapy, this all sounds terrifying. BetterHelp and their ilk gig economy-ing therapy was bad enough.

@emilymbender I know of a team at a mental health tech company whose (male) leadership is considering using gen AI for a note generating tool and I and another (female) eng have _tried_ to impress upon them how harmful that could be and that they should absolutely not. They acted like they're listening but I can't help thinking that it's only for show and they're going to do it anyway. (Gender mentioned bc it'd be even more par for the course given that.)
@emilymbender ...yup, they're doing it anyway. I'd name them if I didn't fear retaliation, because they deserve to be shamed to hell for this.
@emilymbender "What if... the techbros understood..." I've found the problem.

@emilymbender journalism has already been replaced by AI, for all practical purposes. AI tech bros are banging their own drums in the media.

The remaining niches where manual journalism still prevails have little influence on public debate.