Arghh - more problematic reporting, this time about robo-therapists.
A thread:
/1
For the first ~1500 words, exactly 0 people with expertise in psychotherapy are quoted.
/2
They talk up the idea that this is effective because people are more willing to open up to a "bot" than to a real person. BUT WHAT IS HAPPENING TO THAT DATA?
(This finally comes up 1000 words further down the article.)
/3
The only studies cited are co-authored by the companies selling this crap.
One of the supposedly positive findings is that people form a "therapeutic alliance" with the bots within "just five days". Not sure how that is measured. And what happens when the bot can't follow through on what a therapeutic alliance is supposed to entail?
/4
When the author finally gets around to reporting on what **actual psychologists** have to say, it's introduced with "What do old-school psychoanalysts and therapists make of their new 'colleagues'?"
This frames the bots as human-analogous ("colleagues", ugh) and the actual humans with the relevant expertise as behind the times ("old school").
/5
“What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?” YIKES
This is a completely unrealistic expectation of what goes into verifying that kind of note, and it sounds like a recipe for overburdening the medical workforce and setting up errors.
/6
What if -- instead of seeing the process of creating clinical documentation as mere busywork -- the tech bros understood it as possibly part of the process of care?
What if -- instead of leading with the 'gee whiz AI' angle -- journalism in this space started with privacy harms, the fact that somehow tech companies get away with pretending healthcare regulation doesn't apply to them, and chatbots urging self-harm?
/fin
p.s. We covered robo-therapy on Mystery AI Hype Theater 3000 back in September, with Hannah Zeavin:
Emily and Alex talk to UC Berkeley scholar Hannah Zeavin about the case of the National Eating Disorders Association helpline, which tried to replace human volunteers with a chatbot--and why the datafication and automation of mental health service...
@emilymbender
When Joseph Weizenbaum created Eliza in the 1960s, this nonsense was published:
Colby, K. M., Watt, J. B., & Gilbert, J. P. (1966). A Computer Method of Psychotherapy: Preliminary Communication. The Journal of Nervous and Mental Disease, 142, 148–152.
1/2
“ […] several hundred patients an hour could be handled by a computer system designed for this purpose. The human therapist […] would become a much more efficient man since his efforts would no longer be limited to the one-to-one patient-therapist ratio as now exists.”
This prompted Weizenbaum to write "Computer Power and Human Reason", a book that is soooo important today.
2/2
@emilymbender What if, instead of building whole hospitals with lots of complicated machines and different licensure requirements for different people, we just train up a bunch of medical examiners and buy some freezers?
- a nurse who's worked in software for a long time
As a security guy, I'd say you are definitely asking all the right questions. I suspect you will get no good answers, or at least no positive ones. Any company creating these models will at minimum be using feedback to improve them, and almost all of them are essentially just data-mining organizations anyway.
As a person who goes to therapy, I find all of this terrifying. BetterHelp and their ilk gig-economy-ing therapy was bad enough.
@emilymbender journalism has already been replaced by AI, for all practical purposes. AI techbros are banging their own drums in the media.
The remaining niches where manual journalism still prevails have little influence on public debate.