Something a bit worrying to note about using AI in healthcare.

I’ve had two specialist appointments recently, both using AI to transcribe. Both sent report letters with inaccuracies about my diagnoses and past medical history. Even my GP was like, “huh, that directly contradicts what I put in the referrals.”

I have followed up on both and requested amendments (which were made), but if I hadn’t, these inaccuracies could have significantly damaged my ongoing care, further treatment, or insurance claims.

Human error has always been a factor, but both doctors were clearly using the AI software and assuming what it spat out was correct. They made no other notes during the appointments to cross-reference and double-check. This is how Very Bad Things can happen.

Stop Gen AI – Mutual Aid and Political Activism

@kimcrawley interesting initiative. Is there any section in particular you’d like me to focus on?

My plan going forward is to refuse the use of AI when recording medical consultations, to record my own notes (as a disability accessibility need), and to keep checking everything for inconsistencies and mistakes.

@bloodflowersburning

We have a mutual aid fund for people who lost their livelihoods, guides to avoiding Gen AI, upcoming support groups for chatbot addicts, all kinds of stuff.

Share our website. Join us. There are lots of things you can do.

Why just let Gen AI's horrors happen when you can join forces with us and push back?

https://stopgenai.com
