Something a bit worrying to note about using AI in healthcare.

I’ve had two specialist appointments recently, both using AI to transcribe. Both sent report letters with inaccuracies about my diagnoses and past medical history. Even my GP was like, “huh, that directly contradicts what I put in the referrals.”

I have followed up on both and requested amendments (which were done), but if I hadn’t, these inaccuracies could have significantly damaged ongoing care, further treatment, or insurance claims.

Human error has always been a factor, but both doctors were clearly using the AI software and assuming what it spat out was correct. They made no other notes during the appointments to cross-reference and double-check. This is how Very Bad Things can happen.

Two friends have told me in the last week that they’ve had similar issues. One had an incorrect diagnosis listed before they had a procedure done. The other’s notes listed a viral rather than a bacterial infection (although they did at least get the medication they needed). I feel like I’m being a pain in the bum going over everything and requesting corrections, but I’m seeing so many mistakes, to the point where any human reading them would immediately say “that doesn’t even make sense”. I worry for those who don’t check these things, or aren’t capable of doing so. Sure, using AI might save the docs 10 minutes per patient in the ER, but is that really worth the risks?
@bloodflowersburning Where are doctors using AI like this? And how did you check the transcripts? 🤔
@theron29 Genuine question or scepticism? I’m in Aotearoa NZ. The doctors were from two different specialist medical departments. Both used AI software to record the consultation and take notes. The report letters sent to my GP contained multiple discrepancies about the conditions discussed and those referenced in the GP referrals. If they had checked before sending, they would have realised mistakes had been made. My GP questioned the content, which was how I became aware. I can provide several specific examples but would rather not on a public forum to a stranger. However, both letters were re-assessed and sent again with corrections on request. Hope this helps.

@bloodflowersburning Genuine question (from central EU). (AI scepticism is expected to come a bit later on... 🙂 )
Doctors are not using AI here yet. I'd guess this tech has to be certified and tested before it's admitted into doctors' practice?

Although this does not seem to be the worst use case for AI (in terms of where and how it's used), your detailed explanation raises doubts about whether the tech is actually ready for anything *this* important... 😏

@theron29 Agreed. Not the worst-case scenario. For me personally it could have caused issues with further treatment and insurance reimbursement, and created confusion when needing ongoing care with other providers. So more an avoidable inconvenience and extra paperwork than a dangerous outcome in this instance. I hope that’s the worst possibility across the board, and that people check their notes carefully to catch any inconsistencies.

Mistakes in medical notes have always happened; unfortunately, they’re inevitable. Only time will tell whether this becomes more of an issue if/when AI transcription is used in medical settings more frequently, and whether it generates more errors than human note-taking. What I think is essential is that we retain a human buffer to assess factual accuracy, rather than simply assuming (hoping?) the software can do it better.

For more info, the software Heidi AI Scribe has been endorsed for use within Health NZ. https://www.tewhatuora.govt.nz/health-services-and-programmes/digital-health/generative-ai-and-large-language-models#naiaeag-endorsed-tools

@bloodflowersburning It's amazing NZ would authorise something worse than simple voice-to-text transcription for doctors' notes. But I'm old school: I still do searches and visit sites like Cleveland for medical guidance.
@Cass_m I suspect NZ has long been a bit of a guinea pig for these things. A good sample size with a manageable (isolated) but diverse population, plus a legal framework that often doesn’t fully protect its citizens, makes the data acquisition quite appealing to certain groups.
@bloodflowersburning I hope not. That would make it a hostile government.
@Cass_m it’s a bit of a long-perpetuated idea (myth?) that tech companies often conduct A/B testing on new products here. I can’t speak to the accuracy of that belief, and I’m not convinced it would make economic sense anyway, mostly because of the complexities of large-scale research sampling.

@bloodflowersburning

As an ex-healthcare person based in the UK: it’s being actively pushed all the time, and of course we always implicitly trust a machine, so the letters and notes are rarely checked.

The most tech I would use was dictation software for my letters, for speed, and even then they needed reading through after transcription as it didn’t recognise certain terms.

This stuff scares me, as decisions are made between professionals based on the contents of these communications, and I have heard so many tales like yours that we simply don’t know where the errors are creeping in.

The only way is to keep your own notes and records, but that can be difficult and exhausting.

Good luck.

@bloodflowersburning Ask GPT to write code for you. Same problem: it forgets or invents context, addresses a different problem, gets fixated on minutiae when the problem is structural. And if you even hint at what you think the Delphi 2.0 will fart out, you get "CONGRATULATIONS, YOU'RE A GENIUS" :/
@nzJayZee sadly these are all phrases I don’t have much frame of reference for, but I’m assuming the TL;DR is basically “AI hallucinates most of the content it spews out and generally makes things worse”?
@bloodflowersburning Yes. In my experience it does make things worse by inventing and obsessing over bad solutions (it keeps returning to the same shitty solution to a programming problem). What keeps me up at night is algorithmic cruelty when it's used in stuff like job applications or negotiating social services: "Computer Says No".

@nzJayZee quoted from this article on RNZ: “He said jobseekers were using AI to generate their applications, while employers were using AI to read them.”
The snake is eating its own tail.

https://www.rnz.co.nz/news/business/590746/jobseekers-and-advocates-disturbed-as-companies-screen-applications-with-ai

@bloodflowersburning I like the idea of applicant pushback. For something like NZ$40/month you can have all the "job application agents" via Claude. Totally agree about the snake eating its tail. We should be building community resilience instead of data centers, IMO.
@nzJayZee careful now, “community” seems to be a dirty word in some circles. Don’t be that radical lefty reminding people to be kind and care for others. 😉
@bloodflowersburning I'd never! The market knows best.
@bloodflowersburning When you help someone with their groceries/stairs/anything, or call an ambulance when someone's hurt, the most important thing shouldn't be "how am I compensated?". David Graeber called this (deliberately provocatively) "baseline communism": it's why two people working in a repair shop go "pass me the wrench," "OK," instead of entering into a wrench contract.
@bloodflowersburning (I know that's not how NZ works, and I feel sad about it)

@bloodflowersburning @nzJayZee

My original take was of a dog ingesting its own excrement.

@bloodflowersburning "your future doctor are studying using chat gpt". I don't think I've ever wanted more for something to collapse than AI.
@JD38 I think most students of all disciplines are now.
@bloodflowersburning yeah, I was just too lazy to write "multiple disciplines" 😅
@bloodflowersburning
Some of their lecturers are too.
The pharmacy school offers free medication reviews. The lecturer I saw used ChatGPT to summarise a paper. Isn't that what the abstract is for?
@bloodflowersburning I also see this as a HIPAA violation
@MamaLake Unfortunately HIPAA doesn't apply under New Zealand law. But I think it's covered by the Health Information Privacy Code 2020 (HIPC), as Health NZ have authorised the use of specific tools (Heidi AI Scribe) in healthcare.
Stop Gen AI – Mutual Aid and Political Activism

@kimcrawley interesting initiative. Is there any section in particular you’d like me to focus on?

My plan going forward is to refuse the use of AI when recording medical consultations, to record my own notes (as a disability accessibility need), and to keep checking everything for inconsistencies/mistakes.

@bloodflowersburning

We have a mutual aid fund for people who lost their livelihoods, guides to avoiding Gen AI, upcoming support groups for chatbot addicts, all kinds of stuff.

Share our website. Join us. There are lots of things you can do.

Why just let Gen AI's horrors happen, when you can join forces with us and push back?

https://stopgenai.com

@bloodflowersburning thanks for the warning. My last couple of appointments have used it too, and I assumed providers would be double-checking for errors, but maybe not. I'll be on the lookout. 🙃
@bloodflowersburning Yikes! That's really bad.
It's a good reminder to always check the notes on record after every appointment.
I think our GP gave us the option to decline use of the AI scribe. That should be the standard for everyone and part of the normal consent process.

@bloodflowersburning God, this was inevitable. It can't even narrate a reel on IG without entirely misreading whole words as others, even on official international accounts.
This is terribly dangerous. I hope you email your local government representative about this (cc in "other" party representation in your area so they don't ignore it) and also file your concern with your medical ombudsman.

Thank you for sharing this.

@bloodflowersburning Nothing good will come from AI.
@bloodflowersburning @AngelaPreston A large amount of the research on this has focused on how much time it saves physicians, and less on the effort required to find and correct the errors it inevitably generates, or on other unintended consequences. I feel like this article does a good job of highlighting the concerns: https://www.nature.com/articles/s41746-025-01895-6
Beyond human ears: navigating the uncharted risks of AI scribes in clinical practice - npj Digital Medicine

@bloodflowersburning don't you just love the Murikkkan health care system?

@bloodflowersburning

I'll check, but my understanding is that registered practitioners in Australia have to check that the auto-transcribed notes are accurate and then sign that they have done so.

So dangerous if errors are being found and practitioners are not immediately in trouble for having signed the notes as being correct.

#Responsibility

@bloodflowersburning This is going to be a clusterfuck, because the owners of the clinics (in the US, increasingly, that's venture capital) will start budgeting on the basis that the docs no longer need to spend time on notes. So there won't be time to review the AI notes for accuracy.

@bloodflowersburning @davidgerard

Here in the US, I do not like that whenever I have a telehealth appointment, I have to confirm that the provider is not using the "AI" text generator that the telehealth software vendor keeps pushing.

(As opposed to actual speech-to-text, which works better and does not potentially leak everything I say to a corporate server block somewhere.)

@bloodflowersburning 🙄 yeah, seeing a lot of this too. I worked in healthcare data (credentialing) and just lost my job to an AI thing that does it, with a team, supposedly. Hate the way all of this is going. It's weird and dangerous.
@jake4480 I’m really sorry to hear that.
I used to work for a translation service that supported disabled people. It’s being gradually nudged out by AI services that absolutely cannot do what a human does with accuracy. They frequently translate things incorrectly, or in ways that make the information more confusing. But it’s cheaper than human labour, eh.