Something a bit worrying to note about using AI in healthcare.
I’ve had two specialist appointments recently, both using AI to transcribe. Both sent report letters with inaccuracies about my diagnoses and past medical history. Even my GP was like, “huh, that directly contradicts what I put in the referrals.”
I have followed up on both and requested amendments (which were made), but if I hadn’t, these inaccuracies could have significantly damaged ongoing care, further treatment, or insurance claims.
Human error has always been a factor, but both doctors were clearly using the AI software and assuming whatever it spat out was correct. They made no other notes during the appointments to cross-reference and double-check. This is how Very Bad Things can happen.


