My work #EMR now has integrated #AI that summarizes a patient's chart whether I want it to or not. This week it gave me the wrong reason for admission, the wrong hospital course, and the wrong medications compared with the human-written discharge summary. Reviewing it and finding the errors took 3 minutes; documenting and reporting them took another 10.

Anchoring bias is real. What we read stays with us, truth or lie, and influences our decisions.

And I can't turn it off.

#LawsuitBait

@jeneralist I'm dreading the day I arrive at work to find genAI chewing through notes in my Epic instance like a plague of locusts. I know it will happen. There are plenty of wide-eyed acolytes in healthcare, especially in management roles, and no one has the guts to say "no".

I had myriad reasons not to trust charts pre-#AIslop (pre-EMR, even). An increasing number of clinicians use LLMs to take dictation and write notes (and how can patients truly give informed consent to that‽), adding hallucination to human cognitive bias and plain-old malpractice. But the only time I see healthcare orgs invest in systematic chart error-correction is when it comes to coding for billing.

#healthcareIT

@ozdreaming Last year I went to a European family med conference in Dublin. Someone saw my ID badge that said USA and came to talk to me about AI. She was based somewhere in eastern Europe. She couldn't keep up with writing notes for the number of patients she was seeing, and figured that since I came from the country with Silicon Valley I must already have AI to write notes. I wanted to explain that, because I used to be a programmer, I was the one in my office trying to hold the line against it.