The #EMR at my work now has integrated #AI that summarizes a patient's chart whether I want it to or not. This week it told me the wrong reason for admission, the wrong hospital course, and the wrong medications, compared against the human-written discharge summary. Reviewing it and finding the errors took 3 minutes; documenting and reporting them took another 10.

Anchoring bias exists. What we read stays with us, truth or lie, influencing decisions.

And I can't turn it off.

#LawsuitBait

@jeneralist that’s incredibly depressing to read. Can’t imagine how awful it must be to work “with”

@jeneralist Shit. Do you mind if I ask which EMR it is? I know Epic was working on a new AI feature but the other vendors probably were, too. Perfectly understandable if you don’t want to share.

Context: I work for a company that ingests EMR data for hospital clients, and want to make sure our data team knows this stuff has arrived.

@jmelesky Got it in one!

I don't know whether the AI summary is an Epic built-in or a 3rd party product, though.

@jeneralist Gotcha, thanks! Yeah, Epic has a complicated ecosystem, so not surprising it’s hard to tell where it originated.

Here’s hoping the people making the decisions recognize the limitations of these technologies soon.

@jeneralist

Don't worry, we'll just get another AI to develop defensive medicine and litigation risk management strategies.

@jeneralist And I'm sure the malpractice reviews will also be AI screened =\

@Terminhell @jeneralist

AI insurance agents will eliminate most of the potential malpractice litigants.

@2qx @jeneralist Sadly I think they already are, or wouldn't be surprised.
@jeneralist the coder queries are going to be off the hook
@jeneralist Back when I worked for Epic, a lot of days I felt like I was working at the command of hospital administrators, against the doctors and nurses. We were actively increasing the bureaucratic workload, so that someone sitting on ass all day could hit the golf course an hour earlier.

@log @jeneralist

The thing that got me about EMRs is their core purpose: to enable billing. Everything else is subordinated to that, everything.

Even the scheduling UX sucks….

@jeneralist
Wow, thanks for sharing that.

Look, I feel AI can be useful at times, but it is really far from what some people make it out to be. (It's not HAL 9000.)

I use AI as a sounding board to help me explore new ideas, but I challenge the AI's answers all the time.
I tell everyone to challenge everything an AI tells you.
Don't blindly trust AI

@jeneralist LLMs are entirely the wrong tech for summarization. an LLM emits "average text". a useful summary tells you what deviates from average
@jeneralist almost can't believe they attached the hallucinating lie machine to things that are this life and death. But then again.. I can
@jeneralist my doctor's records always have the wrong medications no matter how often I try to fix it. So stupid.
@jeneralist but there are a lot of billionaires counting on this being a thing, and they really don’t mind breaking a few eggs (killing some inconsequential people) to make it happen, so SUCK IT UP PRINCESS, DADDY NEEDS A NEW MEGAYACHT!

@jeneralist

"It's new, it's shiny, it's bitchen, and it's super tekkie - let's believe it!"

None of those are reasons to do so. Consider it one of your students, when asked to diagnose, at best.

@jeneralist

Yeah, I saw a new doctor this week and she asked if it was OK for her to use AI to record notes of our conversation. I told her "no". AI is in no sense ready to be used in a medical setting.

@jeneralist I'm dreading the day I arrive at work to find genAI chewing through notes in my Epic instance like a plague of locusts. I know it will happen. There are plenty of wide-eyed acolytes in healthcare, especially in management roles, and no one has the guts to say "no".

I had myriad reasons not to trust charts pre-#AIslop (pre-EMR, even). An increasing number of clinicians use LLMs to take dictation and write notes (and how can patients truly give informed consent to that‽), adding hallucination to human cognitive bias and plain-old malpractice. But the only time I see healthcare orgs invest in systematic chart error-correction is when it comes to coding for billing.

#healthcareIT

@ozdreaming Last year I went to a European family med conference in Dublin. Someone saw my ID badge that said USA and came to talk to me about AI. She was based somewhere in eastern Europe. She couldn't keep up with writing notes for the number of patients she was seeing, and thought that since I came from the country with Silicon Valley, I must already have AI to write notes. I wanted to explain that, because I used to be a programmer, I was the one in my office trying to hold out against it.

@jeneralist

I recently came from the doctor's, and she asked me if I minded her using an AI to summarize our appointment. I said I'd rather not, and that it would also save her time in not having to proofread the AI's interpretation of what went on.
She laughed 😄 & agreed.

@labbatt50 @jeneralist For me, at least, a technology that artificially pushes energy and resource usage up in the midst of a climate crisis we are ignoring (even in this discussion) has no legitimacy at all. Whose lives are being saved by this technology? Its ecological footprint already kills. And BTW, as a person teaching students who are already outsourcing their thinking to AI, I can say it is also destroying anything resembling teaching or learning to think.

@marion_grau @jeneralist

Quote:
"students that are outsourcing their thinking to AI" ... will never comprehend the true meaning of real commerce: not just profits, but providing actual services for clients.

@marion_grau @labbatt50 @jeneralist A balanced perspective on benefit/risk is a must, but it's complicated by the pressure amongst corpo-giants to win a race driven by profits. What's the rush, the threat? $$$$$$$$$ that fogs the brains of typical humans, causing them to lose sight of the big picture or distort their reality to such an extent that, in their minds, the risk is worth the gamble. The species, civilization, becomes de facto expendable.
@labbatt50 I let my veterinarian do it, because I wanted to see the process in action -- plus, I wasn't worried about describing my cat's social history on a recording that would get scraped by AI. Catnip isn't an illegal substance.
@jeneralist @labbatt50 I propose there may be a place for AI if its results are verified; those bugs can be fixed. But it must not be used for profits at the wholesale expense of jobs, unless we reconfigure our social structure to mitigate that, including especially a capping of wealth. Corporate and individual profits must be neutered/capped, eliminating excesses in individual hoarding of resources. A majority would have to agree and move on this quickly to save ourselves.
@labbatt50 @jeneralist In certain applications, in my experience, AI can be amazing, such as photo editing, especially for amateur photo editors. But for other applications, AI can't be trusted. My son, who works in aviation safety, says the citations of regulations it pulls up are frequently out of date or inaccurate. So if it's used, it still requires research to verify; its accuracy can't be trusted. Our tech geniuses say the bugs will be worked out. 1/2
@labbatt50 @jeneralist Additionally, the real threat of AI in our capitalist system: 75% of jobs, from a tech standpoint, were already at risk from automation. AI just exacerbates it. A great example is pilots, my background. We could easily have been replaced before AI came along. The automation tech exists to program airliners to taxi out, take off, fly the route, land, and park at a gate. Look at military drones. Now, whether customers would fly in them is a different question. 🙃
@jeneralist I got handed an AI care plan that confused hyperparathyroidism for hyperthyroidism 🤦‍♂️

@jeneralist
Last week I had to create an IT service request ticket. I summarized the issue into 3 sentences describing what happens and what we tried so far.

Our ticket system now has integrated #AI that summarized my 3-sentence summary into 3 sentences describing basically the same thing, but vaguely, leaving out important details. Based on this, the ticket was routed to the wrong team. A week later they replied, asking for details. I copy-pasted my 3-sentence summary. They routed the ticket to the right team, which then replied that the issue is known, work is in progress, and linked it to the issue's master ticket.

Instead of automating the work, the AI sabotaged it, made extra human interaction necessary, and used a shitload of energy on top. Well, at least it only cost us money and no human lives in the process...

@jeneralist and how many of your coworkers and nurses, overworked and underresourced, are just reading the summaries without verifying
@jeneralist I think everyone learns this when a loved one goes into the hospital, but for me most recently it was my grandmother: a family member absolutely has to be there, and be hyper vigilant. The staff simply does not have the time to care about details, and they don't communicate. Someone can swear up and down that they've written something in the chart and either they lied or the night shift didn't read it (or in your case, read a slop "summary" that left out what was important). You won't get good care without a patient advocate.
@aburka I was hospitalized for weeks as a child, and a parent was with me 24/7 except for bathroom and food breaks. The hospital set up beds for patients' parents in the solarium, so that they'd still be close even if they couldn't sleep in the room with their kids. (Long enough ago that the hospital had a sunroom!) Eventually I realized that was about more than just keeping me company.

@jeneralist Wikipedia on the anchoring effect, for anyone who doesn't feel like searching: https://en.wikipedia.org/wiki/Anchoring_effect

It's troubling that even when you're aware that something is unreliable AI slop, it still changes your behavior. The only safe move is to avoid seeing the slop in the first place, and that's increasingly difficult.

@jeneralist Gods that is mortifying. Best of luck fighting it. May end up saving more lives to go on strike (if you can) than to try and go along with it.