Drugs and medical devices have to be put through long periods of rigorous testing and multiple stages of trials before they can be used on patients in practice.
But not #genAI / #LLMs! Why not?! [1]
I posted an article a few months ago about how doctors are already using LLMs in their practice right now, without any testing or trials, on nothing but marketing hype and wishful thinking.
The LLM generates summaries of patient records so the doctor doesn't have to look at the records themselves. Then the doctor records the visit on a smartphone, and an LLM transcribes and summarizes that too. So the doctor is neither reading nor writing the actual medical records.
I predicted that people are going to be seriously harmed or die from the resulting inaccuracies (slop) in medical records and doctors' lack of knowledge of patient histories. And we won't find out about it until after it's been happening for a while [2].
Today I've seen several posts #onHere about a similar situation in the social work field. The link ICYMI:
https://www.theguardian.com/education/2026/feb/11/ai-tools-potentially-harmful-errors-social-work
The sad thing is that properly developed actual AI (what "AI" meant before the tech broligarchs came along and appropriated the term for their text generators) can do great things, like detecting cancer in medical scans and finding useful new drugs.
-----
[1] you guessed it: 'profit'
[2] no room to go into all the reasons here, just think about it
