AI Eroded Doctors' Ability to Spot Cancer Within Months in Study
It sounds like this is about when they stopped using AI.
If they do better with it than without it, why optimize how good they are without it? Like, I know how to do math, by hand. But I also own a calculator. If the speed and accuracy of my multiplication is life-and-death for worried families, maybe I should use the calculator.
If you're doing it once, then that's fine. But if you have to do it loads of times, and things keep getting more complex, you'll find that you won't be able to use the tools correctly anymore or spot their mistakes.
AI raises your skill level a bit, but also stunts your growth if used irresponsibly. And that growth may be necessary later on, especially if you're still a junior in the field.
Should urologists still train to detect diabetes by taste? We wouldn't want the complexity of modern medicine to stunt their growth. These quacks can't sniff piss with nearly the accuracy of Victorian doctors.
When a tool gets good enough, not using it is irresponsible. Sawing lumber by hand is a waste of time. Farmers today can't use scythes worth a damn. Programming in assembly is frivolous.
At what point do we stop practicing without the tool? How big can the difference be, and still be totally optional? It's not like these doctors lost or lacked the fundamentals. They're just rusty at doing things the old way. If the new way is simply better, good, that's progress.
It's true that if a tool is objectively better, then it makes little sense not to use it.
But LLMs aren't that good yet. There's a reason senior developers are complaining about vibecoding juniors; their code quality is often just bad. And when pressed, they often can't justify why their code is a certain way.
As long as experienced developers are able to do proper code review, the quality control is maintained. But a vibecoding developer isn't good at reviewing. And code review is an absolutely essential skill to have.
I see this at my company too. There's a handful of junior devs who have managed to be fairly productive with LLMs. And to the LLM's credit, the code is better than it was without it. But when I do code review on their stuff and ask them to explain something, I often get a nonsensical, AI-generated response. And that is a problem. These devs also don't do much code review, if any, and when they do they often have very minor comments or none at all. Some just don't do any reviews, stating they're not confident approving code (which is honest, but also problematic of course).
I don't mind a junior dev, or any dev for that matter, using an LLM as an assistant. I do mind an LLM masquerading as a developer, using a junior dev as a meat puppet, if you get what I mean.
Weโre not talking about LLMs.
These doctors didn't ask ChatGPT "does this look like cancer." We're talking about domain-specific medical tools.
Are you sure? Check.
Where you jumped in is me, pointing out, repeatedly, that LLMs and IT have nothing to do with the actual article. Y'know, the doctors I keep mentioning? They're not decorative.
Hmm, seems I replied to the wrong root comment.
Regardless, the overall point still stands. These tools are great for assistance, but relying on them completely can cause problems. Even these tumor-spotting ML tools aren't perfect, and they too miss things. Combined with a doctor's skill this is fine, but if one begins replacing the other, the net benefit will be lower.