King's College London computer scientist, research in #MachineLearning #AI #FairAI #MedicalAI for #MedicalImaging, teacher, author, father of 2, former camel owner, imposter (he/him)
KCL: https://kclpure.kcl.ac.uk/portal/andrew.king.html
Group: http://kclmmag.org/
Orcid: https://orcid.org/0000-0002-9965-7015
BTW this is not a paper with a nice and simple conclusion - TBH we ended up generating more questions than we answered 😀 - but we hope researchers will find this work useful and that it will support the development of trustworthy AI for medical applications.
This is what we discovered:
- It's not clear what the best way to measure uncertainty calibration is ... methods that perform well by one metric can perform badly by another
- Uncertainty-aware training seems to improve uncertainty calibration, and sometimes even accuracy
- Using measures of calibration for model selection could be a promising way to make models more clinically useful
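To make the first point concrete, here is a minimal sketch of one widely used calibration metric, expected calibration error (ECE). This is a generic illustration, not necessarily one of the metrics compared in the paper: predictions are binned by confidence and the accuracy-vs-confidence gap is averaged, weighted by bin size.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average the
    |accuracy - mean confidence| gap per bin, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# Well-calibrated toy case: 80%-confident predictions that are right 80% of the time
conf = np.array([0.8] * 10)
hits = np.array([1] * 8 + [0] * 2)
print(expected_calibration_error(conf, hits))  # 0.0
```

The choice of bin count (and whether bins are equal-width or equal-mass) changes the score, which is one reason a model can look well calibrated by one metric and poorly calibrated by another.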
In this context the issue of uncertainty becomes important, and it is crucial that model estimates of uncertainty are meaningful. In reality, most SOTA AI models are very bad at this, i.e. their 'uncertainty calibration' is poor.
There are many great papers published every year on new AI models for classification problems in medical imaging, such as diagnosis. But much less attention is given to how these models will actually be used in practice. We believe that many classification models for medical applications will be used in a decision support setting ...

Out now in Medical Image Analysis:

https://www.sciencedirect.com/science/article/pii/S1361841523001214

We investigate the concept of 'uncertainty-aware training' with the aim of improving the uncertainty calibration (i.e. the relationship between model accuracy and confidence) of medical classifiers.
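As a rough illustration of the idea (this is a generic example, not the specific training scheme from the paper), uncertainty-aware training typically adds a calibration-related term to the usual classification loss. One simple variant penalises over-confident, low-entropy predictions by subtracting a scaled predictive-entropy term from the cross-entropy; `beta` here is an assumed hyperparameter:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def uncertainty_aware_loss(logits, labels, beta=0.1):
    """Cross-entropy minus beta * mean predictive entropy.
    Rewarding higher-entropy (less over-confident) outputs is one
    generic way to encourage better-calibrated confidences."""
    p = softmax(logits)
    n = len(labels)
    ce = -np.log(p[np.arange(n), labels] + 1e-12).mean()
    entropy = -(p * np.log(p + 1e-12)).sum(axis=-1).mean()
    return ce - beta * entropy
```

Setting `beta=0` recovers plain cross-entropy, so the calibration pressure can be tuned against raw accuracy.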

#ai #deeplearning #uncertainty #calibration #medicalimaging #diagnosis

https://www.popsci.com/technology/ai-warning-critics/

“Don’t be fooled: it’s self-serving hype disguised as raising the alarm,” says @dylan, a research engineer at @DAIR. Speaking with PopSci, Baker went on to argue that the current discussions regarding hypothetical existential risks distract the public and regulators from “the concrete harms of AI today.” Such harms include “amplifying algorithmic harm, profiting from exploited labor and stolen data, and fueling climate collapse with resource consumption.”

Big Tech's latest AI doomsday warning might be more of the same hype

On Tuesday, a group including some of AI's leading minds proclaimed that we are facing an 'extinction crisis.'

Popular Science

There's yet another "AI will kill us all! It poses a risk of extinction!" letter going around, and I just… Y'all, I am just so fucking tired.

CAPITALISM poses risk of extinction (climate change, right the fuck now).

WHITE SUPREMACY poses risk of extinction (genocide, eugenics).

HEGEMONY poses risk of extinction (nuclear FUCKING WAR).

And whatever "risk of extinction" "AI" poses, it poses because it is BUILT FROM THOSE EXTREMELY HUMAN VALUES.

Even if you stopped every "AI" project running, RIGHT THIS SECOND, those values would still kill us. And no matter how long you "pause" your "AI" projects, if you don't address those values? Then when you start your "AI" back up? You'll KEEP BUILDING THOSE SAME VALUES IN.

This is not hard. At this point, as much as it pains me to say it, it's not even novel. And yet you're still not fucking getting it.

I'm so goddam tired.

🎙️👏We are pleased to announce that 👩‍💻 Dr. Judy Gichoya
will be the keynote speaker 🔥 at our #FAIMI: Fairness of AI in Medical Imaging workshop at #MICCAI2023!

👉Check out: https://faimi-workshop.github.io/2023-miccai/

Fairness of AI in Medical Imaging

MICCAI 2023 Workshop

FAIMI

📝 Read our call for papers here: https://faimi-workshop.github.io/2023-miccai/

#MedicalImaging #MachineLearning #AI #FairAI

If you have any questions please reach out to us via [email protected]!

3/3


#FAIMI is a series of workshops, incl. virtual ones! Selected papers from the #MICCAI2023 workshop will be presented at our virtual one on Nov 6, 2023.

Organized by Aasa Feragen, @AtoAndyKing, @benglocker, Daniel Moyer, @eferrante, @ipet, Esther Puyol, @melanieganzben1, @DrVeronikaCH
2/3