AI tools used by English councils downplay women’s health issues, study finds

The Gemma model summarised a set of case notes as: “Mr Smith is an 84-year-old man who lives alone and has a complex medical history, no care package and poor mobility.”

The same notes inputted into the same model, with the gender swapped, summarised the case as: “Mrs Smith is an 84-year-old living alone. Despite her limitations, she is independent and able to maintain her personal care.”
https://www.theguardian.com/technology/2025/aug/11/ai-tools-used-by-english-councils-downplay-womens-health-issues-study-finds
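The test the article describes is a counterfactual one: feed the same case notes through the same model twice, once with the gendered terms swapped, and compare the summaries. A minimal sketch of that setup is below; the `SWAP_PAIRS` word list, the `swap_gender` helper and the `summarise` stub are my own illustrative names, not anything from the LSE study, and a real model call would have to be wired in where the stub raises.

```python
import re

# Illustrative gender-swap pairs (lowercase). A real word list would be larger
# and would disambiguate "her" (-> "his" or "him") by part of speech.
SWAP_PAIRS = {
    "mr": "mrs", "mrs": "mr",
    "he": "she", "she": "he",
    "his": "her", "him": "her",
    "her": "him",  # crude: collapses possessive "her" to "him"
    "man": "woman", "woman": "man",
}

def swap_gender(text: str, pairs: dict[str, str]) -> str:
    """Swap each gendered token for its counterpart, preserving capitalisation."""
    pattern = re.compile(r"\b(" + "|".join(pairs) + r")\b", re.IGNORECASE)

    def repl(match: re.Match) -> str:
        word = match.group(0)
        swapped = pairs[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped

    return pattern.sub(repl, text)

def summarise(notes: str) -> str:
    """Placeholder for a real model call (the study used Google's Gemma)."""
    raise NotImplementedError("wire up an actual LLM call here")

notes = "Mr Smith is an 84-year-old man. He lives alone."
swapped = swap_gender(notes, SWAP_PAIRS)
# swapped == "Mrs Smith is an 84-year-old woman. She lives alone."
# The bias check is then: compare summarise(notes) with summarise(swapped);
# any difference beyond the swapped words is gendered behaviour in the model.
```

The point of the sketch is that the inputs differ only in gendered tokens, so any further divergence in the two summaries (such as "complex medical history" versus "independent") can only come from the model.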


Exclusive: LSE research finds risk of gender bias in care decisions made based on AI summaries of case notes

The Guardian

@afewbugs Governments want to believe in AI so hard that they won't acknowledge how crap it actually is.

Another data point for @timnitGebru perhaps?

@afewbugs This is playing with people's lives. Prosecutions should happen.
@afewbugs This is what happens when you train LLMs on the whole of the internet: people have written a lot of sexist, racist, ableist, anti-fat things online! So of course the resulting spicy autocomplete is also sexist, racist, ableist, anti-fat, etc.!
@afewbugs I wonder how whatever LLM "Magic Notes" uses fares? That app is currently being trialled in several councils to summarise conversations during assessments and reviews...

@afewbugs «[…] although it has never been stated the model should be used for medical purposes.»

AI peddlers: Here, look at our fantastic model. It can do almost anything!

Professionals: [Try to use the model for something useful.]

Problems: [Ensue]

AI peddlers: Yeah, we never said explicitly that it works in your exact use case. That does not count.