Tomorrow (Wednesday), @NPR's 1A is talking "A.I." in healthcare.
The most persistent problems of "A.I." in medical care are the same problems "A.I." has everywhere else: prejudicial biases in the training data that aren't counterweighted in the model design.
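To make "counterweighted" a little more concrete: one common (and, on its own, insufficient) mitigation is to reweight training examples so under-represented groups aren't simply drowned out by the majority. This is a minimal sketch of that idea, not anyone's actual medical system; the data, the `group` column, and the classifier choice are all illustrative assumptions.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one feature, an outcome label, and a
# demographic group column. All values here are made up for illustration.
df = pd.DataFrame({
    "group":   ["a", "a", "a", "a", "b", "b"],
    "feat":    [0.1, 0.4, 0.35, 0.8, 0.2, 0.9],
    "outcome": [0,   0,   1,    1,   0,   1],
})

# Inverse-frequency weights: examples from smaller groups count for more,
# so the model can't minimize its loss by fitting only the majority group.
group_counts = df["group"].value_counts()
n_groups = df["group"].nunique()
weights = df["group"].map(lambda g: len(df) / (group_counts[g] * n_groups))

model = LogisticRegression()
model.fit(df[["feat"]], df["outcome"], sample_weight=weights)
```

Note the limit of this kind of fix: reweighting only addresses who is represented, not whether the labels themselves encode prejudiced judgments, which is the deeper problem described here.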
As I and others have noted many times before, prejudicial beliefs and assumptions about disabled and neurodivergent people, BIPOC populations, women, and LGBTQIA+ people have formed the basis of the data used to train "A.I." health systems, and we know those prejudices get reproduced.
And frankly, when it comes to the idea of adding GPT to healthcare, the prospect of being subjected to a system built out of & trained on large rafts of the "natural language" people on the internet tend to use when talking about the needs of BIPOC, Disabled, LGBTQ, and Femme folx sounds utterly horrifying.
Automated assessment tools read women's and Black people's pain incorrectly; disability assessment systems are improperly calibrated based on old presuppositions about disabled lived experience, resulting in negative health outcomes and negative impacts on benefits allocations; and "A.I." mental health tools can have higher proportions of adverse outcomes for neurodivergent populations.
Now imagine integrating a) that system which reliably told the ethnicity of patients from X-rays, b) that system which consistently misdiagnosed signs of illness in Black patients, and c) a GPT chat system, one trained on examples of how doctors tend to talk about Black patients, to relay diagnoses.
Horrifying.
Now, these realities can be addressed, but only by bringing large groups of the intended subjects in as consenting partners in the design, training, and use of these systems. And that idea (placing marginalized populations at the center and helm of "A.I." research directions) is still applied too infrequently, inconsistently, or halfheartedly.
Until we fix that, nothing like equitable healthcare will come from just slapping some "A.I." on it.