AI has accused autistic students of using AI to write papers because of their writing style. We can’t base the expulsion of a person from a platform on an estimate by an AI.
Yes, it could be used to screen for potential cases.
Though a study showed that having an AI result strongly biases subsequent human reviewers' judgements.
The study was, from memory, of AI evaluating cancers on X-rays, with human radiologists reviewing afterwards.
I see no reason that similar poisoning of independent human expert judgement would not happen in the situation at hand.
@andyjennings @Susan60
Sorry for the absence of any citation Andrew.
I should have taken more time and found it before posting.
The study appeared on Mastodon perhaps six weeks ago.
Search engine not currently helping me find the study. Will try again later.
There is an article (apparently without a direct citation of the journal article 😞) on Euronews.
https://www.euronews.com/health/2024/11/21/study-cautions-radiologists-not-to-over-rely-on-ai-tools-for-diagnosis?utm_source=flipboard&utm_medium=activitypub
The original article is, I think (but I haven't checked),
That’s a good point. It would be good to do studies where the humans weren’t aware of the AI result first, or had been given varying results. Maybe that’s what they did.
@Susan60 @andyjennings
There are plenty of other studies that show increased detection rates when AI is used.
As with many things, it seems that how it is used determines the benefits.