In the same way that "the cloud is just someone else's computer", "AI decisions are just someone else's prejudices".
@Loukas That's what labelling is about - AI decisions (e.g. about whether a credit card transaction looks dodgy) are based on the "prejudice" of actual factual determinations. You might regard the victim of a fraud who reports it as a fraud to be prejudiced against fraud, but that's about as far as you can go with "prejudice".
@TimWardCam @Loukas Almost -- AI decisions are based on the "prejudice" of determinations the labelers made, whether they're factual or not.
@bigfishrunning @Loukas "Prejudice" is about "pre-judging" something that hasn't happened yet. Labelling is about recording something that has happened.
@TimWardCam @Loukas does that mean that every labeller is an unbiased observer, and only labels things factually? Seems kind of incredible to me.
@bigfishrunning @Loukas I was trained as a mathematician, and it only takes one counter-example to disprove "every", so I never (ha ha!) claim "every".

@bigfishrunning @Loukas But fraud detection is about stats, not about "every". You're trying to do two things:

(a) be good enough at detecting actual fraud to get regulatory approval that you're trying hard enough

(b) keep false positives, and thus the cost of call centres and pissed-off customers, down.

So it's target-driven in both directions.
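The tradeoff between (a) and (b) is essentially a score-threshold sweep. A minimal sketch with invented scores and labels (nothing here comes from any real fraud system):

```python
# Hypothetical (score, is_fraud) pairs standing in for a model's output
# on labelled transactions. All values are invented for illustration.
transactions = [
    (0.95, True), (0.80, True), (0.70, False), (0.60, True),
    (0.40, False), (0.30, False), (0.20, True), (0.10, False),
]

def rates(threshold):
    """Return (detection rate, false-positive rate) at a given threshold."""
    flagged = [(s, f) for s, f in transactions if s >= threshold]
    frauds = sum(1 for _, f in transactions if f)
    legit = len(transactions) - frauds
    tp = sum(1 for _, f in flagged if f)       # frauds correctly flagged
    fp = len(flagged) - tp                     # legit customers wrongly flagged
    return tp / frauds, fp / legit

for t in (0.25, 0.50, 0.75):
    det, fpr = rates(t)
    print(f"threshold {t}: detection {det:.0%}, false positives {fpr:.0%}")
```

Raising the threshold cuts call-centre costs but misses more fraud; lowering it does the opposite, hence targets pulling in both directions.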

@TimWardCam @Loukas Fraud detection is only a very small part of the pie; the original thesis is that the output of a given AI system carries the biases (perhaps overzealously called "prejudices") of the input data and of the imperfect-by-definition human labelers. I think that thesis holds true.
@bigfishrunning @TimWardCam @Loukas this reminded me of this excellent (and scary) piece on welfare fraud risk calculation in Rotterdam https://www.wired.com/story/welfare-state-algorithms/
@bigfishrunning @TimWardCam @Loukas
I love how Google DeepMind tried to compensate for those biases and ended up overdoing it in their Gemini model
https://youtu.be/Fr6Teh_ox-8
@TimWardCam @bigfishrunning @Loukas But you don’t care if one subset of your customers gets unfairly hit by your algorithms, you only care about the mean. What is the difference between this and prejudice?
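The point about the mean hiding per-group harm can be shown with invented numbers (the groups and counts below are hypothetical, not from any real system):

```python
# Invented counts: legitimate customers wrongly flagged, per group.
false_positives = {"group A": 2, "group B": 20}
legit_customers = {"group A": 1000, "group B": 100}

# The aggregate rate looks fine...
overall_fpr = sum(false_positives.values()) / sum(legit_customers.values())
print(f"overall false-positive rate: {overall_fpr:.1%}")   # 2.0%

# ...while one subgroup is hit two orders of magnitude harder.
for g in false_positives:
    print(f"{g}: {false_positives[g] / legit_customers[g]:.1%}")
```

An optimiser judged only on the overall 2% has no incentive to notice that group B sits at 20%, which is exactly the disparity being described here.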
@ahltorp @bigfishrunning @Loukas I'm not aware of that happening. But I work on the nuts and bolts of the engineering, I'm not a data scientist.
@TimWardCam @ahltorp @bigfishrunning it's well-documented and I encourage you to inform yourself about this aspect of your job.

@TimWardCam You're using the etymological fallacy[1] to (try to) claim that the word "prejudice" doesn't or can't mean things like "an adverse opinion or leaning formed without just grounds" or "an irrational attitude of hostility directed against an individual, a group, [or] a race". But it can[2], and it's *quite clear* that @Loukas and @bigfishrunning meant it that way.

1: https://en.wikipedia.org/wiki/Etymological_fallacy
2: https://www.merriam-webster.com/dictionary/prejudice, sense 2
