Lots of folks are warning that overreliance on AI can lead to bias.

But that can sound a bit abstract, so let's just leave these examples here.

#CHATGPT #AI #bias

What's happening here is two things.

First, an assumption that if information is there, it must be relevant to the question. Often that's the case, but sometimes it's not! The AI is bad at determining this.

Second, once it's decided the information is relevant, it assigns scores to the properties to try to fit the question, and the relative scores are (opaquely) based on its training data, since that's usually what you want. But here that just reflects the input bias (that is, existing social biases) back at you.
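To make that scoring point concrete, here's a toy, hypothetical sketch (Python, with a made-up six-sentence "training set"; real LLMs are vastly more complex than frequency counting) of how completions ranked by training frequency reflect whatever skew the training text had:

```python
# Toy illustration (NOT how any real LLM works internally): completions
# are scored by raw frequency in the training text, so a skewed corpus
# produces skewed "top" answers.
from collections import Counter

# Tiny made-up "training set" with a deliberately skewed association.
training_sentences = [
    "the doctor said he would call",
    "the doctor said he was busy",
    "the doctor said she would call",
    "the nurse said she was busy",
    "the nurse said she would call",
    "the nurse said he was busy",
]

def completion_scores(w1: str, w2: str) -> Counter:
    """Count which word followed the bigram (w1, w2) in the training text."""
    scores = Counter()
    for sentence in training_sentences:
        words = sentence.split()
        for a, b, c in zip(words, words[1:], words[2:]):
            if (a, b) == (w1, w2):
                scores[c] += 1
    return scores

print(completion_scores("doctor", "said"))  # Counter({'he': 2, 'she': 1})
print(completion_scores("nurse", "said"))   # Counter({'she': 2, 'he': 1})
```

The "best" completion is just whatever the corpus said most often, which is exactly how input bias gets reflected back.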

It's one of those things that's sort of true and not true at the same time.

The AI isn't /inherently/ biased. The code itself doesn't act in a way that intentionally encodes obnoxious biases. The programmers didn't do this on purpose.

But the *training set* introduces biases, because it's based on vast amounts of human social experience and *that* is systemically biased.

So anyway, be v careful about delegating major decisions to AI or treating it as "unbiased" because it's code.

A final point: these are particularly obvious examples, but real life ones can be much more insidious.

There have been cases where AIs have done cool/horrifying things to circumvent anti-bias measures.

One great example was an AI that was "blinded" to race when making life-changing decisions.

Hooray! We fixed the racism problem!

But alas...

the AI was smart enough to synthesize a proxy for race to implement racist decisions.

That's because, due to underlying racism, race correlated well with the variable it was trying to match in the training data. After being "blinded" to race, the AI discovered that postcode (in this case acting as a proxy for race) was a great correlating factor for the system it was trying to replace.

And it didn't *tell* anyone it was doing this. It just derived it itself.

Just like human kids can learn to hate or be biased by growing up in a biased world, AIs can learn to be biased or hateful too by growing up in a biased training set.

And since AIs need vast quantities of data to learn from, they tend to learn from datasets that can't be sanitized of the human biases they encode.

So be careful delegating too much to them in critical decisions affecting humans. Often they are a mirror to society, and can reflect both its best and its worst.

By way of example of how AIs can use secret proxy variables while considering themselves unbiased, think about what "low crime postcode" might be a proxy for here.
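Here's a minimal, hypothetical sketch of that proxy effect (Python, entirely synthetic data with made-up numbers; not any real system's code). The model never sees race, but because postcode correlates with race and the historical labels were biased, the "blinded" model's scores still split along racial lines:

```python
# Synthetic demonstration of proxy discrimination: a model "blinded" to
# race rediscovers it via a correlated feature (postcode).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute -- NEVER shown to the model.
race = rng.integers(0, 2, n)

# Postcode correlates strongly with race (think housing segregation):
# group 1 mostly lives in postcodes 0-4, group 0 in postcodes 5-9.
postcode = np.where(race == 1,
                    rng.integers(0, 5, n),
                    rng.integers(5, 10, n))

# A genuinely relevant feature, independent of race.
prior_offenses = rng.poisson(1.0, n)

# Historical labels are biased: at the same offense count, group 1 was
# flagged "high risk" more often.
p = 1 / (1 + np.exp(-(0.8 * prior_offenses + 1.5 * race - 2.0)))
label = rng.random(n) < p

# Train WITHOUT the race column -- the "blinded" model.
X = np.column_stack([prior_offenses, postcode])
model = LogisticRegression(max_iter=1000).fit(X, label)

# The blinded model's risk scores still differ sharply by race,
# because postcode acts as a proxy.
scores = model.predict_proba(X)[:, 1]
print("mean risk score, group 0:", scores[race == 0].mean())
print("mean risk score, group 1:", scores[race == 1].mean())
```

Nobody told the model to use postcode as a stand-in for race; it just fell out of fitting the biased labels, which is exactly the "secret proxy variable" problem.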

@Pwnallthethings credit score is an indicator of being poor, not a predictor of recidivism. If you're a white-collar criminal who will restart your fraudulent business, you'll be sent back into the world, but if you're in prison because you attempted manslaughter of your pimp, you won't be paroled.

Either way, it's not like prison is an effective rehabilitation enterprise, at least not in the US.

@TheSean @Pwnallthethings Let's not forget that credit scores are busted from the start, since it's all based on paying off debts, not having a positive amount of money.
@TheSean @Pwnallthethings For example: My credit score is pretty bad despite me not actually owing anyone money for several years now. :l
@Cher @Pwnallthethings that's a good point, so a low credit score wouldn't just flag impoverished individuals but also anyone who is cash-only/debt-free. The guy who is 'house rich' and barely covering the minimums but always pays them would have a better credit score, yet would have more reason to fall back into white-collar crime than the parolee who is unlikely to ever be in the scenario that led to their crime again.