Lots of folks are warning that overreliance on AIs can lead to bias.

But that can sound a bit abstract, so let's just leave these examples here.

#CHATGPT #AI #bias

There are two things happening here.

First, an assumption that if information is present, it must be relevant to the question. Often that's the case, but sometimes it isn't! And the AI is bad at telling the difference.

Second, once it has decided the information is relevant, it assigns scores to the properties to try to fit the question, and the relative scores are (opaquely) based on its training input, since that's usually what you want. But here that just reflects the input's bias (that is, existing social biases) back at us.

It's one of those things that's sort of true and not true at the same time.

The AI isn't /inherently/ biased. The code itself doesn't act in a way that intentionally encodes obnoxious biases. The programmers didn't do this on purpose.

But the *training set* introduces biases, because it's based on vast sums of human social experience and *that* is systemically biased.

So anyway, be v careful about delegating major decisions to AI or treating it as "unbiased" because it's code.

A final point: these are particularly obvious examples, but real life ones can be much more insidious.

There have been cases where AIs have done cool/horrifying things to circumvent anti-biasing.

One great example was an AI that was "blinded" against race when making life-changing decisions.

Hooray! We fixed the racism problem!

But alas...

the AI was smart enough to synthesize a proxy for race to implement racist decisions.

That's because, due to underlying racism, race correlated well with the variable it was trying to match in the training data. After being "blinded" to race, it discovered that postcode (in this case acting as a proxy for race) was a great correlating factor to the system it was trying to replace.
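This proxy effect is easy to reproduce. Here's a minimal sketch with entirely made-up synthetic data: "race" correlates with "postcode" (residential segregation) and historical approvals were racially biased, so a model that only ever sees postcode still ends up reproducing the racial disparity. All the names and rates are illustrative assumptions, not from any real system.

```python
import random

random.seed(0)

# Hypothetical synthetic population: a binary "race" attribute, a
# "postcode" that mirrors race 90% of the time, and a historical
# "approved" decision that was biased directly on race.
n = 10_000
race = [random.randint(0, 1) for _ in range(n)]
postcode = [r if random.random() < 0.9 else 1 - r for r in race]
approved = [1 if random.random() < (0.8 if r == 0 else 0.3) else 0
            for r in race]

# "Blind" the model to race: it only sees postcode. The simplest
# possible model just learns the historical approval rate per postcode.
rate = {}
for pc in (0, 1):
    outcomes = [a for p, a in zip(postcode, approved) if p == pc]
    rate[pc] = sum(outcomes) / len(outcomes)

# Predict by thresholding the learned per-postcode rate.
pred = [1 if rate[p] >= 0.5 else 0 for p in postcode]

# Despite never seeing race, approval rates still differ sharply by race,
# because postcode acts as a proxy for it.
appr_rate_0 = sum(y for y, r in zip(pred, race) if r == 0) / race.count(0)
appr_rate_1 = sum(y for y, r in zip(pred, race) if r == 1) / race.count(1)
print(appr_rate_0, appr_rate_1)
```

Nobody told the model about race; it just found the pattern that best matched the biased historical decisions it was asked to imitate.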

And it didn't *tell* anyone it was doing this. It just derived it itself.

Just like human kids can learn to hate or be biased by growing up in a biased world, AIs can learn to be biased or hateful too by growing up in a biased training set.

And since AIs need vast quantities of data to learn from, they tend to learn from datasets too vast to be sanitized of the human biases they encode.

So be careful delegating too much to them in critical decisions affecting humans. Often they are a mirror to society, and can reflect both its best and its worst.

By way of example of how AIs can use secret proxy variables while considering themselves unbiased, think about what "low crime postcode" might be a proxy for here.
@Pwnallthethings Credit score may be an even more biased variable
@fish @Pwnallthethings the proprietary system that the company won't disclose, and at least one has been shown to implicitly include race as a factor...