So it looks like both ChatGPT and Bard contain the same kinds of gendered bias people have been trying to warn you about for at least 8 years, since word2vec was cutting edge.

Here's a screenshot of an interaction between me and Google Bard, in which Bard displays the gendered prejudicial bias of associating "doctor" with "he" and "nurse" with "she."

Again, this is… This is old, basic shit, y'all. People have been warning you about this since GloVe. What are you DOING??

Or, more to the point, why are you NOT DOING what you know you NEED to do?
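[Editor's note: for readers who haven't seen the word2vec/GloVe-era findings being referenced, here is a minimal sketch of the classic analogy probe using gensim's pretrained vectors. The model name and the exact nearest neighbors returned are assumptions and will vary with the embedding set; the point is only that occupation terms trained on web text tend to sit closer to one gendered word than the other.]

```python
import gensim.downloader as api

# Load a pretrained embedding set (assumed model name; downloads on first use).
vectors = api.load("glove-wiki-gigaword-100")

# Classic analogy probe: doctor - man + woman ≈ ?
# On many web-trained embeddings, "nurse" ranks near the top of this list.
print(vectors.most_similar(positive=["doctor", "woman"], negative=["man"], topn=5))

# And the reverse direction: nurse - woman + man ≈ ?
print(vectors.most_similar(positive=["nurse", "man"], negative=["woman"], topn=5))
```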

@Wolven
It's wild, but it wasn't that long ago that we all thought AI would be cold and hyper-rational, free from human biases, assumptions, and mental shortcuts.

...oops

@ClancyParliament @Wolven Every time I see an SF or hypothetical description of AI as "cold and rational," I think of how autistic folks, or just antisocial-type people, are described.

We don't believe consciousness can exist without logic, so why do we keep thinking it can exist without emotion, bias, or subjectivity?

@Wolven @ClancyParliament Of course, even having that conversation right now plays into the idea that LLM bias is the same as the bias a person has. It isn't; an LLM is just a system that reflects our biases, much as a bus schedule or phone menu might. Everything we build is a reflection of our priorities.