So it looks like both ChatGPT and Bard contain the same kinds of gendered bias people have been trying to warn you about for at least 8 years, since word2vec was cutting edge.

Here's a screenshot of an interaction between me and Google Bard, in which Bard displays the gendered prejudicial bias of associating "doctor" with "he" and "nurse" with "she."

Again, this is… This is old, basic shit, y'all. People have been warning you about this since GloVe. What are you DOING??

Or, more to the point, why are you NOT DOING what you know you NEED to do?

@Wolven Kind of amazing to me that I learned, in a 300-level college course, how to make ML models that produce word vectors and then do operations like "(king - man + woman) = queen", and yet current networks aren't designed to embed words into a space neutrally, that is, to extract a dimension like gender from the meaning. I'm sure it's much more complex than what I did in college, but then again, the devs at OpenAI and Google are being paid much more than I am!
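
For anyone who hasn't seen that arithmetic up close, here's a toy sketch of it, made-up vectors only, not anyone's actual model: the classic analogy, plus the kind of "find a gender direction and project it out" debiasing that's been in the literature since around 2016 (Bolukbasi et al.).

```python
# Toy sketch of word-embedding arithmetic and debiasing.
# The vectors below are invented for illustration; real systems learn
# embeddings from large corpora (word2vec, GloVe, etc.).
import numpy as np

vecs = {
    "king":   np.array([0.9, 0.8, 0.1]),
    "queen":  np.array([0.9, 0.1, 0.8]),
    "man":    np.array([0.1, 0.9, 0.1]),
    "woman":  np.array([0.1, 0.1, 0.9]),
    "he":     np.array([0.0, 0.9, 0.1]),
    "she":    np.array([0.0, 0.1, 0.9]),
    "doctor": np.array([0.5, 0.7, 0.3]),  # toy vector skewed toward "he"
    "nurse":  np.array([0.5, 0.3, 0.7]),  # toy vector skewed toward "she"
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def nearest(target, exclude=()):
    """Vocabulary word whose vector is closest to `target` by cosine."""
    return max((w for w in vecs if w not in exclude),
               key=lambda w: cosine(vecs[w], target))

# The classic analogy: king - man + woman is closest to queen.
analogy = vecs["king"] - vecs["man"] + vecs["woman"]
print(nearest(analogy, exclude={"king", "man", "woman"}))  # -> queen

# A crude "gender direction", then remove it from occupation words.
g = vecs["he"] - vecs["she"]
g = g / np.linalg.norm(g)

def neutralize(v, direction):
    """Remove the component of v that lies along `direction`."""
    return v - (v @ direction) * direction

for w in ("doctor", "nurse"):
    before = cosine(vecs[w], g)
    after = cosine(neutralize(vecs[w], g), g)
    print(f"{w}: similarity to gender direction {before:.2f} -> {after:.2f}")
```

None of this is exotic; it's the same projection trick undergrads get taught, which is the point.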
@HeatherNatalie Precisely all of this. Just shocking shit