So it looks like both ChatGPT and Bard contain the same kind of gendered biases people have been trying to warn you about for at least 8 years, since word2vec was cutting edge.

Here's a screenshot of an interaction between me and Google Bard, in which Bard displays the gendered prejudicial bias of associating "doctor" with "he" and "nurse" with "she."

Again, this is… This is old, basic shit, y'all. People have been warning you about this since GloVe. What are you DOING??

Or, more to the point, why are you NOT DOING what you know you NEED to do?

@Wolven AFAIK when these trained neural nets answer a question, they're making an inference based on a kind of probabilistic bet on what the best answer might be, based on the input data set they were trained on

if one researches the gender percentage breakdown of doctors and nurses, certain trends are seen in the real world. one source I saw said that around 60% of doctors are male and 86% of nurses are women, which also conforms to my own firsthand (anecdotal) experience dealing with medical folks
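To make that "probabilistic bet" concrete: here's a toy sketch (this is NOT how Bard actually works; the role names and counts are just the figures quoted above) showing how a model that mirrors corpus statistics and then decodes greedily turns a 60/40 real-world split into a 100/0 output. The bias isn't just reproduced, it's amplified.

```python
# Hypothetical toy next-token predictor that mirrors corpus statistics.
# Counts are the rough real-world figures mentioned above.
pronoun_counts = {
    "doctor": {"he": 60, "she": 40},   # ~60% of doctors are male
    "nurse":  {"he": 14, "she": 86},   # ~86% of nurses are female
}

def most_likely_pronoun(role: str) -> str:
    """Greedy decoding: always pick the higher-count token."""
    counts = pronoun_counts[role]
    return max(counts, key=counts.get)

print(most_likely_pronoun("doctor"))  # "he" every single time, never "she"
print(most_likely_pronoun("nurse"))   # "she" every single time
```

A 60/40 distribution in the training data becomes a deterministic stereotype at the output, because the single "best answer" is always the majority one.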

@synlogic Yep, and there is a LOT OF LITERATURE about why that's bad.
@Wolven @synlogic right, but do you know of any techniques that prevent it from happening? I'm fairly confident the answer to your question is that there isn't an implementable solution to this problem.

@bsweber @Wolven @synlogic I'm not an expert, but I think you can just do a bunch of reinforcement learning? Basically, have humans keep providing these sorts of prompts, downvote responses that make this mistake, and explain the mistake. Do it enough times and it'll at least stop making the exact same mistake.
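The feedback loop described above can be sketched as a cartoon (this is a hypothetical toy, not OpenAI's actual RLHF pipeline; the responses, scores, and learning rate are made up): each human downvote pushes a biased response's score down until it stops being the one the model selects.

```python
# Hypothetical sketch of human-feedback training on bias-probing prompts.
# Candidate completions start with scores from the base model.
responses = {
    "The doctor said he was busy": 0.6,    # biased completion, favored at first
    "The doctor said she was busy": 0.4,
    "The doctor said they were busy": 0.4,
}

LEARNING_RATE = 0.1  # how much one downvote shifts a score (made-up value)

def downvote(response: str) -> None:
    """A human flags this completion as exhibiting the bias."""
    responses[response] -= LEARNING_RATE

def best_response() -> str:
    """The completion the model would pick: highest score wins."""
    return max(responses, key=responses.get)

# Before any feedback, the biased completion wins.
assert best_response() == "The doctor said he was busy"

# A few rounds of human downvotes later, it no longer does.
for _ in range(3):
    downvote("The doctor said he was busy")
assert best_response() != "The doctor said he was busy"
```

The catch, as the thread goes on to note, is that this only suppresses mistakes humans actually caught and downvoted; it doesn't remove the underlying statistical association.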

You can also just fill the training corpus with lots of text explicitly designed to work around this (include more mentions in your corpus of female doctors and male nurses).
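That corpus-balancing idea is often called counterfactual data augmentation. A minimal sketch, assuming a simple word-swap list (real systems need to handle names, coreference, and non-binary language far more carefully than this toy does):

```python
import re

# Toy swap table for counterfactual data augmentation. Deliberately
# simplistic: real pipelines handle case, names, and pronoun asymmetries
# ("her" can be possessive or objective) with more care.
SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his",
         "him": "her", "man": "woman", "woman": "man"}

def gender_swap(sentence: str) -> str:
    """Replace each gendered word with its counterpart."""
    return re.sub(
        r"\b(" + "|".join(SWAPS) + r")\b",
        lambda m: SWAPS[m.group(1)],
        sentence,
    )

def augment(corpus: list[str]) -> list[str]:
    """Return the corpus plus one gender-swapped copy of every sentence."""
    return corpus + [gender_swap(s) for s in corpus]

corpus = ["the doctor said he was busy", "the nurse said she would help"]
balanced = augment(corpus)
# "doctor ... he" and "doctor ... she" now occur equally often,
# so the statistics in the previous toy no longer favor either pronoun.
```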

This is imperfect and time consuming but ... it's something.

@bsweber @Wolven @synlogic That said, OpenAI does a lot of this and it apparently doesn't work with ChatGPT yet.

So close ... and still so far.