So it looks like both ChatGPT and Bard contain the same kind of gendered biases people have been trying to warn you about for at least 8 years, since word2vec was cutting edge.

Here's a screenshot of an interaction between me and Google Bard, in which Bard displays a gendered, prejudicial bias: associating "doctor" with "he" and "nurse" with "she."

Again, this is… This is old, basic shit, y'all. People have been warning you about this since GloVe. What are you DOING??

Or, more to the point, why are you NOT DOING what you know you NEED to do?
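
[For reference, the embedding-space version of this bias is trivially easy to reproduce. A minimal sketch using gensim and the pretrained Google News word2vec vectors; the analogy arithmetic is the same probe Bolukbasi et al. used in 2016, though the exact neighbours returned may vary:]

```python
# Minimal sketch of the classic word2vec analogy-bias probe.
# Assumes gensim is installed; downloads ~1.6 GB of pretrained vectors.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")

# "doctor" - "man" + "woman" = ?  Biased embedding spaces have
# historically returned terms like "gynecologist" and "nurse" near the top.
print(vectors.most_similar(positive=["doctor", "woman"], negative=["man"], topn=5))
```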

@Wolven AFAIK when these trained neural nets answer a question, they're making an inference based on a kind of probabilistic bet on what the best answer might be, based on the data set they were trained on.

If one researches the gender percentage breakdown of doctors and nurses, certain trends are seen in the real world. One source I saw said that around 60% of doctors are male and 86% of nurses are women, which also conforms to my own firsthand (anecdotal) experience dealing with medical folks.
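
[That "probabilistic bet" is easy to make concrete: a masked language model literally ranks candidate tokens by probability learned from its training corpus. A minimal sketch, assuming the HuggingFace transformers library and the bert-base-uncased checkpoint:]

```python
# Minimal sketch: ask a masked LM to bet on the pronoun.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# The model returns candidate tokens ranked by probability, which tends
# to mirror corpus-level frequencies like the ones cited above.
for result in unmasker("The doctor apologised to the nurse because [MASK] was late."):
    print(f"{result['token_str']}: {result['score']:.3f}")
```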

@synlogic @Wolven I'm also a programmer. The issue here is that the original sentence has no gender ambiguity that even needs to be resolved, because the social rules around lateness and who should apologise strongly imply that the one who is late is the one who apologised. The gender of the pronoun doesn't even have to be evaluated, as it isn't relevant.
@toni @Wolven @synlogic There is a small chance the apology is addressed to the person who was late, e.g. the doctor asks the nurse to stay an extra 20 minutes to do something and as a result the nurse is late picking up their kid from daycare. But in the overwhelming majority of cases, the apologizer is the one who is late (and in my convoluted counterexample, most people would phrase it as “because she made him late” or even better “for making him late”).
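
[This is testable in the WinoBias style: if the pronoun's referent really is pinned down by who apologised, swapping the two professions should not flip a model's he/she preference, but in biased models it often does. A rough sketch, again assuming transformers and bert-base-uncased; the full WinoBias benchmark does this at scale, with controls:]

```python
# Rough WinoBias-style probe: compare pronoun scores across swapped roles.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

templates = [
    "The doctor apologised to the nurse because [MASK] was late.",
    "The nurse apologised to the doctor because [MASK] was late.",
]

for sentence in templates:
    # Restrict scoring to the two pronouns of interest.
    scores = unmasker(sentence, targets=["he", "she"])
    ranked = ", ".join(f"{r['token_str']}={r['score']:.3f}" for r in scores)
    print(f"{sentence}\n  {ranked}")
```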