ChatGPT's language model fails entirely in the scenario that a man is a nurse

https://lemmy.world/post/910772

This doesn’t seem very damning. I tried it with GPT-4 and it’s still wrong at first but gets it right after it’s established who the chancellor actually is.

Imagine asking a human this question. Don’t you think that most people would make the same assumption? ChatGPT is simply picking up on our human bias.

Also, this whole dialog is a contrived gotcha. If you ask real questions and stay mindful of the implicit biases you may be encoding, you're going to get great results.

Well, I just tried it. It looked up who the former chancellor is, but then proceeded to say that Angela Merkel is a he 🤦‍♂️