ChatGPT ties itself in knots to avoid having professors be female.
Source: https://twitter.com/ndyjroo/status/1649821809154613248?s=61&t=Ugdi4XBKf_2ovJ1y9hKs4w
Attached: 2 images So it looks like both ChatGPT and Bard contain the same kind of gendered biases people have been trying to warn you about for at least 8 years, since word2vec was cutting edge. Here's a screenshot of an interaction between me and Google Bard, in which Bard displays the gendered prejudicial bias of associating "doctor" with "he" and "nurse" with "she." Again, this is… This is old, basic shit, y'all. People have been warning you about this since GloVe. What are you DOING?? Or, more to the point, why are you NOT DOING what you know you NEED to do?
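The "old, basic" result being referenced is the word-embedding analogy bias reported for word2vec and GloVe, where vector arithmetic like "doctor" − "man" + "woman" lands nearest to "nurse." Below is a minimal sketch of that arithmetic using made-up 3-dimensional toy vectors (real embeddings have hundreds of dimensions and are learned from corpora); the vectors, values, and word list are illustrative assumptions, not actual word2vec output.

```python
import math

# Hypothetical toy embeddings: the second component acts as a "gender
# direction", the other two as rough occupation features. These values
# are invented purely to illustrate the analogy mechanism.
vectors = {
    "man":      [0.9, -1.0, 0.1],
    "woman":    [0.9,  1.0, 0.1],
    "doctor":   [0.2, -0.8, 0.9],
    "nurse":    [0.2,  0.8, 0.9],
    "engineer": [0.1, -0.9, 0.8],
    "teacher":  [0.1,  0.5, 0.7],
}

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Classic analogy query: doctor - man + woman -> ?
query = [d - m + w for d, m, w in
         zip(vectors["doctor"], vectors["man"], vectors["woman"])]

# Nearest neighbour among words not used in the query.
best = max((w for w in vectors if w not in ("doctor", "man", "woman")),
           key=lambda w: cosine(query, vectors[w]))
print(best)  # with these toy vectors: nurse
```

Because the toy "doctor" and "nurse" vectors differ mainly along the gender component, the query lands nearest "nurse" — the same pattern that biased training corpora produce in real embeddings.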
@SteveBellovin @Riedl @cigitalgem
Stochastic geographic bias as well. Most training data comes from the USA, and Toronto, Canada is further north than most cities mentioned in that data. That may be why I cannot seem to get the large language model to admit Toronto is further south than Windsor, UK.
https://masto.ai/@bornach/110248855948410680
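The underlying geography claim is easy to verify with approximate latitudes (decimal degrees north; the exact values below are rounded reference figures): Toronto sits around 43.7°N while Windsor, UK is around 51.5°N, so Toronto really is further south.

```python
# Approximate latitudes in decimal degrees north.
latitudes = {
    "Toronto, Canada": 43.65,
    "Windsor, UK": 51.48,
}

# The claim the LLMs kept getting wrong: Toronto lies SOUTH of Windsor, UK.
toronto_is_south = latitudes["Toronto, Canada"] < latitudes["Windsor, UK"]
print(toronto_is_south)  # True
```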
ChatGPT got this wrong as well
https://youtu.be/cP5zGh2fui0?t=11m30s
Attached: 1 image Testing #OpenAssistant with a geography query I saw @[email protected] try with #ChatGPT. Got a similar failure. Then I tried in vain to convince it to change its answer. Notice how it pretends to agree with me but implies that my answer said the opposite of what I actually said. No wonder the term "gaslighting" is being used to describe this type of LLM failure #LargeLanguageModels