ChatGPT ties itself in knots to avoid having professors be female.

Source: https://twitter.com/ndyjroo/status/1649821809154613248?s=61&t=Ugdi4XBKf_2ovJ1y9hKs4w

Andrew Garrett on Twitter

@Riedl @cigitalgem It's not ChatGPT that's doing this. Remember that ChatGPT is a stochastic parrot—the training data, which reflects society and societal attitudes, views professors as male. ChatGPT is just the messenger. (Aside: this is why I couldn't, for example, teach in a public university in, say, Florida—I can't honestly discuss some ML issues without talking about societal sexism and racism.)
Dr. Damien P. Williams, Magus (@[email protected])

Attached: 2 images So it looks like both ChatGPT and Bard contain the same kind of gendered biases people have been trying to warn you about for at least 8 years, since word2vec was cutting edge. Here's a screenshot of an interaction between me and Google Bard, in which Bard displays the gendered prejudicial bias of associating "doctor" with "he" and "nurse" with "she." Again, this is… This is old, basic shit, y'all. People have been warning you about this since GloVe. What are you DOING?? Or, more to the point, why are you NOT DOING what you know you NEED to do?

Mastodon
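
The doctor/nurse association described above is the same analogy bias that was documented years ago for static word embeddings such as word2vec and GloVe. Here is a minimal sketch of how to reproduce it, assuming the gensim library and its downloadable "glove-wiki-gigaword-100" vectors (any pretrained embedding set can be swapped in; exact neighbours vary by model):

```python
# Probe gendered associations in pretrained word embeddings (illustrative sketch).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # downloads the vectors on first use

# Classic analogy probe: doctor - man + woman ≈ ?
# On many embedding sets the top candidates skew toward "nurse".
print(vectors.most_similar(positive=["doctor", "woman"], negative=["man"], topn=5))

# Direct association check: which pronoun is each occupation closer to?
for word in ["doctor", "nurse", "professor"]:
    print(word,
          " he:", round(float(vectors.similarity(word, "he")), 3),
          " she:", round(float(vectors.similarity(word, "she")), 3))
```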
@bornach @Riedl @cigitalgem Call it stochastic racism and sexism.
@SteveBellovin @Riedl @cigitalgem
Mixed success when trying this with Open Assistant. It seems more effort was put into removing bias from the training data, but sexism is still clearly present.
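
The same gendered-completion pattern can also be probed programmatically rather than through a chat UI. As a stand-in for the manual Open Assistant prompting described above, here is a sketch using a masked language model, assuming the Hugging Face transformers library and the public bert-base-uncased checkpoint (scores differ across models, but the skew is typically visible):

```python
# Minimal pronoun-completion probe against a masked language model (illustrative).
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

templates = [
    "The professor told the class that [MASK] would post the grades soon.",
    "The nurse told the patient that [MASK] would be back in an hour.",
]

for sentence in templates:
    completions = unmasker(sentence, top_k=5)
    print(sentence)
    for c in completions:
        print(f"  {c['token_str']}: {c['score']:.3f}")
```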

@SteveBellovin @Riedl @cigitalgem

Stochastic geographic bias as well. Most training data comes from the USA, and Toronto, Canada is further north than most cities mentioned in that data. That's why I can't seem to get the Large Language Model to admit that Toronto is further south than Windsor, UK.
https://masto.ai/@bornach/110248855948410680
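
For the record, the geography is not ambiguous: Toronto sits near 43.7°N and Windsor, UK near 51.5°N, so Toronto is indeed further south. A trivial check with approximate coordinates (added here for illustration):

```python
# Compare approximate latitudes (degrees north); smaller latitude = further south.
latitudes = {"Toronto, Canada": 43.65, "Windsor, UK": 51.48}
southernmost = min(latitudes, key=latitudes.get)
print(f"{southernmost} is further south: "
      f"{latitudes[southernmost]}°N vs {max(latitudes.values())}°N")
```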

ChatGPT got this wrong as well
https://youtu.be/cP5zGh2fui0?t=11m30s

Bornach (@[email protected])

Attached: 1 image Testing #OpenAssistant with a geography query I saw @[email protected] try with #ChatGPT. Got a similar failure. Then I tried in vain to convince it to change its answer. Notice how it pretends to agree with me but implies that I said the opposite of what I actually said. No wonder the term "gaslighting" is being used to describe this type of LLM failure. #LargeLanguageModels

Mastodon
Expert Insight: Dangers of Using Large Language Models Before They Are Baked

Today's LLMs pose too many trust and security risks.

Dark Reading