Sexism and racism in ChatGPT.

Thread by @spiantado on bird site.

https://twitter.com/spiantado/status/1599462375887114240

These are egregious examples, and presumably easy for OpenAI to build filters to suppress. But what about the less egregious biases stemming from the fundamental fact that the underlying data are infused with racism & sexism?

Garbage in, garbage out.

steven t. piantadosi on Twitter

“Yes, ChatGPT is amazing and impressive. No, @OpenAI has not come close to addressing the problem of bias. Filters appear to be bypassed with simple tricks, and superficially masked. And what is lurking inside is egregious. @Abebab @sama tw racism, sexism.”

From the same thread:

“@WeedenKim I agree, + it is not just the data. People say the model is biased because of the data, but in these cases the training data is biased (what is chosen and the absence of quality control) because of the models. The model architectures used in LLMs (both within layers and objectives) demand unchecked mountains of data, because that scale is necessary to allow the model to reach a deployable (but still deplorable) level of estimated performance on tasks defined by the model”

“@omarlizardo @WeedenKim it is a mansplaining machine”
Andrew Feeney (@[email protected]) on Mastodon

@[email protected] Christine Lemmer-Webber (@[email protected]) described ChatGPT as Mansplaining As A Service, and honestly I can’t think of a better description. A service that instantly generates vaguely plausible sounding yet totally fabricated and baseless lectures in an instant with unflagging confidence in its own correctness on any topic, without concern, regard or even awareness of the level of expertise of its audience.
