https://www.fastcompany.com/90844066/chatgpt-write-performance-reviews-sexist-and-racist
So it's a kind of mirror? Could be a good thing.
I guess maybe we … haven't? … eliminated sexism from human conversation?
A plausible liar that echoes the biases of its training corpus!
Obviously. Underpaying and psychologically abusing Kenyan labour pays big dividends for investors in the global north. A potential use case: https://cyberplace.social/@GossiTheDog/109802533208388645 #AI #ML #ChatGPT #OpenAI #Kenya #Colombia #Law #BadDecisions #Society
@amydiehl is this unexpected for anyone? As a model trained on human texts, the normal result is that it learns the biases of those texts. So if the majority of the text comes from white males, the output of the AI will look like the mean of that group.
Removing or rebalancing those biases is really hard and I don't think we have solved the problem yet.
When using these kinds of tools we have to be mindful of their limitations.
Machine learning models are the perfect example of "garbage in, garbage out", but the garbage in is just our culture.
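The "garbage in, garbage out" point can be illustrated with a toy sketch (the corpus sentences and counting scheme here are entirely hypothetical, just to show the mechanism): a model that learns only from co-occurrence frequencies will faithfully reproduce whatever skew its training data contains, with no notion of fairness.

```python
from collections import Counter, defaultdict

# A tiny, deliberately skewed "training corpus" (made-up sentences).
corpus = [
    "he is a confident engineer",
    "he is a confident engineer",
    "she is a helpful assistant",
    "she is a helpful assistant",
    "she is a confident engineer",
]

# Learn P(adjective | pronoun) purely from co-occurrence counts.
# The model optimizes for frequency, not fairness.
counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    pronoun, adjective = words[0], words[3]
    counts[pronoun][adjective] += 1

def most_likely_adjective(pronoun):
    """Return the adjective most often paired with this pronoun."""
    return counts[pronoun].most_common(1)[0][0]

print(most_likely_adjective("he"))   # "confident" -- mirrors the skew
print(most_likely_adjective("she"))  # "helpful"   -- mirrors the skew
```

Real language models are vastly more complex, but the failure mode is the same in kind: the statistics of the input become the statistics of the output, which is why rebalancing after the fact is so hard.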
It's not surprising to parents that ChatGPT reflects what it sees. It's just like a child imitating the adults' behaviour.
It needs a firm guidance counselor (and of course that means we have to argue about what principles should move the guidance counselor), but obviously, left to its own devices, ChatGPT will act as a mirror of who we are collectively. That's probably its most useful function right now, if we care to see.
@amydiehl when I had it write cover letters for my students to critique, it was overconfident by default; when it thought it was writing as a woman, it undersold its skills.
Great to see how we're baking in the exact types of misogyny we want to get rid of :/