@Okanogen @denki @JohannaMakesGames I think they're missing a point as well. The (currently used) process of training an LLM effectively amplifies biases in the training set: even when the training set has a small, generally acceptable bias, the resulting LLM may end up unacceptably biased. It's very hard to build a system whose whole job is to discriminate (a chair from a hotdog, or a dog from a cat) while keeping it ethical in the ways we want it to be ethical.
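A toy sketch of the amplification effect (the numbers and setup are illustrative, not from the post): a classifier that simply minimizes training error on data with a 60/40 label skew ends up predicting the majority label every time, turning a modest bias in the data into a total bias in the output.

```python
# Hypothetical toy data: 60% of examples are labeled 1, 40% labeled 0 —
# a "small but generally acceptable" bias in the training set.
data = [1] * 60 + [0] * 40

# A classifier that minimizes training error on indistinguishable inputs
# just predicts the majority label for all of them.
ones = sum(data)
majority = 1 if ones > len(data) - ones else 0

train_rate = ones / len(data)                      # bias in the data: 0.6
pred_rate = 1.0 if majority == 1 else 0.0          # bias in the model's output: 1.0

print(train_rate, pred_rate)  # → 0.6 1.0
```

Real LLM training is far more complex, but the same pressure is there: when the model can't tell cases apart, betting on the statistically dominant answer is what the loss rewards.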