If you want to know why people don't trust #OpenAI or Microsoft or Google to fix a broken faux-#AGI #chatbot #LLM, consider that a Silicon Valley "health" startup building "#AI"-based suicide prevention tools regarded A/B testing on suicidal teens as perfectly fine.
(Aside: This is also where we get when techbros start doing faux-utilitarian moral calculus instead of just not doing obviously unethical shit.)