racist ai
Sam Altman is an enemy of humanity and it would self defense to kill him.
I’m not gonna do it because that’s a hassle, but if someone did I wouldn’t condemn them.
Free speech! As long as you agree with me…
Very trumpian
I’m sorry that I called you stupid. That was wrong of me and you didn’t deserve that.
If you’re interested, I could explain to you why your comment that I initially responded to was a false equivalence, and why claiming that I was stifling your free speech is nonsensical. Let’s talk it out and maybe both of us can walk away from this having learned something. :)
Sure, I’ll be happy to.
My point is that chatbots, and other LLM applications, are useful tools that in isolated cases have caused addiction and other harms, including deaths.
The same can be said of many other things: parasocial relationships with celebrities, heavy machinery, aircraft, medicines with side effects, gyms, and a long list of others. People become obsessed or addicted, and in some cases they even die. Or the tool fails and kills them.
The solution shouldn’t be to immediately ban them and accuse the CEO of murder (super specific legal definition, btw) but try to regulate, add guardrails, make it safer and help the victims however they need. Sure, let’s investigate each death and see if there has been negligence, but pitchforks are not the solution.
“Never believe that anti-Semites people like this guy are completely unaware of the absurdity of their replies. They know that their remarks are frivolous, open to challenge. But they are amusing themselves, for it is their adversary who is obliged to use words responsibly, since he believes in words. The anti-Semites people like this guy have the right to play. They even like to play with discourse for, by giving ridiculous reasons, they discredit the seriousness of their interlocutors. They delight in acting in bad faith, since they seek not to persuade by sound argument but to intimidate and disconcert. If you press them too closely, they will abruptly fall silent, loftily indicating by some phrase that the time for argument is past.”
Jean-Paul Sartre
People who lack arguments cite random quotes
Me
Several teens have been groomed into killing themselves by ChatGPT.
He’s culpable.
Is the developer also culpable? How about the data scientist? How about the data engineer? How about the BI Analyst? And the janitor?
How about the manufacturer of the knife / pill / gas they used to kill themselves?
In most cases suicide isn’t anyone’s fault. People like to find someone to blame, and I get that, but people who are even remotely close to doing that were always going to find a way and a justification.
No AI is going to convince me to kill myself if I didn’t already want to. Equally the inverse must also be true.
That’s not to say that the companies are completely off the hook, it’s utterly ridiculous that these conversations weren’t flagged and sent to a human, but I think it’s daft to suggest that these people would necessarily still be alive had the AI not existed.
Selling knives to children is murder too?
Selling knives to families with children?
Selling knives to women who are pregnant?
Selling knives that talk and tell you to kill yourself to children is murder.
You’re refusing to recognize the grooming angle to this.
Selling tools that kill people, knowing that they are dangerous, should have consequences.
Would the world really be a worse place if Sam Altman were tried for murder? What’s the problem?
I’m losing patience. I’m obviously fucking not talking about regular fucking objects, a knife doesn’t fucking talk and convince you to kill yourself. There’s an obvious categorical difference between objects, and tools designed to trick you into thinking they’re intelligent. It’s murder. Someone needs to face consequences.
Why would it be bad if Sam Altman went to prison? Would the world be a worse place? Why are you protecting him?
A knife doesn’t pretend to be your friend and convince you to sever your arteries. Categorically different.
Answer my question. Why would it be bad for Sam Altman to be tried for murder? If we decided that the owners of AI companies were culpable for the behavior of their chatbots and the consequences of their actions, wouldn’t that solve the problem?
As a developer: yes to the developer and data scientist and data engineer. Scientists and engineers should be responsible for their work.
The BI analyst: maybe, if they’re responsible for collecting data that ignores the impact of the service on teens. If they’re doing sales-comparisons between Anthropic and OpenAI… eh, I donno.
The janitor: probably not since I don’t feel like the deaths are widely publicized and they probably work for a contracting company that handles the building.