@DuncanWatson @caseynewton On the contrary. The calculator in this analogy does math perfectly.
Just because people put in bad input or don't like the results doesn't mean it's wrong.
The output categorically fits the advertised rules, which are to be grammatically correct, not factually correct.
@LouisIngenthron @misc @DuncanWatson @caseynewton
Doesn't matter. If you program a computer to spit out clearly defamatory material about actual living human beings, you are responsible for the actions of your Frankenstein.
@LouisIngenthron @misc @DuncanWatson @caseynewton
They did.
@LouisIngenthron @misc @DuncanWatson @caseynewton
If it's spitting out defamatory material, then someone programmed it in a way that caused it to do that. If you program an autonomous vehicle to drive down a street without stopping, you don't get to pretend that you didn't program it to run over children when that happens.
@misc Yes, it was a silly argument, intentionally so, as it was lampooning an absolutist statement.
Sorry for spamming your notifications. But I'm at the end of my rope with that one, so the thread's over anyway.
@LouisIngenthron @misc @DuncanWatson @caseynewton
You seem to be confused about logic.
@Iwillyeah Yes, South Park is satire. But *why* is satire protected? Because the context and nature of humor prevents it from being legally considered to be statements of fact. Same here. It's a tech demo toy, not an oracle.
And no, that wouldn't bother me, because I understand that ChatGPT doesn't even understand the concept of 'facts'. So I assume falsity as a baseline. And, honestly, I think a lot less of anyone foolish enough to believe anything a chatbot tells them.