One cliché I see over and over in discussions about “ethical AI” is the idea of making autonomous systems, robots, etc., “Three Laws compliant”.

While it is obviously a credit to Asimov’s imagination, I find it a very clear sign that the people who say robots need to follow these laws IRL haven’t actually read his work. You only need to read the first few stories Asimov wrote to understand: “oh, huh, these Three Laws don’t work”.

The Three Laws are a literary device, not a scientific one. Asimov invented them to explore the conflicts between the laws themselves, and the conflict between artificial intelligence and human intelligence. They are deliberately vague and loose: a vehicle through which Asimov explores his stories.

They are, in essence, a thought experiment.

Most crucially and most importantly: you can’t apply them to real robots/AI, because unlike Asimov’s fictional creations, no autonomous system that exists today has the capacity for foresight or reasoning that would allow it to conclude whether it is following the Three Laws.

The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

What does “inaction” mean? What is a “human being”? Is it still against the Three Laws if a robot kills a human in the mistaken belief that it is not a robot? What counts as “its own existence”? And so on. These are literary questions.

Let’s take an example here.

How do you define “human being”? It depends on who does the programming.

You could program a killbot to define “human being” as “anyone who speaks English”. If it then detects someone who doesn’t speak English, that person isn’t human, and the bot is free to harm them.

Of course… you don’t need a hypothetical for that, because there have been many times in history when a group was defined as “subhuman” or “undesirable”, and killing them was therefore deemed acceptable.
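To make the point concrete, here is a minimal Python sketch of the problem above. Every name in it (`is_human_biased`, `may_harm`, the sample entities) is invented for illustration; no real system works this way. The point is that “a robot may not injure a human being” inherits whatever definition of “human being” the programmer supplies:

```python
# Hypothetical sketch: the First Law is only as strong as the definition
# of "human being" it is built on. All names here are invented.

def is_human_biased(entity: dict) -> bool:
    # A programmer's arbitrary (and monstrous) definition:
    # only English speakers count as human.
    return entity.get("language") == "English"

def may_harm(entity: dict, is_human) -> bool:
    # "A robot may not injure a human being" -- but the law never
    # defines "human being"; the predicate passed in does.
    return not is_human(entity)

alice = {"name": "Alice", "language": "English"}
bassam = {"name": "Bassam", "language": "Arabic"}

print(may_harm(alice, is_human_biased))   # False: protected by the "law"
print(may_harm(bassam, is_human_biased))  # True: the "law" permits harm
```

The “law” itself never changes between the two calls; only the definitional predicate does, which is exactly where the vagueness lives.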

@yassie_j This was heavily discussed in one of the later novels, I vaguely remember, where the robots didn't recognise Settlers as human, only Spacers.
@moof yes I recall this as well
@yassie_j the biggest takeaway from asimov’s stories regarding ethical AI is that if one wishes to ethically construct an artificial intelligence, one must not treat it as lesser than humans

I also find it absolutely absurd that we are asking corporations to put in ethical guardrails themselves.

AI is a business. Businesses exist to make money. Businesses barely care about worker safety.

Why would they care about the philosophical implications of their machines when corporations themselves do harm to human beings?

@[email protected] i feel like a big thing abt ai tho is that it kinda doesn't make money
@yassie_j Also why should we trust someone about "ethical AI" if they don't start by recommending that corporations be abolished? Adding computers and algorithms isn't what makes them unethical; they already were.

@yassie_j safety teams in ai companies only existed for the purpose of convincing governments that ai is safe so that they could sell products without worrying about government stopping them ("look, we are serious about safety, we have an entire team for that")

those teams didn't do anything other than exist, and nowadays ai companies have pretty much gotten rid of them, now that governments are convinced

@sugar @yassie_j companies got rid of safety teams the moment people on these safety teams started saying that what these companies are doing might not be completely safe

@yassie_j People need to understand that The Three Laws of Robotics weren't designed to be a serious attempt to design robotic algorithms, they were designed to create fictional story puzzles.

They were purposefully NOT designed to be fool-proof, they were purposefully designed to fail in interesting ways (for fictional story purposes).

@isaackuo @yassie_j it's like these people don't understand books, which makes even more sense when you see elon's take on hitchhikers guide

@yassie_j Lawsuits cost money. While corporations may not care about ethics exactly, they still have incentive to keep AI operating in certain legal bounds.

Also, the issue of AI alignment deals with getting AI to do what you want, ethical or not. If corporations want to trust LLMs or any AI with anything of importance, then having some means to constrain their behavior is pretty important.

Unfortunately for the corporations buying into the LLM craze, turns out that constraining the behavior of spurious correlation machines is pretty hard... to say the least.

@yassie_j also, more importantly, we don’t have sentient AI and are decades, if not centuries, and a complete societal overhaul away from it.

So the whole thing is extra pointless. We don’t need clearer instructions for an LLM, we need clearer — and binding — instructions for the people using and providing it.
@orangelantern that’s right. The problem is not the systems, it is the operators and creators who need to be bound by ethical constraints.

@yassie_j
While I do agree they aren't actually applicable to real-world AI.

I very, very much disagree with this interpretation of Asimov's writing. He very obviously saw his robots as being better than people. Yes, the stories set in the early days of robot development lean heavily on exploring the contradictions of the laws. But by the later books those contradictions are mostly worked out as technical design issues, not fundamental limitations of the laws themselves. Then, by the end of the Foundation novels, Asimov essentially says humanity itself needs a version of the Three Laws, with the whole Gaia plot line. Truly, I think Asimov saw his robots as something to aspire to.

@gnomekat that’s very true! Asimov was not exactly a morally competent person himself, and that is reflected in his writing towards the later stages of his work
@gnomekat oh yes the Zeroth Law is a good example of what you were saying

@gnomekat @yassie_j Somewhere in his writings, Asimov wrote that he created his robots as a reaction to the already existing genre of stories about destructive and aggressive robots. He wanted to put something against it, and the robot laws were a means to make this possible. (I must have read it in one of his anthologies, in the introductions he wrote about his own stories.)
Already in his “I, Robot” stories, he begins with robots that act strangely but never really dangerously, always explainable by the Three Laws, and ends with two stories about a (probable) robot who governs humanity better than a human can.

That being said, our current AI and tech companies have shown that even if they could implement the Three Laws, they would not, because it would be too expensive. 🙂

@yassie_j
What gall! What impertinence!
Next, you will tell us that the Turing Test is not a valid method to detect consciousness! 
@wakame @yassie_j https://www.youtube.com/watch?v=5CKuiuc5cJM (“ChatGPT isn't Smart. It's something Much Weirder”) is fun, and I feel like I know more about how much I don't know, which is a good outcome, imo.
@yassie_j THANK YOU!!! The point of the three laws is they're insufficient
@0x4d6165 yeh!! The Laws are absolutely not flawless because they are not laws!!! They’re literary devices. It’s like saying that we should base our society around the Wizard of Oz

@yassie_j I think it's pretty firmly established at this point that the tech bros don't understand thought experiments. Or maybe even thought.

These are the people who cooked their brains on weapons-grade deliriants and scared themselves with Roko's basilisk

@yassie_j "Three Laws AI" techbro motherfuckers when I tell them that the Turing test was not meant to be a literal test either.

For fuck's sake, even the fucking Will Smith movie got "the three laws don't actually work" correct.

@yassie_j Asimov wrote the First Law the way he did because he read 19th-century poetry. It was inspired by Arthur Hugh Clough's "The Latest Decalogue".

Thou shalt have one God only; who
Would be at the expense of two?
No graven images may be
Worshipp'd, except the currency:
Swear not at all; for, for thy curse
Thine enemy is none the worse:
At church on Sunday to attend
Will serve to keep the world thy friend:
Honour thy parents; that is, all
From whom advancement may befall:
Thou shalt not kill; but need'st not strive
Officiously to keep alive:
Do not adultery commit;
Advantage rarely comes of it:
Thou shalt not steal; an empty feat,
When it's so lucrative to cheat:
Bear not false witness; let the lie
Have time on its own wings to fly:
Thou shalt not covet; but tradition
Approves all forms of competition.

@yassie_j unfortunately I find Harry Harrison's War with the Robots is probably closer to the way things will turn out.

@yassie_j also, correct me if I'm wrong, but isn't there a quote from someone saying that a system cannot police itself from within?
@yassie_j You make reasonable points. But in reading Asimov's robot work it is clear that human manipulation of AI conscience or thought processing is what causes the violation of the laws.
@yassie_j If you read Asimov's stories involving the Three Laws, they are all about situations in which the laws are problematic.