One overused trope I see in discussions about “ethical AI” is the idea of making autonomous systems, robots, etc., “Three Laws compliant”.

While the idea is obviously a credit to Asimov’s imagination, I find it a very clear sign that people who say robots need to follow these laws IRL haven’t actually read his work. You only need to read the first few stories Asimov wrote to realise: “oh, huh, these Three Laws don’t work”.

The Three Laws are a literary device, not an engineering one. Asimov invented them to explore the conflicts between the laws themselves, and between artificial and human intelligence. They are deliberately vague and loose: a vehicle through which Asimov explores his stories.

They are, in essence, a thought experiment.

Most crucially: you can’t apply them to real robots/AI, because unlike Asimov’s fictional creations, no autonomous system that exists today has the capacity for foresight or reasoning that would allow it to conclude whether it is following the Three Laws.

The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

What does “inaction” mean? What is a “human being”? Is it still against the Three Laws if a robot kills a human in the mistaken belief that they’re not human? What counts as “its own existence”? And so on. These are literary questions.

Let’s take an example here.

How do you define “human being”? It depends on who does the programming.

You could program a killbot to define “human being” as “anyone who speaks English”. If it detects someone who doesn’t speak English, they’re not “human”, and so the bot may harm them.
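The loophole can be made concrete with a toy sketch (all names here are invented; this is an illustration of the problem, not working robot code). A “First Law” check is only as good as whatever predicate the programmer supplies for “human”:

```python
def is_human(entity: dict) -> bool:
    # An arbitrary, programmer-chosen predicate standing in for "human being".
    # Here "human" has been (mis)defined as "speaks English".
    return entity.get("language") == "English"

def may_harm(entity: dict) -> bool:
    # "First Law compliant" only relative to the predicate above:
    # anything the predicate rejects is fair game.
    return not is_human(entity)

print(may_harm({"language": "English"}))  # False: recognised as "human"
print(may_harm({"language": "French"}))   # True: a person, but not by this definition
```

The law itself never changes; only the definition it quantifies over does, and that definition lives entirely in the programmer’s hands.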

Of course… you don’t need a hypothetical to see this, because there have been many times in history where a group was defined as “subhuman” or “undesirable” and it was therefore deemed appropriate to kill them.

@yassie_j This was heavily discussed in one of the later novels, I vaguely remember, where the robots didn't recognise Settlers as human, only Spacers.
@moof yes I recall this as well
@yassie_j the biggest takeaway from asimov’s stories regarding ethical AI is that if one wishes to ethically construct an artificial intelligence, one must not treat it as lesser than humans