One overused cliché I see in discussions about “ethical AI” is the idea of making autonomous systems, robots, etc., “three laws compliant”.

While it is obviously a credit to Asimov’s imagination, I find it a very clear sign that the people who say robots need to follow these laws IRL haven’t actually read his novels. You only need to read the first few stories Asimov wrote to understand: “oh, huh, these Three Laws don’t work”.

The Three Laws are a literary device, not a scientific one. Asimov invented them to explore the conflicts between the laws themselves, and the conflicts between artificial intelligences and human intelligence. They are deliberately vague and loose, because they are the vehicle through which Asimov tells his stories.

They are, in essence, a thought experiment.

Most crucially: you can’t apply them to real robots/AI, because unlike Asimov’s fictional creations, no autonomous system that exists today actually has the capacity for foresight or reasoning that would allow it to determine whether it is following the Three Laws.

@yassie_j
While I do agree they aren't actually applicable to real world AI

I very, very much disagree with this interpretation of Asimov's writing. He quite clearly saw his robots as being better than people. Yes, the stories set in the early days of robot development lean heavily on exploring the contradictions of the laws. But by the later books, those contradictions are mostly worked out as technical design issues, not fundamental limitations of the laws themselves. Then, by the end of the Foundation novels, Asimov essentially argues that humanity itself needs a version of the three laws, via the whole Gaia plot line. Truly, I think Asimov saw his robots as something to aspire to.

@gnomekat @yassie_j Somewhere in his writings, Asimov wrote that he created his robots as a reaction to the already existing genre of stories about destructive and aggressive robots. He wanted to put something against it, and the robot laws were a means to make this possible. (I must have read it in one of his anthologies, in the introductions he wrote about his own stories.)
Already in his “I, Robot” stories, he begins with robots that act strangely but are never really dangerous, their behavior always explainable by the three laws, and ends with two stories about a (probable) robot who governs humanity better than a human could.

That being said, our current AI and tech companies have shown that even if they could implement the three laws, they would not, because it would be too expensive. 🙂