One overused cliché I see in discussions about “ethical AI” is the idea of making autonomous systems, robots, etc., “Three Laws compliant”.

While it is obviously a credit to Asimov’s imagination, I find it a very clear sign that the people who say robots need to follow these laws IRL haven’t actually read his work. You only need to read the first few stories Asimov wrote to understand: “oh, huh, these Three Laws don’t work”.

The Three Laws are a literary device, not a scientific one. Asimov invented them to explore the conflicts between the laws themselves, and between artificial intelligences and human intelligence. They are deliberately vague and loose, because they are the vehicle through which Asimov explores his stories.

They are, in essence, a thought experiment.

Most crucially: you can’t apply them to real robots/AI, because unlike Asimov’s fictional creations, no autonomous system that exists today actually has the foresight or reasoning ability that would allow it to conclude whether it is following the Three Laws.

I also find it absolutely absurd that we are asking corporations to put in ethical guardrails themselves.

AI is a business. Businesses exist to make money. Businesses barely care about worker safety.

Why would they care about the philosophical implications of their machines when corporations themselves do harm to human beings?

@[email protected] i feel like a big thing abt ai tho is that it kinda doesn't make money
@yassie_j Also, why should we trust someone on "ethical AI" if they don't start by recommending that corporations be abolished? Adding computers and algorithms doesn't suddenly make them ethical.

@yassie_j safety teams in ai companies only existed to convince governments that ai is safe, so that they could sell products without worrying about governments stopping them ("look, we're serious about safety, we have an entire team for that")

those teams didn't do anything other than exist, and nowadays ai companies have pretty much gotten rid of them, now that the governments are convinced

@sugar @yassie_j companies got rid of their safety teams the moment the people on those teams started saying that what the companies were doing might not be completely safe

@yassie_j People need to understand that the Three Laws of Robotics weren't a serious attempt at designing robotic control algorithms; they were designed to create fictional story puzzles.

They were purposefully NOT designed to be fool-proof, they were purposefully designed to fail in interesting ways (for fictional story purposes).

@isaackuo @yassie_j it's like these people don't understand books, which makes even more sense when you see elon's take on hitchhikers guide

@yassie_j Lawsuits cost money. While corporations may not care about ethics exactly, they still have incentive to keep AI operating in certain legal bounds.

Also, the issue of AI alignment deals with getting AI to do what you want, ethical or not. If corporations want to trust LLMs or any AI to do anything of importance, then having some means to constrain their behavior is pretty important.

Unfortunately for the corporations buying into the LLM craze, turns out that constraining the behavior of spurious correlation machines is pretty hard... to say the least.