Proposed new Laws of Robotics:

1. A machine must never show an advertisement to a human, or through inaction allow an advertisement to be shown to a human

If I think of a second Law of Robotics I'll let you know

For Law of Robotics #2 I'm considering "A machine must never mine a bitcoin, or through inaction allow a bitcoin to be mined"

Based on replies in this thread, here is an alternate proposed "three laws of robotics".

1. A machine must never show an advertisement to a human, or through inaction allow an advertisement to be shown to a human.

2. A machine shall never use more power to perform a job than would be used by an equivalent human.

3. A machine must never present or refer to itself as though it were human, or through inaction allow a human to mistake it for one.

[Post 1 of 2]

Law 2 is per Amy Worall, law 3 is per the Witch of Crow Briar.

I do not endorse these laws, but I would consider them "utopian", in the sense that a culture which endorsed these laws would be a culture organized along a clearly-formed ideology. You could easily imagine a spec-fic story about a culture that believed in these laws. Note these laws are necessarily laws for human designers, as the existence of a machine which can enforce them is ideologically inconsistent with law 3.

[Post 2 of 2]

@mcc these remind me of Modesitt's novel Adiamante

Law 2 becomes an interesting constraint in spec fic when human capabilities evolve to rival machine development

(Edit to rephrase) Wrt law 3, what do you do with humans embracing mods? Eg Alita, or Gibson's stuff, or even Mindstar Rising

@mcc

The main issue I have with those laws is that the easiest and least energy-hungry way to comply with all of them is killing all humans.

Yeah, I'm a programmer.

@mcc 4. No machine shall ever be capable of mis-hearing your name and writing it incorrectly on a coffee-cup.

(Not necessarily a harm, but inadvertent humour is the province of humans and cats)

@mcc
Nothing about not killing humans in those laws... Sounds like a very action packed story.

@mcc We already have a tragic example of law 3 in a recent SF movie: Disney's 2022 remake of Pinocchio. The puppet gets expelled on his first day of school because he is not human, after which he ends up on the street to get exploited by the fox and cat.

Not to mention that law 2 could unduly restrict power consumption of assistive devices for humans with disabilities.

#Pinocchio #Pinocchio2022 #discrimination #Disney #AssistiveTechnology

@PinoBatch As specifically noted, I don't endorse this list of laws and find them primarily interesting as a fiction writing prompt. However:

- That's not a machine. That's a fictional person in a setting where they're socially coded as a non-person. The author did this *to* talk about dehumanization of people.

- An assistive device is a very poor example because by definition it is allowing people to do things they would not be able to do, or require undue effort to do, without the machine.

@mcc @PinoBatch I'll go further and suggest that a human using a necessary assistive device to compensate for a disability (not to supply a superpower) is still a human doing that activity, so the rule doesn't even come into play.
@mcc I've long thought we need a strong taboo on Machines who pass as People. Many cartoons I saw as a child used robot doubles as plots; a warning!
@Lazarou @mcc Aren't we all meat machines?
@mcc Your law 3 is a necessary consequence of Asimov's First Law. Which is why Daneel Olivaw had the initial R. prefix.
I suppose your laws 1 and 2 are also, but it's never wrong to be specific about harms, as you would otherwise have to assume the robot brain is capable of knowing all possible consequences of its actions.
A machine intelligence with even a fraction of this capability would be able to deduce that its very existence causes harm to humans, and must therefore destroy itself.
@dukethinrediv I feel that your statements here are self-consistent, but only true within certain value systems
@mcc The 3rd one confuses me. If we ever manage to create actual machine sentience, why would it be so important they must advertise they aren't human?

@Adept As noted in my followup post to that one, I believe implicitly encoded in rule three is the belief that humans will never manage to create actual machine sentience.

Ideologies are based on both values and assumptions

@mcc ok, I guess I understood, but I disagree on the ambition of the laws then.

The biologist in me insists I say something about the difference between sentience and sapience.

Anything that feels is sentient; a self-aware thinking being is sapient. The line is very blurry, of course. We are not that different from other animals, just "more so".

@Adept I don't think humans will ever create a machine that is either sentient or sapient. Five years ago I believed both these things would probably happen, but now that "OpenAI" exists I do not believe this is possible anymore. Useful AI research is over, and possibly useful computer science.
@Adept animals can be a technology but i don't think it makes sense to describe them as machines. that wasn't the intent here.
@mcc nor my intention. You used sentience, and I think you meant sapience. I'm just trying to bring up the difference.

@mcc the Large Language Model approach is this moment's Tulip Mania bubble. Don't let it set your expectations too low.

This approach will not result in actual intelligence, let alone sapience, but it's not the only possibility.

@Adept Capitalism is very good at closing off possibilities and capitalism is currently global
@mcc I understand losing hope, but if we make it through the climate crisis and the extinction wave (big if, I know), this early AI nonsense will be just a minor detour.
@Adept It's not that I have no hope exactly but I do think the ai nonsense, by itself, makes it less likely we will make it through the climate crisis
@mcc sadly I have to agree with you. This is the great filter, and at this point we are failing the test.

@mcc 1600 Cal/day ≈ 77.48 watts, so that's the max amount of power a computer could use by this metric. Although you would also have to take the time required for a unit quantity of "work" into account - if a computer can do in 1 hour what would take a human a full 8 hours, then it could consume ~619.84 watts over that one hour and still come out ahead of the human.

Regardless, we're a ways off from reaching that point.
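The arithmetic in the post above can be sketched in a few lines. This is a minimal check, assuming "Cal" means food Calories (kilocalories) and using the standard 4184 J/kcal conversion; the 8x speedup figure is the hypothetical from the post, not a measured value.

```python
# Convert a daily food-Calorie budget into average power, in watts.
KCAL_TO_JOULES = 4184          # 1 food Calorie (kcal) = 4184 joules
SECONDS_PER_DAY = 24 * 60 * 60

def daily_kcal_to_watts(kcal_per_day: float) -> float:
    """Average power, in watts, of burning kcal_per_day over 24 hours."""
    return kcal_per_day * KCAL_TO_JOULES / SECONDS_PER_DAY

human_watts = daily_kcal_to_watts(1600)  # ~77.48 W

# If a machine finishes in 1 hour what takes a human 8 hours,
# it can draw up to 8x the human's average power over that hour
# and still use less total energy for the job.
machine_budget_watts = human_watts * 8   # ~619.85 W
```

The slight difference from the post's 619.84 W comes from rounding 77.48 W before multiplying.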

@curtmack You have to make some decisions about how to define work. Compare the power expenditure of 1 bitcoin transaction to me handing someone twenty dollars. Discount the power expenditure of either manufacturing the hardware or the twenty dollar bill. How's that comparison work out?

@mcc I was assuming an apples-to-apples comparison. So in your case, the comparison would be between an ATM and a bank teller. Here the ATM clearly comes out ahead (but only for the limited tasks an ATM can do). For the things that ChatGPT or Copilot can "do," not so much.

(Of course, there are other considerations. My calculation assumes all 1600 input Calories are spent during a person's work day, and also ignores the broader systemic harms of automation and poverty.)

@curtmack who decides what an apple is

@mcc I should say, when I read your original post, I wasn't even thinking of bitcoin. For ChatGPT et al., the power consumption versus human replacement is a pretty direct comparison, so that's what I was focused on.

For Bitcoin, I wouldn't worry about a human replacement, and instead look at the systemic benefit we get for the power consumption. Electric lighting used a ton of power back in the day, but it also significantly improved quality of life. Can Bitcoin say the same? Heck no!

@curtmack @mcc FWIW it would take considerable power to illuminate a human to the same brightness as a 5W LED bulb.

So I read in an arXiv paper.

@curtmack @mcc There's also the part where the only vaguely interesting uses of bitcoin or rather privacy coins (bitcoin is just entirely useless) are more similar to either large-scale laundering systems or hawala.

The energy expenditure of hawala is difficult to evaluate, because how does one calculate the energy required to set up the trust network and stymie interlopers?

It's quite possible that a hawala network operating truly in good faith might be more secure than any of those cryptocurrencies though, regardless of additional energy input steps.
@curtmack @mcc I was gonna say - hard to compete with the efficiency of the human body. But a worthy goal to aspire to nonetheless
@mcc you sit down to watch television and your roomba, sensing an imminent advertisement, blinds you in both eyes
@mcc This whole thread is very icky to read, I get that it's memeing around, but a CW would be nice

@mcc

"A machine must never present or refer to itself as though it were human, or through inaction allow a human to mistake it for one."

What Hath Alan Turing Wrought? 😈

@mcc #2 feels a bit vague in a detrimental way; a human would probably take a lot more energy to perform the computations needed for bitcoin mining, but a machine isn't nearly as optimised for physical movement (thereby preventing improvements in robotics)
@mcc If we remove the part of the original third law that lets machines hurt themselves when so ordered, we've got yours covered, I think.
@mcc Don't forget that Asimov's three laws of robotics are not really laws, but rather narrative constraints allowing Asimov to transform the classical robot-vs-human extinction fight into a classical whodunit
What is the rationale for law 2? I can imagine this having grown out of a desire for energy conservation, but it seems to disallow any automation that hasn't reached human levels of efficiency but still saves humans time and effort.
@josh in context it was an attempt to prevent "induced demand"/"let's do things in an exponentially more inefficient way than we could, just because power is cheap and our investors will let us buy a lot of NVidia cards" technologies , such as "large model AI" and proof-of-work blockchain
Yeah, I figured it grew out of things like that. It seems absolutely reasonable to have a machine spend 50Wh or even 500Wh doing something a human might use the equivalent of 5Wh on. A vacuum cleaner is less efficient than a broom and dustpan, but I wouldn't want to prohibit vacuums. A utopia should have energy abundance.

@josh To stress I am using "Utopia" in the original sense of "a hypothetical place which runs on clearly articulated principles" not "a place where everything is good".

You can fix the problem you raise if you change either the text, or the underlying assumptions of the reader, such that it is always preferable for a machine to perform a task rather than a human. In that case the fix becomes not "get a human" but rather "come up with a better machine".

@josh However, I do not personally believe it is always preferable for a machine to perform a task rather than a human.
I definitely wouldn't say "always preferable". It's more that I don't think there's any fundamental principle that the right energy usage ratio should be 1:1. More generally, I think there are far, far more important factors than energy usage alone to determine whether a human or machine should do a job.
In any case, not trying to take it too seriously; just enthusiastic to make sure that very *reasonable* arguments against the wasteful excesses of AI or PoW systems don't turn into general rules that would apply to the vast majority of automation of tasks humans want to have automated.

@josh @mcc this conversation has me a bit nerd-sniped thinking about transportation in this context. There's a graph that's gone viral a couple times comparing transportation modes on a (energy use) per (mass · distance) metric [sorry, can't remember enough keywords to find it again quickly], the gist of which is "human on foot" is one of the more-efficient forms of transportation in either the animal or mechanical worlds, and "human on a bicycle" is the best on that metric by about an order of magnitude.

Of course, the question then is "what qualifies as a difference of kind rather than degree" -- is a rocket allowable under law 2 because humans can't get to space otherwise? Is a car, because "sustained 100 kph travel" isn't achievable by "human on a bicycle"? Is an LLM, because "write a plausible-sounding paragraph in 5 seconds" also is beyond human speed?

Does law 2 require the task to be achievable by a single human? (e.g. do we allow dump trucks even if they're less efficient than a few dozen humans with ropes and wheels?)

Does the energy source matter? A sailing ship captures renewable wind energy, but I'm not sure if it's more energy-efficient than "human in a canoe" or not. A horse uses renewable energy as well, but its food is a rival good to human food, and I'm not sure its efficiency is quite as high.

What technologies do we miss out on because the initial version was less efficient than a human, but the refined version, generations of development later, would have crossed that threshold? (Not sure what the example is here ... )

Given Andi's comment that these laws are for the designers, not the machines, I expect they're meant to be disambiguated socially rather than in some fixed decision tree. If I had to guess at the interpretation of a society that would make these laws: renewable energy (solar, wind, hydro) is essentially "free" if you can demonstrate that it wouldn't have negative externalities; animal labour isn't strictly under the law but I'd expect a social ethos of "only if the animals are treated humanely and have roughly human-equivalent or better caloric efficiency"; and while telecommunications tech may be fairly advanced I suspect they'd eschew air (and space) travel in favour of bicycles, sailboats, and draft animals.

KILLBOT 2.0 IS PREPARED TO DESTROY ALL HUMANS