Proposed new Laws of Robotics:
1. A machine must never show an advertisement to a human, or through inaction allow an advertisement to be shown to a human
If I think of a second Law of Robotics I'll let you know
Based on replies in this thread, here is an alternate proposed "three laws of robotics".
1. A machine must never show an advertisement to a human, or through inaction allow an advertisement to be shown to a human.
2. A machine shall never use more power to perform a job than would be used by an equivalent human.
3. A machine must never present or refer to itself as though it were human, or through inaction allow a human to mistake it for one.
[Post 1 of 2]
Law 2 is per Amy Worall; Law 3 is per the Witch of Crow Briar.
I do not endorse these laws, but I would consider them "utopian", in the sense that a culture which endorsed these laws would be a culture organized along a clearly-formed ideology. You could easily imagine a spec-fic story about a culture that believed in these laws. Note these laws are necessarily laws for human designers, as the existence of a machine which can enforce them is ideologically inconsistent with law 3.
[Post 2 of 2]
@mcc these remind me of Modesitt's novel Adiamante
Law 2 becomes an interesting constraint in spec fic when human capabilities evolve to rival machine development
(Edit to rephrase) Wrt law 3, what do you do with humans embracing mods? Eg Alita, or Gibson's stuff, or even Mindstar Rising
The main issue I have with those laws is that the easiest and least energy-hungry way to comply with all of them is killing all humans.
Yeah, I'm a programmer.
@mcc 4. No machine shall ever be capable of mis-hearing your name and writing it incorrectly on a coffee-cup.
(Not necessarily a harm, but inadvertent humour is the province of humans and cats)
@mcc We already have a tragic example of law 3 in a recent SF movie: Disney's 2022 remake of Pinocchio. The puppet gets expelled on his first day of school because he is not human, after which he ends up on the street to get exploited by the fox and cat.
Not to mention that law 2 could unduly restrict power consumption of assistive devices for humans with disabilities.
#Pinocchio #Pinocchio2022 #discrimination #Disney #AssistiveTechnology
@PinoBatch As specifically noted, I don't endorse this list of laws and find them primarily interesting as a fiction writing prompt. However:
- That's not a machine. That's a fictional person in a setting where they're socially coded as a non-person. The author did this *to* talk about dehumanization of people.
- An assistive device is a very poor example because by definition it is allowing people to do things they would not be able to do, or require undue effort to do, without the machine.
@Adept As noted in my followup post to that one, I believe implicitly encoded in rule three is the belief that humans will never manage to create actual machine sentience.
Ideologies are based on both values and assumptions
@mcc ok, I guess I understood, but disagree on the ambition of the laws then.
The biologist in me insists I say something about the difference between sentience and sapience.
Anything that feels is sentient; a self-aware thinking being is sapient. The line is very blurry, of course. We are not that different from other animals, just "more so".
@mcc the Large Language Model approach is this moment's Tulip Mania bubble. Don't let it set your expectations too low.
This approach will not result in actual intelligence, let alone sapience, but it's not the only possibility.
@mcc 1600 Cal/day ≈ 77.48 watts, so that's the max amount of power a computer could use by this metric. Although you would also have to take the time required for a unit quantity of "work" into account: if a computer can do in 1 hour what would take a human a full 8 hours, then it could consume ~619.84 watts over that one hour and still come out ahead of the human.
Regardless, we're a ways off from reaching that point.
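The arithmetic above can be sketched in a few lines; this is just the thread's back-of-envelope conversion (1 dietary Calorie = 4184 J, spread evenly over 24 hours), with the 1600 Cal/day intake and the 8x speedup as assumed inputs, not measured figures.

```python
KCAL_TO_JOULES = 4184          # 1 dietary Calorie (kcal) in joules
SECONDS_PER_DAY = 24 * 60 * 60

def avg_power_watts(kcal_per_day: float) -> float:
    """Average power (W) from burning kcal_per_day evenly over one day."""
    return kcal_per_day * KCAL_TO_JOULES / SECONDS_PER_DAY

def machine_power_budget(kcal_per_day: float, speedup: float) -> float:
    """Max sustained power a machine may draw and still use less total
    energy than the human, if it finishes the job `speedup` times faster."""
    return avg_power_watts(kcal_per_day) * speedup

print(round(avg_power_watts(1600), 2))          # ~77.48 W
print(round(machine_power_budget(1600, 8), 2))  # ~619.85 W
```

(The thread's ~619.84 W comes from multiplying the already-rounded 77.48 by 8; computing before rounding gives 619.85.)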
@mcc I was assuming an apples-to-apples comparison. So in your case, the comparison would be between an ATM and a bank teller. Here the ATM clearly comes out ahead (but only for the limited tasks an ATM can do). For the things that ChatGPT or Copilot can "do," not so much.
(Of course, there are other considerations. My calculation assumes all 1600 input Calories are spent during a person's work day, and also ignores the broader systemic harms of automation and poverty.)
@mcc I should say, when I read your original post, I wasn't even thinking of bitcoin. For ChatGPT et al., the power consumption versus human replacement is a pretty direct comparison, so that's what I was focused on.
For Bitcoin, I wouldn't worry about a human replacement, and instead look at the systemic benefit we get for the power consumption. Electric lighting used a ton of power back in the day, but it also significantly improved quality of life. Can Bitcoin say the same? Heck no!
"A machine must never present or refer to itself as though it were human, or through inaction allow a human to mistake it for one."
What Hath Alan Turing Wrought?
@josh To stress: I am using "utopian" in the original sense of "a hypothetical place which runs on clearly articulated principles", not "a place where everything is good".
You can fix the problem you raise if you change either the text, or the underlying assumptions of the reader, such that it is always preferable for a machine to perform a task rather than a human. In that case the fix becomes not "get a human" but rather "come up with a better machine".
@josh @mcc this conversation has me a bit nerd-sniped thinking about transportation in this context. There's a graph that's gone viral a couple times comparing transportation modes on a (energy use) per (mass · distance) metric [sorry, can't remember enough keywords to find it again quickly], the gist of which is "human on foot" is one of the more-efficient forms of transportation in either the animal or mechanical worlds, and "human on a bicycle" is the best on that metric by about an order of magnitude.
Of course, the question then is "what qualifies as a difference of kind rather than degree" -- is a rocket allowable under law 2 because humans can't get to space otherwise? Is a car, because "sustained 100 kph travel" isn't achievable by "human on a bicycle"? Is an LLM, because "write a plausible-sounding paragraph in 5 seconds" also is beyond human speed?
Does law 2 require the task to be achievable by a single human? (e.g. do we allow dump trucks even if they're less efficient than a few dozen humans with ropes and wheels?)
Does the energy source matter? A sailing ship captures renewable wind energy, but I'm not sure if it's more energy-efficient than "human in a canoe" or not. A horse uses renewable energy as well, but its food is a rival good to human food, and I'm not sure its efficiency is quite as high.
What technologies do we miss out on because the initial version was less efficient than a human, but the refined version, generations of development later, would have crossed that threshold? (Not sure what the example is here ... )
Given Andi's comment that these laws are for the designers, not the machines, I expect they're meant to be disambiguated socially rather than in some fixed decision tree. If I had to guess at the interpretation of a society that would make these laws: renewable energy (solar, wind, hydro) is essentially "free" if you can demonstrate that it wouldn't have negative externalities; animal labour isn't strictly under the law but I'd expect a social ethos of "only if the animals are treated humanely and have roughly human-equivalent or better caloric efficiency"; and while telecommunications tech may be fairly advanced I suspect they'd eschew air (and space) travel in favour of bicycles, sailboats, and draft animals.