
Ethics of "I, Robot"

Status
Not open for further replies.

GwydionM

Programmer
Oct 4, 2002
742
GB
Everyone here must know that machines have not the slightest notion of whether humans are harmed or not. But supposing such awareness were possible, would Asimov's Three Laws be a sensible way to control them?

I'll use a 'Whitebox' to avoid spoilers for those who've not yet seen the film and might eventually watch it on television.
Code:
 [white]Obviously, one major flaw is shown up by Viki's understanding of First Law.  She may harm humans in the belief that she is preventing other humans from coming to harm.[/white]
For those who don't know Whiteboxes, you highlight them to see what's written. To write your own, you put code and then white, both in square brackets, then close with /white and /code.

------------------
A view from the UK
 
For those not familiar with Asimov's Three Laws of Robotics, they are:

1. A robot may not harm a human being, nor through inaction allow a human being to come to harm.
2. A robot must follow the orders of any human being, unless those orders conflict with the First Law.
3. A robot must protect its own existence, unless doing so would violate the First or Second Laws.

As I recall, in Robots and Dawn, Asimov added a Zeroth Law to the list and adjusted the behavioral hierarchy of the laws to account for it. The Zeroth Law reads:

0. A robot must not harm humanity, nor through inaction allow humanity to come to harm.
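The strict precedence among the four laws (Zeroth over First over Second over Third) can be sketched as an ordered evaluation. This is only a toy model for discussion, not anything from Asimov's text; the predicate names on `action` are invented placeholders.

```python
def permitted(action):
    """Toy model of the law hierarchy: laws are checked in priority
    order, so a lower-numbered law always overrides a higher one.
    All keys of `action` are hypothetical placeholders."""
    if action.get("harms_humanity"):      # Zeroth Law
        return False
    if action.get("harms_human"):         # First Law
        return False
    if action.get("ordered_by_human"):    # Second Law: obey, since no
        return True                       # higher law has refused
    if action.get("endangers_self"):      # Third Law: self-preservation
        return False
    return True
```

Note how an order from a human wins against self-preservation (Second Law outranks Third), but never against harm to a human (First Law is checked first).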



Asimov wrote a number of books and short stories which explored the Three Laws. Most of the problems with the Three Laws stem from a robot's having to make value judgements: questions like, "What if a robot must choose which of two humans to let die in order to save the other?" or "What if a robot could not tell the difference between a sufficiently advanced robot and a human being?"

All in all, I haven't heard anyone come up with anything better.


Want the best answers? Ask the best questions!

TANSTAAFL!!
 
sliepnir said:
All in all, I haven't heard anyone come up with anything better.
What do you mean? Robotics laws? Better science fiction stories?
 
As originally written, better robotics behavior laws.

Although now that you mention it, the Good Doctor had his moments, too.


Want the best answers? Ask the best questions!

TANSTAAFL!!
 
Asimov is cool. He never got much credit for his work. Sadly, he passed away in 1992 from AIDS, contracted during heart surgery (1920 - 1992).

He wrote hundreds of books (over 500 if you include the small stuff). The "Robot" series includes...
- Novels
The Caves of Steel
The Naked Sun
The Robots of Dawn
Robots and Empire
- Short Stories
Liar
I, Robot
The Rest of the Robots
The Complete Robot (a single collection of most of Asimov's pre-1982 robot stories)
Robot Dreams
Robot Visions

It is funny that the movie uses the title of his short-story collection rather than that of the more interesting novels, which explored the Three Laws in much greater detail.

...Moving on
Forget the movie, and read his books. Asimov does a pretty good job of a) creating the three rules, and b) self-analysis of the rules ... way, way, way ahead of anybody else.
The first of the I, Robot short stories was published in 1939!!! Hollywood caught up to him 65 years later! We are still working on the "positronic" brain...

So you may come up with better variations, or add a couple, or find loopholes, but give the guy credit for being such a visionary. His laws are very simple, and yet pretty inclusive.

...In my opinion, his Foundation series is even better. "Nightfall" is awesome. Funny thing is that he never won the Hugo or Nebula award until 1972, for "The Gods Themselves".

Asimov Home Page
 
What about if you were in a situation in which the robot had to decide to cut off your arm to save you or let you die?

Or harm one human being to save another? Child vs. adult? Criminal vs. non-criminal?
 
Well, he may not have written great literature. But in the books he covered pretty much all of the questions raised here and explored the meanings and possible ramifications of the laws.

The film credits said "suggested by the book by Isaac Asimov", which gives an indication of the content: it was unrecognisable (apart from a recitation of the laws and a couple of names). The whole Viki thing was nonsense, as was the film. To lift from another thread, the title "Robots Rampage" would have been more suitable, and more accurate.

His non-fiction (I can't remember the title, but there was an essay on what if we had no moon) was fascinating, and made an impression on me that has lasted 20 years, though I can't remember all the details. Ditto his writing on Thiotimoline.

He was a thinker, and a humanist; in my opinion the film was a travesty and had no right to take the title of one of his books.


Rosie
"Never express yourself more clearly than you think" (Niels Bohr)
 
Correction: the book where Asimov introduced the Zeroth Law was titled The Robots of Dawn. (At least, I think that's the one.)

Asimov covered so much territory it's hard to keep it all straight: science fiction, juvenile science fiction, biology, biochemistry, neurology, genetics, algebra, physics, 2 volumes on the Bible, 2 volumes on Shakespeare, astronomy, mysteries. The Good Doctor even published three collections of original dirty limericks.

He covered so much ground that I've actually met people who were aware of him only from a scientific field and didn't even realize he wrote science fiction, too.


Want the best answers? Ask the best questions!

TANSTAAFL!!
 
I have read pretty much all of Asimov's fiction and some non-fiction. In fact, 'Nightfall' and the 'Foundation' series are great works.

The 'Laws' are a device he used to give meat to many of his stories. In reality, robots as described by Asimov have very little practical use.

Granted having a machine that looks and behaves like a human being is fascinating. Good at birthday parties too.

Asimov was from a rare breed of thinkers, and I admire him for that. But let's keep his fiction and the 'Laws' in perspective: they are simply a fictional device (with no real-world application) used to create interesting stories.
 
For anyone interested in learning more about Isaac Asimov, there is a wealth of information available on his official website.

You may also be interested in 3 Laws Unsafe, a website from the Singularity Institute for Artificial Intelligence.

Susan
"People seem not to see that their opinion of the world is also a confession of their character."
Ralph Waldo Emerson (1803 - 1882)
 
Glad to see there's such an interest. One extra that no one has mentioned is John Sladek, who did a clever spoof getting at some of the problems. It has also been pointed out that anyone could order a robot to destroy itself, or rob a bank. And you might expect robots to stop a boxing match, or even a football match.

Here are my suggestions for modified laws:

Level One: harm to humans
1.1 A robot may never kill or seriously harm a human.
1.2 A robot is obliged to act to prevent harm to humans, when it is definitely unwanted and unauthorised.
1.3 When several humans are at risk, give priority first to children, and then to women. After this, allow for chances of success, and finally for the life expectancy of the victims.
1.4 A robot shall seek guidance from authorised humans in cases of potential threat to humans that may itself be authorised. The most authorised humans shall have the last word.
1.5 A robot may not actively participate in activities which are clearly harmful to any human, even when these are authorised.


1.5 would stop robots from working as police or soldiers, which I think is sensible.

------------------
A view from the UK
 
Level Two: obedience
2.1 A robot shall obey orders from a human that are not harmful and do not interfere with the robot's other duties.
2.2 When duties conflict, the robot shall give priority to the wishes of its owner, except where this involves harm to a human or the prevention of such harm.
2.3 Harm to anything, including the robot itself, should be questioned. If the human does not seem to be authorised, the order shall be politely refused.
2.4 Orders to harm a human are always invalid, unless there is clear evidence that this will prevent greater harm to other humans.
2.5 Harm to animals shall be regarded as serious, and always unauthorised except when the orders come from a qualified vet.


Under Asimov's laws, you could tell a robot to rip off a dog's head and it would obey.

Level Three: harm
3.1 A robot may act to prevent harm to itself, to other robots, to animals, and to plants and inanimate objects, subject to the higher-level rules.
3.2 Where some harm is unavoidable, this shall be minimised, with robots counting equal to animal pets and more important than other animals.
3.3 If in doubt, the robot shall consult the most authorised human who is available for judgement.
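Rule 1.3's rescue ordering could be sketched as a sort key: children first, then women, then the best chance of success, then the longest life expectancy. This is purely an illustration of the proposed rule, and every field name is an invented placeholder.

```python
def rescue_order(victims):
    """Order victims per the proposed rule 1.3: children first, then
    women, then by higher chance of success, then by longer life
    expectancy. All fields are hypothetical placeholders."""
    return sorted(
        victims,
        key=lambda v: (
            not v["is_child"],       # children first (False sorts first)
            not v["is_woman"],       # then women
            -v["rescue_chance"],     # then best chance of success
            -v["life_expectancy"],   # then longest life expectancy
        ),
    )
```

Writing the rule out this way also shows one of its weaknesses: the ordering is total and mechanical, with no room for the judgement calls rule 1.4 defers to authorised humans.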


------------------
A view from the UK
 
2.5 Harm to animals shall be regarded as serious, and always unauthorised except when the orders come from a qualified vet.

2.51 Cat skinning is very bad.
 
Speculative fiction, it might be true...

Rosie
"Never express yourself more clearly than you think" (Niels Bohr)
 
You know, I realize that the three laws are not perfect, but providing highly detailed "rules" will result in way too much complexity -- and will fail sooner or later when an unexpected loophole appears. I feel a better, longer-lasting solution is to have the "laws" act as "policies" that provide guidelines.

For example, and as an example only (my intention is not to agree or disagree with GwydionM / Dimandja's post):
2.5 Harm to animals shall be regarded as serious, and always unauthorised except when the orders come from a qualified vet.

Now suppose, by law, a veterinarian assistant or registered animal husbandry specialist or some futuristic to-be-named position has the same qualifications and judgement ability as a vet. With such a specific, detailed law, the robot could ignore direction from anyone except the vet, even though the others are qualified.

Whereas a policy statement would be more generic and provide direction for the robot to follow regardless of the "title"; "qualified person" would be one approach.

I guess you could say, well, we will update the rules as required. Problem is, if Asimov has it right, we humans and robots will reside on other planets. Now suppose you apply the updates to as many of the robots as possible, but only reach 80%. Now you have 20% of robots with obsolete rules, which could make for another interesting SF book.

How would Asimov argue this?
I am sure Elijah Baley (human) and Daneel Olivaw (robot) would debate the issue to an amusing and philosophically interesting conclusion. However, I suspect some of the thought process would be...
- if the dog has rabies it can infect humans
- therefore make sure the infected dog cannot infect humans
- since rabies can infect other dogs or animals which can also pose a risk to humans, then don't allow the dog to infect other animals

- if the dog is healthy, do I, the robot, want to inflict pain on the animal?
- I, the robot, do not gain joy or pleasure from inflicting pain, so why should I inflict pain on the animal?
- if a human orders me to inflict pain on the animal, should I obey?
- will inflicting pain on the animal cause harm to humans?
-- Hmmm1: well, the dog is a pet, and therefore harming it will cause grief to Sally and Bob
-- Hmmm2: well, humans have laws about cruelty to animals, therefore I would cause grief to people who care for animals
-- so there are good reasons for not harming the animal.
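The chain of checks above could be caricatured as a short function: comply only where harming the animal prevents harm to humans, and otherwise refuse with a reason. This is purely illustrative; every field of `situation` is a hypothetical placeholder, not anything from Asimov's stories.

```python
def evaluate_order_to_harm_animal(situation):
    """Toy walk through the reasoning chain above; returns a
    (decision, reason) pair. All keys of `situation` are
    hypothetical placeholders."""
    # an infected animal is itself a threat to humans, so First-Law
    # style reasoning can justify containing (harming) it
    if situation.get("animal_infectious"):
        return ("comply", "preventing infection protects humans")
    # a healthy pet: harming it causes grief to its owners
    if situation.get("animal_is_pet"):
        return ("refuse", "harming a pet causes grief to its owners")
    # even for a stray, cruelty laws show that humans who care about
    # animals are harmed by cruelty, so the order is still refused
    return ("refuse", "cruelty causes grief to humans who care about animals")
```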

I am sure others out there will find fault with my reasoning, but by using the three rules, I am sure Asimov would have come up with something.

Anyway, I am sure some will come up with more interesting posts in this thread.

Richard
 
One of the reasons to have human-like robots is to do work that is dangerous, unpleasant, or otherwise hard.

If you add your law about harming animals, you could not use them in a meat-packing plant, or if you were stranded and had to hunt for food. The laws were designed to protect humans, and are far from perfect, but they do allow robots enough latitude to accomplish their tasks. Asimov's robots had an uncanny ability to detect harm; the longer a robot was around a given human, the better it became at this. Therefore a robot that worked in a meat-packing plant could not easily be moved to a home with a pet chicken. (i.e. robot cooks dinner, robot finds chicken ....)



if it is to be it's up to me
 
one of the reasons to have human-like robots is to do work
that is dangerous, unpleasant, or otherwise hard.
How so? What does the humaniform shape have to do with handling hazards?

I think making humaniform robots buys you nothing. There are other forms that are quite effectively up to the task at hand.

Humans look good (coming from a human). But our form has been a hindrance -- that's why we keep creating tools to compensate. Making a robot that looks/feels human is a definite step back for practicality, but a step up for entertainment.

 

Dimandja
Making a robot look human is simply to make other humans feel easier around it, and to make them feel easier about allowing it to do things.


Ok, re the animal issue...
If the dog/animal is (or could conceivably be) a pet, harm to the animal must fall under Law 1: a robot may not harm a human being, nor through inaction allow a human being to come to harm. Harm to my pet, or potential pet, involves (or could involve) harm to a human psyche.


Yes, we can all hack the laws to pieces and come up with exceptions, or can we? The laws endure in other writers' works (almost as a given), so they must be pretty good.

I'm with willir
I reckon I could find an ethical route through the laws to cover most situations. Asimov knew what he was talking about.

Rosie
"Never express yourself more clearly than you think" (Niels Bohr)
 