
Ethics of "I' Robot" 5


GwydionM

Programmer
Oct 4, 2002
742
GB
Everyone here must know that machines have not the slightest notion about whether humans are harmed or not. But supposing such things were possible, would Asimov's Three Laws be a sensible way to control them?

I'll use a 'Whitebox' to avoid spoilers for those who've not yet seen the film and might eventually watch it on television.
Code:
 [white]Obviously, one major flaw is shown up by Viki's understanding of First Law.  She may harm humans in the belief that she is preventing other humans from coming to harm.[/white]
For those who don't know Whiteboxes, you highlight them to see what's written. To write your own, you put code and then white, both in square brackets, then /white and /code to end it.
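If that's not clear, the whole markup spelled out would look something like this (with placeholder spoiler text):
Code:
 [code][white]Your spoiler text goes here[/white][/code]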

------------------
A view from the UK
 
That quote starts with "On a more cynical note,[...]". That should give you a clue as to how I am taking it.
 
There's an old saying that the shorter a contract, the more water-tight.

I was reminded while reading this thread of the RoboCop movie, where they re-program Robo with all these rules... help old ladies across the street, etc. There were so many conflicts that he couldn't cope with it. A person pulls into a fire lane and parks, and he shoots up the car, that sort of thing.

Asimov's three (plus 0) laws are good because they're short, absolute, and essentially water-tight.

You can bounce hundreds of what-if scenarios off of the three laws... but let's take it a step farther...

1.3 When several humans are at risk, give priority first to children, and then to women. After this, allow for chances of success and finally for the life expectancy of the victims.

What if a woman is putting others at risk? Woman bank robber? Let's not be sexist; men *and* women *and* children can commit crimes.
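Just to make that objection concrete, here's a rough sketch (hypothetical Python, not anything from Asimov or the film) of what a literal reading of clause 1.3 looks like as a triage ordering. Even in toy form the gap shows: the rule ranks the people at risk, but says nothing about a "victim" who is also the threat.

Code:
 # Hypothetical sketch of proposed clause 1.3 as a triage ordering.
 # Each person at risk is described by age group, sex, estimated
 # rescue success chance, and remaining life expectancy.
 from dataclasses import dataclass

 @dataclass
 class PersonAtRisk:
     name: str
     is_child: bool
     is_woman: bool
     rescue_success_chance: float  # 0.0 - 1.0
     life_expectancy_years: float

 def triage_key(p: PersonAtRisk):
     # Clause 1.3 ordering: children first, then women, then by chance
     # of success, then by life expectancy. Nothing here models a
     # person who is simultaneously the source of the danger.
     return (
         0 if p.is_child else 1,
         0 if p.is_woman else 1,
         -p.rescue_success_chance,
         -p.life_expectancy_years,
     )

 people = [
     PersonAtRisk("hostage", False, False, 0.9, 40.0),
     PersonAtRisk("armed robber", False, True, 0.8, 50.0),
 ]
 # The armed robber sorts ahead of the hostage, which is exactly
 # the objection raised above.
 for p in sorted(people, key=triage_key):
     print(p.name)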

1.4 A robot shall seek guidance from authorised humans in cases of potential threat to humans that may also be authorised. The most authorised humans shall have the last word.
Who decides who the most authorized human is? The robot? What if the president went nutso and was holding people hostage? Final word? What if the robot was ordered, by the most authorized (albeit nutso) human to hold the others hostage? Technically, the robot would be following the orders of the most authorized human with the final word, and by holding the others hostage itself, would be preventing additional harm to the hostages (the robot obviously wouldn't shoot one of the hostages or anything, but would be preventing harm by holding hostages itself, rather than allowing the most authorized human to hold them hostage).
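Likewise, a very rough sketch (again purely hypothetical, just to illustrate the objection) of clause 1.4's "most authorised human has the last word": if conflicting orders are resolved purely by rank, there's no way to reject the top-ranked human when that human is the threat.

Code:
 # Hypothetical sketch of clause 1.4: conflicting orders are resolved
 # purely by authorisation rank; the highest rank wins.
 def resolve_order(orders):
     """orders: list of (authorisation_rank, command) tuples.
     Returns the command from the highest-ranked human."""
     return max(orders, key=lambda o: o[0])[1]

 orders = [
     (10, "hold the hostages"),    # the 'nutso' president
     (3, "release the hostages"),  # everyone else
 ]
 # Rank alone decides: the robot holds the hostages.
 print(resolve_order(orders))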

Fewer rules, more water-tight. :)



Just my $0.02

"In order to start solving a problem, one must first identify its owner." --Me
--Greg
 
Asimov's three (plus 0) laws are good because they're short, absolute, and essentially water-tight.
Very tight. Especially since they are supposed to work with specific story lines.
1.4 A robot shall seek guidance from authorised humans in cases of potential threat to humans that may also be authorised. The most authorised humans shall have the last word.
This Law's addendum will tend to confuse a good read. But, interestingly enough, in the later Foundation novels (Bear, Brin, Benford) and in many of Asimov's, the robots themselves are the most authorized... ahem... robots. Humans are relegated to supporting cast roles. Real life? No. Good stories.
 
Don't forget one big flaw with the three laws as applied to real life.

The most likely venue for AI-enabled robots to enter the world is the military. Obviously we're very far off, but industry likely won't be the driving force due to cost constraints... so the military will likely foot the bill, and robots which aren't able to hurt humans will likely not be compatible with some of their goals.


Here's political flamebait given the upcoming election... would a Three Laws-safe robot vote for Kerry or Bush?
 
Oops, missed a bunch of posts in the middle; I guess the military considerations have been somewhat discussed... my apologies.
 
To take up the point about 'authorised': it has to be whoever human society thinks is authorised. You could put in something about democracy, but Hitler was elected. That's why I also put in limits on robots harming people.

Asimov's tales include the idea of robots hijacking society for what they see as human benefit. I liked his writings but thought this rather a bad idea.

The laws as stated might allow any human to tell a robot to be destructive - wreck your house or car, say, provided you were not there to protest. The limit is 'harm to humans', but that's very ambiguous.

------------------
A view from the UK
 
And if you're to bring the discussion to today, that's exactly where we stand.

Machinery will do whatever a human with the right access tells it to. Whether that access is a key to a bulldozer or the simple ability to pick up an Uzi, those machines will destroy people and property.

I don't think the ethical question really comes into which orders robots will obey. I mean really, how much responsibility do you need to put on the creator? I think the ethical issue comes in when the question is about what decisions the robot will make by itself.

Which is why, I imagine, for years and years to come, any physically capable machinery will still take direct orders from humans... and the AI technology will be kept to analyzing and advising roles.
 