
Ethics of "I, Robot"


GwydionM

Everyone here must know that machines have not the slightest notion about whether humans are harmed or not. But supposing such things were possible, would Asimov's Three Laws be a sensible way to control them?

I'll use a 'Whitebox' to avoid spoilers for those who've not yet seen the film and might eventually watch it on television.
Code:
 [white]Obviously, one major flaw is shown up by Viki's understanding of First Law.  She may harm humans in the belief that she is preventing other humans from coming to harm.[/white]
For those who don't know whiteboxes: you highlight them to see what's written. To write your own, you type code and then white, both in square brackets, then /white and /code to end it.
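For example, with the tags shown literally and placeholder text, a spoiler is typed into a post as:
Code:
 [code][white]your hidden spoiler text goes here[/white][/code]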

------------------
A view from the UK
 
The advantage of human-form robots comes from the fact that current tools and technology are made for humans to use:
driving a car, using a screwdriver, etc. Yes, other forms could do the same job, but it is easier to show a robot how to do something if it has the same appendages you do.


if it is to be it's up to me
 
Making a robot look human is simply to let other humans feel easier around it and to make them feel easier about allowing it to do things.
Sort of like allowing people who are afraid of computers to use robots?

The advantage of human form robots falls under the fact that current tools and technology are made for humans to use
That's my point: humans have too many built in limitations, so they need tools. One such tool could be a robot. Now, to build those same physical limitations into a robot defeats the usefulness of the tool.

I have worked with many different robots in my profession. They do their job very well, and I don't see how making them look human will improve things.

If you need a tool to drive your car, heck, make a more efficient car -- the car is the tool. Teaching a robot how to drive a car is mostly entertaining, not the most useful thing to do with such a presumably billion dollar humaniform computer.
 

Dimandja,

Sort of like allowing people who are afraid of computers to use robots?

If you need a tool to drive your car, heck, make a more efficient car -- the car is the tool.

Teaching a robot how to drive a car is mostly entertaining ...


I like your arguments. Have a star.

 
But I do let a robot drive my car. Sure, I do the steering, braking, and accelerating, but the robot under the bonnet takes my commands and uses them to judge how much fuel to inject, how hard to apply the brakes, which gear I should be in, etc. Just because it doesn't take humanoid form doesn't stop it being a robot.
Where Asimov comes in is when technology has progressed far enough to produce robots which have to make ethical decisions. My car is incapable of knowing whether it is harming a human being. Roll on the positronic brain!
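As a rough sketch of the sort of judgement that under-the-bonnet robot makes (Python, with every name and number invented for illustration; a real ECU reads dozens of sensors and uses calibrated maps):
Code:
# Toy sketch of an engine control unit's decisions. All figures invented.
def fuel_injection_ms(throttle_pct, rpm):
    """Crude fuel pulse width: more throttle and more revs -> more fuel."""
    base = 1.0  # illustrative idle pulse width in milliseconds
    return base + (throttle_pct / 100.0) * (rpm / 1000.0)

def pick_gear(speed_kmh):
    """Pick a gear from road speed using fixed shift points (illustrative)."""
    gear = 1
    for g, threshold in enumerate([0, 20, 40, 65, 95], start=1):
        if speed_kmh >= threshold:
            gear = g
    return gear

print(fuel_injection_ms(30.0, 2500))  # 1.75 ms
print(pick_gear(50.0))                # gear 3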

Columb Healy
Living with a seeker after the truth is infinitely preferable to living with one who thinks they've found it.
 
columb said:
But I do let a robot drive my car.
Exactly. Your car is already an automaton (albeit an 'imperfect' one) -- there is no need to put another robot on top of an existing one.

columb said:
My car is incapable of knowing whether it is harming a human being.
Psst... get a collision detector. There are whole agencies created for the sole purpose of making sure our tools are safe for humans; but you knew that, right? Even the food we eat is closely (I hope) watched for safety. All those things are being done without super-duper multi-thinking positronic gizmos.

If you think we need extra special protection from humanoids, I'll show you what havoc a square automatic guided vehicle can wreak on the floor of a manufacturing plant.

The robot world Asimov created is just that: fiction. It's an intellectual exercise. The real world is very much different, and no, we are not preparing a world where robots are our personal servants, however appealing owning slaves is to some. Instead we are tailoring tools to our specific needs. A car, a CD player, a hearing aid, a cell phone, a battlefield tank, an undersea rover, etc, will not benefit us by taking human form, nor by learning to drive our cars.

A tool that is made to look and feel like a human will end up needing other tools -- just like humans. Well, if your robot can fly me to the moon as it is doing my windows and preparing my taxes, I am open to discussion.
 
...technology has progressed far enough to produce robots which have to make ethical decisions.

I don't think it's ever going to happen in reality. What are humans going to do when even ethical tasks are given away to robots? Robots, even ones that look human, are advanced computers and are not creatures of ethics by definition. All the ethics they could possibly handle is a set of rules programmed by humans. (And those rules vastly differ between different groups of humans, too.) A minor flaw in the programming will result in a major flaw in an ethical decision.
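A toy Python illustration of that last point (everything here is hypothetical):
Code:
# Intended rule: act only if acting harms less than doing nothing.
def may_proceed(action_harm, inaction_harm):
    # Bug: '<=' where '<' was intended. A one-character slip, and in
    # every tie case the machine now acts when it was meant to hold back.
    return action_harm <= inaction_harm

print(may_proceed(0.5, 0.5))  # True: the flawed rule permits the action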

I mean, don't people create robots to unload their physical, "dangerous, unpleasant, or otherwise hard" duties onto them, and to free themselves up for creative and ethical tasks? Although if you consider some ethical tasks "dangerous, unpleasant, or otherwise hard", then, of course...

If you assign the unpleasant task of firing the employees on your list to a robot, it's not an ethical decision made by the robot. But I don't think you (a human being) would assign a robot to create that list. Or would you? Why?
 
My apologies, I was being imprecise. Asimov's robots don't make ethical decisions. They use the ethical decisions which have been programmed into them to make decisions on actions. If you decide to fire employees on a 'last in - first out' basis and then use the company computer to determine which employees were 'last in', then the computer (read: robot) is making the decision as to which employees to fire. The programmer (you) is making the ethical decision.
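For instance (a Python sketch; the names and dates are invented):
Code:
from datetime import date

# The human sets the ethical rule ('last in - first out');
# the computer just sorts and picks.
employees = [("Alice", date(1998, 3, 1)),
             ("Bob",   date(2003, 7, 15)),
             ("Carol", date(2001, 1, 9))]

def last_in_first_out(staff, cuts):
    """Return the 'cuts' most recently hired employees."""
    by_newest = sorted(staff, key=lambda e: e[1], reverse=True)
    return [name for name, hired in by_newest[:cuts]]

print(last_in_first_out(employees, 1))  # ['Bob'] - the machine picks who,
                                        # a human decided how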
The underlying ethics of the three laws is that using robots to kill humans is wrong. In the real world.......

Columb Healy
Living with a seeker after the truth is infinitely preferable to living with one who thinks they've found it.
 

Columb,

Wouldn't you want to check the list before the deed is done? Your best performer could be the last in, and an exception like that can't always be factored into the program.

But I want to say a few words about the programmers making ethical decisions. If you mean by that humans in general, I agree with you. That's what I was talking about. The ethical decisions are made by humans. The robots are taking instructions.

But if you mean that ethical decisions are made by programmers as a profession, I would like to differ.

I got to use this example now.

Years ago, for my diploma project, I made a database system which helped doctors select the best candidates for an organ transplant. Those people were all on a waiting list and had different urgency levels, besides all the other physical/immune parameters and health conditions. People could be more or less compatible with an organ, which raises the chances of the organ being accepted by the body, or incompatible altogether, which can present life-threatening complications. Even the most compatible person wouldn't be operated on during some other acute health problem, like flu, etc.

When organs suitable for transplantation come along, the decision has to be made very fast, as organs have a very limited life span while they are still usable, and some time is required to select the right person from the list and prepare them for surgery. What my system did was help doctors speed up the task of selecting a list of suitable people for the organ, in order of decreasing compatibility, by multiple criteria.

Did I, as a programmer, perform an ethical task? No (a creative one, yes). I maintain this, and my instructor/supervisor maintained it back then, when I asked ethical questions about my work. Who made the ethical decisions, then? The end users, the doctors, did. Even the doctors with whom I worked closely to create a comprehensive set of rules for this system (robot?), and who admitted that the system did what they needed and was of tremendous help to them, didn't use the resulting list as-is. The task of reviewing each and every patient for the final selection, the ethical task, was on them.

But in the end, yes, all creative, ethical and expert tasks were left to the humans. The robot, i.e. the program with the computer, was just the tool.
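The shape of that filter-and-rank step, as a Python sketch (this is not the actual system; the fields, weights and patients are all invented):
Code:
candidates = [
    # name, compatibility (0-1, 0 = incompatible), urgency (1-10), fit now?
    ("P1", 0.92, 7, True),
    ("P2", 0.00, 9, True),   # incompatible: life-threatening, excluded
    ("P3", 0.85, 9, False),  # acute illness (flu etc.): excluded for now
    ("P4", 0.70, 9, True),
]

def shortlist(people):
    """Drop unsuitable patients; rank the rest by compatibility, then urgency."""
    eligible = [p for p in people if p[1] > 0 and p[3]]
    return sorted(eligible, key=lambda p: (p[1], p[2]), reverse=True)

print(shortlist(candidates))  # P1 first, then P4
# The doctors still reviewed the list by hand: the ethical step stayed human.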

 
stella
I really need to read my posts more carefully, or possibly think more clearly.
Yes, what I meant is that humans make the ethical decisions. We programmers are simply following orders when we transfer those decisions into code. There are some ethical decisions as to what code we would write, but in the normal course of events I'm not going to come across many, if any, of those.

What gives Asimov's robot stories their edge is that the robots have to make complex decisions on actions based on simple, if powerful, ethical decisions programmed into them. For example, the ethical decision that harming humans is wrong has to be converted into a decision when choosing between two courses of action, each of which may harm humans. This leads to some good fiction, and I like the robot stories best of all his work.
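Reduced to a toy Python sketch (with invented numbers; the drama is precisely that real harm can't be scored this neatly):
Code:
def choose_action(options):
    """options: (action, estimated_harm) pairs. Pick the least harmful."""
    return min(options, key=lambda o: o[1])

dilemma = [("swerve left", 0.3), ("brake straight", 0.7)]
print(choose_action(dilemma))  # ('swerve left', 0.3)
# The harm estimates themselves embody value judgements no robot
# arrives at on its own; that gap is where Asimov's stories live.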

Back in the real world, my immediate feeling is that technology will never develop robots as described by Asimov. The 'do anything' humanoid-shaped robot will always be less efficient than the custom-built version, and the numerous robots we use today are all designed to do one job and one job only. As such, it is easier for the designers, or whoever makes the ethical decisions, to cover the range of decisions required and to pass back to humans those that are beyond it.

On a more cynical note, were Asimov-style robots ever produced outside of the utopian society described in the books, the first application would be to harm humans. The military would take one look and say 'we need lots of those, but without the first law'.

Columb Healy
Living with a seeker after the truth is infinitely preferable to living with one who thinks they've found it.
 
A robot that will obey anyone's orders is highly dangerous if it is also allowed to harm humans. I don't think even the military would want them - how is the robot supposed to know who is authorised? And would politicians let it be funded, given the risk of a coup?

In one Asimov story, there is a character who plans to conquer the universe using robotic spacecraft, which would fail to recognise humans as humans. One of various possible loopholes.

Anyway, I'd say that it was Asimov who was being realistic. Dystopians are mostly no more realistic than utopians, maybe less so. Most of the dream-technology of Wells' work has been realised, along with some extras. Most of the social evils that he wanted to fix with a dictatorial world-state have been cured more gently.

Our biggest problem currently in the West is excessive self-indulgence from a surplus of consumer goods. A new problem, and one we are just gradually overcoming.

A dystopian is someone who suffers from the future instead of enjoying it.

------------------
A view from the UK
 
A robot that will obey anyone's orders is highly dangerous if it is also allowed to harm humans. I don't think even the military would want them

And I thought harming humans was the whole point of the military... Silly me.

Anyway, I'd say that it was Asimov who was being realistic.

Actually, no. Asimov himself recognized he was engaged in speculative writing. If you want realistic science fiction, try Arthur C. Clarke.
 
The primary purpose of any military is to kill people and break things.

However, implicit in that definition is that a military kill selected people and break selected things.

No military in the world would have any use for a robot that could be turned on its creators.


Want the best answers? Ask the best questions!

TANSTAAFL!!
 
sleipnir said:
No military in the world would have any use for a robot that could be turned on its creators.

Weapons supplied to Iraq by the US were routinely used by Iraqis to shoot at Americans. You guys have an overdrawn view of robots. Remember that a robot is only a glamorized computer. Humanoid robots are at their best at birthday parties. The most efficient robots I have worked with have absolutely no need for inefficient human features.

We are working with robots this very minute. We have safety features in place to prevent injuries -- yes, to humans. We also try to keep our robots from hurting each other. They are too expensive to have to reprogram or rebuild. Whatever asimovian robot world you have in mind bears no present or future resemblance to anything practical.
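Those safety features are plain threshold checks, nothing positronic. A minimal Python sketch (all thresholds invented):
Code:
SAFE_ZONE_M = 2.0        # halt if a person is within 2 metres
MIN_SEPARATION_M = 0.5   # keep robots at least this far apart

def must_stop(person_distance_m, nearest_robot_m):
    """True if the robot should halt: a human is too close, or a
    collision with another (expensive) robot is imminent."""
    return (person_distance_m < SAFE_ZONE_M
            or nearest_robot_m < MIN_SEPARATION_M)

print(must_stop(1.5, 3.0))  # True: human inside the safety zone
print(must_stop(5.0, 0.3))  # True: about to hit another robot
print(must_stop(5.0, 3.0))  # False: clear to run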

Again, Asimov's robots do not approach reality at all. Those robot-rich worlds were a device to explore human dramas. Robots are already here, and are doing swimmingly well, thank you, without the so-called laws.
 
Now, seriously, let's take a look at the practicality of an asimovian robot on the battlefield.

As evidenced by the writer himself, those robots all have their Achilles heel. Now imagine unleashing an independently minded machine against a nation that has the capability of tweaking its inner workings.

A smart commander would demand assured control of his weapons. Enemy interference is always a concern. Having the enemy divert one or two guided missiles is one thing; but taking control of just one of these super duper asimovian robots would be a huge disaster.

By virtue of its own implied capabilities, an asimovian robot would not be a very good choice on the battlefield.
 
Dimandja said:
We have safety features in place to prevent injuries -- yes, to humans.

That sounds like the First Law to me.

And do the robots you work with have settings built into them that will, under normal circumstances, keep the robot from damaging itself? If so, you have a nascent Third Law.

The Second Law will deal as much with computer network security as it will with taking orders.
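One common way to read the Three Laws is as a strict priority ordering. A Python sketch of that reading (an interpretation for discussion, not anything from Asimov's text or a real safety system):
Code:
def permitted(action):
    """action: dict of booleans. Apply the Laws in priority order."""
    if action["harms_human"] or action["lets_human_come_to_harm"]:
        return False                    # First Law outranks everything
    if action["ordered_by_human"]:
        return True                     # Second Law: obey, First permitting
    return not action["harms_self"]     # Third Law: self-preservation last

print(permitted({"harms_human": False, "lets_human_come_to_harm": False,
                 "ordered_by_human": True, "harms_self": True}))
# True: an order outranks self-preservation, the Second/Third ordering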

An Asimovian robot doesn't have to be an android. Asimov himself explored this in some of his short stories when he dealt with the so-called Frankenstein Syndrome, where humans would not accept a humaniform robot. The solution, as I recall, was to create small, extremely special-purpose robots.

And I think that the U.S. military has decided there is a use for robots on the battlefield -- just not as weapons. That is, after all, the purpose of the DARPA Grand Challenge: to create a ground-running robot capable of navigating itself from point to point. There are lots of military applications where such a thing could be useful, particularly in areas where contamination makes it difficult, if not impossible, for humans to work.

Were I a military commander, I surely wouldn't want such a thing to be armed. But some form of Asimov's thoughts on the subject would still be useful.


Want the best answers? Ask the best questions!

TANSTAAFL!!
 
Dimandja said:
We have safety features in place to prevent injuries -- yes, to humans.
Isn't that the crux of the First Law?
Dimandja said:
We also try to keep our robots from hurting each other.
Isn't that the essence of the Third Law?

I understand that you are not intentionally or consciously trying to apply these laws, or to be influenced in any way by the Laws, but it sounds to me like there is a lot more resemblance than you think.

Good Luck
--------------
To get the most from your Tek-Tips experience, please read FAQ181-2886
As a circle of light increases so does the circumference of darkness around it. - Albert Einstein
 
sleipnir said:
And do the robots you work with have settings built into them that will, under normal circumstances, keep the robot from damaging itself? If so, you have a nascient Third Law.
and...
CajunCenturion said:
I understand that you are not intentionally or consciously trying to apply these laws, or to be influenced in any way by the Laws, but it sounds to me like there is a lot more resemblance than you think.
Of course you can find glimpses of any laws everywhere you look. The following quote addresses the real human approach to all this.
columb said:
On a more cynical note, were Asimov-style robots ever produced outside of the utopian society described in the books, the first application would be to harm humans. The military would take one look and say 'we need lots of those, but without the first law'.
 
That is certainly one valid opinion, but not necessarily the opinion of all. I don't believe that harming humans would be the first application of robotics. I'm not sure that we'd ever see that as a front line (no pun intended) function for robots.

If you study the types of robotic endeavors currently under development, at least here for the US military, you'll find that most of them are for special-purpose functions, such as the Great Race, minesweeping, and other such tasks, many logistical in nature. Even the UAVs currently in use, which do have ordnance-delivery capabilities, require human interaction to exercise that function. Just as is the case with many EOD units today, both civilian and military, the robot is put in harm's way to protect the human element.

Good Luck
--------------
To get the most from your Tek-Tips experience, please read FAQ181-2886
As a circle of light increases so does the circumference of darkness around it. - Albert Einstein
 
CC said:
If you study the types of robotic endeavors currently under development, at least here for the US military, you'll find that most of them are for special-purpose functions, such as the Great Race, minesweeping, and other such tasks, many logistical in nature.
I made this point early on. I think we are beginning to repeat ourselves. An indication that we have reached the end of this thread?...

 
I thought you had said that, Dimandja, and to that point we are in agreement.

But when you quoted columb, saying that it "addresses the real human approach to all this", I was a bit confused, because I didn't think that was either the approach you are taking or your feeling with respect to the application of robotics.

Good Luck
--------------
To get the most from your Tek-Tips experience, please read FAQ181-2886
As a circle of light increases so does the circumference of darkness around it. - Albert Einstein
 