
Soldier launching UAV  (Source: Sgt. 1st Class Michael Guillory, U.S. Army)
Robots with ethics could one day be used on the battlefield

The United States military continues to invest heavily in robotic technology, and newer generations of robotic soldiers may be programmed to understand battlefield ethics.

According to an article in the Army Times, the so-called 'ethical robots' would follow international laws.  Ronald Arkin, from the Mobile Robot Laboratory at the Georgia Institute of Technology, wrote a book to discuss the future of robotics.

In "Governing Lethal Behavior in Autonomous Robots," Arkin claims that robots, if programmed correctly, have numerous advantages over human ground troops. Robots are emotionless, expendable, and can be customized for specific missions.

It's possible the robots could be taught remorse, compassion and guilt, but exactly which senses the robots would be programmed with is still unknown. Furthermore, depending on the determined level of guilt and the mission being carried out, the firepower and effectiveness of the weapons used would change.

The robots could also be used to monitor soldiers to ensure international treaties are being followed by U.S. and coalition ground troops.  Although many soldiers don't want to be monitored in such an intrusive manner, several high-profile cases of abuse and murder have further blemished the military's image among locals in Iraq.  

If funding is properly allocated for the research, the technology could be available in 10 to 20 years. As the U.S. continues to fight wars using enhanced technology, unmanned aerial vehicles (UAVs) and other unmanned resources have become popular alternatives to launching manned missions -- and their use is expected to increase further in the future.



Oh #$*$!
By Frallan on 12/30/2009 7:19:45 AM , Rating: 2
Fine, as long as the robots are not truly autonomous but require a human decision on when to pull the trigger.

Asimov had the right idea regarding robots. The three robotic laws should be adhered to, or we will have trouble in the future with SW/HW glitches that activate the wrong response from robots at the wrong time in the wrong place.

In spite of the tone of my post, I wish you all a happy new year.


RE: Oh #$*$!
By TSS on 12/30/2009 8:10:36 AM , Rating: 2
I believe I, Robot showed that even if you follow the 3 laws, it's still possible to enslave mankind.

This article is very interesting, though. Consider this: it's about a robot with ethics, and robots are emotionless.

Aren't ethics based on emotions?

They certainly aren't based on logic. For instance, there are so many ways to brutally murder or maim somebody using a bullet in an opera of pain. Yet it is considered more ethical to shoot somebody than to use white phosphorus. Logic cannot explain that, since in both cases people end up dead (and efficiently so), and logic tells me that's what went wrong: that they died, not how they died. If they were supposed to die because they were your enemy in a war, then what's the problem?

Now, how can robots decide what's right and what's wrong (ethics) without the tools to make such decisions (emotions)? We can't tell them what to do in every possible situation.

Here's what'll happen: logic dictates it's ethical to kill any non-ethical creatures, as they show no respect for life, thus are a threat to life, thus do not deserve life. If a human's standard of ethics is lower than the robot's standard of ethics, the human deserves to die as a threat to society and to ethics in general. The proverbial "if you could kill Hitler before WWII, would you?"

But not to worry. With current robot technology as it is, it'll be decades before the first decent attempt at an AI is made. It's not that I don't believe in the advance of science; it's simply that people in the '50s also thought we'd have flying cars, space hotels, and a lunar colony by now. I'm not getting my hopes up :P

Happy new year and a good decade to you too sir ^^

RE: Oh #$*$!
By Moishe on 12/30/2009 9:56:13 AM , Rating: 2
You're right... getting a robot that doesn't rely on a human controller is very far away.

RE: Oh #$*$!
By Don Tonino on 12/30/2009 12:38:45 PM , Rating: 2
Aren't ethics based on emotions?

I'd say that a successful ethics MUST not be based on emotions.

RE: Oh #$*$!
By Shadowself on 12/30/2009 1:12:43 PM , Rating: 1
I, Robot, the movie, was virtually nothing like the collection of short stories by Asimov. The stories grapple with the implications of each law, including one about a "robot" that gains the ability to be ethical and emotional and kills itself because of internal conflicts among the three laws.

To eliminate the problem posed in the recent movie of the same name, it has long been known that there needs to be a "Zeroth Law": "A robot shall not, through its action or inaction, interfere with a human being's control over that same human being's actions with regard to that same human being."

Sometimes it has been called the "self-determination" law, i.e., a robot cannot interfere with a person's right of self-determination. It can interfere to save life/health/etc. when one person tries to harm another, but cannot interfere when a person tries to harm himself or do anything else the robot might consider adverse to that human.

The original laws are then modified to say,

"A robot may not injure a human being or, through inaction, allow a human being to come to harm, except where such action or inaction would conflict with the Zeroth Law."
"A robot must obey any orders given to it by human beings, except where such orders would conflict with the Zeroth or First Law."
"A robot must protect its own existence as long as such protection does not conflict with the Zeroth, First, or Second Law."

A clear example: as the original three laws stand, a robot must do everything in its power to keep a person from committing suicide. The addition of the Zeroth Law would allow a person to commit suicide with the robot doing nothing to stop it. It's simple self-determination.
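The priority ordering among the four laws could be sketched roughly as follows (an illustrative Python sketch only; the predicate names and the dictionary representation of an "action" are hypothetical, not anything from Asimov or from real robotics software):

```python
# Illustrative sketch of the four-law priority check described above.
# All predicates and the action representation are hypothetical.

def violates_zeroth(action):
    # Would the action override a human's self-determination?
    return action.get("overrides_self_determination", False)

def violates_first(action):
    # Would the action injure a human, or allow one to come to harm?
    return action.get("harms_human", False)

def violates_second(action):
    # Would the action disobey an order given by a human?
    return action.get("disobeys_order", False)

def action_permitted(action):
    """Evaluate the laws in strict priority: Zeroth > First > Second.

    The Third Law (self-preservation) never vetoes an action; it only
    applies once the higher-priority laws are satisfied, so it is
    omitted from the check.
    """
    return not (violates_zeroth(action)
                or violates_first(action)
                or violates_second(action))

# The suicide example: restraining the person overrides their
# self-determination, so the Zeroth Law forbids that intervention,
# while standing by violates nothing.
restrain = {"overrides_self_determination": True}
stand_by = {}
```

Under the original three laws there would be no `violates_zeroth` check, and the First Law's inaction clause would compel the robot to intervene.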

Personally, I strongly support implementing all four laws in any autonomous robotic system. I see no reason to forgo the four laws in favor of an "ethics engine" of some sort. Sticking with the four laws would obviate any possibility of a robot killing because it "thought" it had a higher level of ethics than you did.

RE: Oh #$*$!
By Fritzr on 12/30/2009 4:06:12 PM , Rating: 2
Asimov did add a Zeroth Law in his later books, where he merged the Foundation universe with his Robots universe. In the trilogy telling the story of the creation of the Foundations, robots ARE allowed to kill individuals in order to prevent harm to humanity as a whole.

Isaac Asimov's Zeroth Law of Robotics: A robot may not harm humanity, or through inaction allow humanity to come to harm.

In a wartime situation this law would be extremely dangerous for the command structure, as the robot could decide that a successful mission would cause greater harm to humanity, and that assassinating its own chain of command would be the greater good. This theme is what the movie I, Robot explored by having the robots decide that preventing humans from harming themselves or others justified occasional harm to a few.

The thought comes to us from Caiaphas, the high priest mentioned in the Gospel of John. In John 11:49-50 the Apostle John wrote, "And one of them, named Caiaphas, being the high priest that same year, said unto them, Ye know nothing at all, nor consider that it is expedient for us, that one man should die for the people, and that the whole nation perish not."

RE: Oh #$*$!
By Lerianis on 12/31/2009 1:10:56 AM , Rating: 2
The fact is that most of the time, the robot would be right. VIKI in the film I, Robot was actually quite SANE when she said that "the ends justify the means." That's the problem.

Some very sane people (hyper-sane, as some call them) come to the conclusion that they have the right to dictate other people's lives. That is the same problem we have with the religioloonies using "god" to justify it, and the socialists using "the good of mankind" to justify it.

RE: Oh #$*$!
By Lerianis on 12/31/2009 1:00:18 AM , Rating: 2
Actually, ethics is NOT based on emotions.

Ethics is based on simple rules that humans make complicated with their emotions.

1. Do not kill unless it is your last choice between you and death from another person killing you or someone else directly.
2. Directly means exactly that: pointing a gun at you and getting ready to pull the trigger or waving a knife at you/trying to stab you, or something equivalent in immediacy to that.

If everyone would adhere to those two rules, stop bringing their "god" myth into this, and realize that we SHOULD NOT KILL OTHER HUMANS, PERIOD (not even by "executions," which only demean the society that carries them out), things would be a lot better.

RE: Oh #$*$!
By nafhan on 12/30/2009 10:03:20 AM , Rating: 3
I'm of the opinion that the "Three Laws of Robotics" are a little too vague and simplistic to be used for anything more than a programming guideline. Go ahead and define a human, how to identify one, and - most importantly - what "injuring" or harming a human (especially through inaction) means.
Anyway, the three laws will never be implemented even if they could be. There will be at least one country that won't follow the laws with its robots, and once someone breaks them, everyone who doesn't want to be at a huge disadvantage will follow. The best we can hope for is robots fighting robots! :)

RE: Oh #$*$!
By Fritzr on 12/30/2009 4:11:26 PM , Rating: 2
Most of Asimov's Robot stories dealt with the gray areas where the robot was damned if it followed the law and damned if it failed to follow the law.

One of the detective stories featured a catatonic robot...its arm was the murder weapon. For a basic grounding in what can go wrong, check out Asimov's Robot books. If your library doesn't have them, they can get them on interlibrary loan :)

RE: Oh #$*$!
By Lerianis on 12/31/2009 1:02:36 AM , Rating: 2
They are good STORIES, but the fact is that in a case where the robot was "damned if they do, damned if they don't," the robot could simply be programmed to SHUT DOWN for good, or to do nothing.


Copyright 2016 DailyTech LLC.