



A new Navy-funded report warns against a hasty deployment of war robots and urges programmers to include ethics subroutines -- a warrior code of sorts. The alternative, they say, is the possibility of a robotic atrocity akin to the Terminator or other sci-fi movies.  (Source: Warner Brothers)
Robots must learn to obey a warrior code, but growing intelligence may make it increasingly difficult to keep the robots from turning on their masters

Robots gone rogue, killing their human masters, is rich science fiction fodder, but could it become reality?  Some researchers are beginning to ask that question as artificial intelligence advances continue and the world's high-tech nations begin to deploy war robots to the battlefront.  Currently, the U.S. armed forces use many robots, but they all ultimately have a human behind the trigger.  However, there are plans to develop and deploy fully autonomous systems as the technology improves.

Some mistakenly believe that such robots would only be able to operate within a defined set of behaviors.  Patrick Lin, the chief compiler of a new U.S. Navy-funded report, explains, "There is a common misconception that robots will do only what we have programmed them to do.  Unfortunately, such a belief is sorely outdated, harking back to a time when . . . programs could be written and understood by a single person."

The new report points out that the sheer size of artificial intelligence projects will likely make their code impossible to fully analyze and dissect for possible dangers.  With hundreds of programmers working on millions of lines of code for a single war robot, says Dr. Lin, no one has a clear understanding of what is going on, at a small scale, across the entire code base.

He says the key to avoiding robotic rebellion is to include "learning" logic that teaches the robot the rights and wrongs of ethical warfare.  This logic would be mixed with traditional rules-based programming.
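
The report does not publish any code, but the layered design Lin describes can be sketched.  In the illustrative Python below, every name (Target, LearnedEthicsModel, the thresholds) is an assumption made for the example; the point is only that the hard-coded rules retain veto power and the learned component can only make an engagement decision more conservative, never less.

from dataclasses import dataclass

@dataclass
class Target:
    is_combatant: bool         # output of an upstream classifier
    confidence: float          # classifier confidence, 0.0 - 1.0
    near_protected_site: bool  # e.g. hospital or school nearby

class LearnedEthicsModel:
    """Stand-in for a trained model that scores proposed engagements."""
    def score(self, target: Target) -> float:
        # A real system would use a trained model; this is a stub.
        return target.confidence if target.is_combatant else 0.0

HARD_RULES = [
    lambda t: t.is_combatant,             # never engage noncombatants
    lambda t: t.confidence >= 0.95,       # require high identification confidence
    lambda t: not t.near_protected_site,  # never engage near protected sites
]

def may_engage(target: Target, model: LearnedEthicsModel) -> bool:
    # Fixed rules are checked first and cannot be overridden by the model.
    if not all(rule(target) for rule in HARD_RULES):
        return False
    # The learned component can only tighten the decision further.
    return model.score(target) >= 0.9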

The new report looks at many issues surrounding the field of killer robots.  In addition to code malfunction, another potential threat would be a terrorist attack that reprograms the robots, turning them on their owners.  One tricky issue discussed is who would take the blame for a robotic atrocity -- the robot, the programmers, the military, or the U.S. President.

The Ethics and Emerging Technology department of California State Polytechnic University created the report for the U.S. Navy's Office of Naval Research.  It warns the Navy about the dangers of premature deployment or complacency on potential issues.  Congress has mandated that a "deep strike" unmanned aircraft must be operational by 2010, and that one third of ground combat vehicles must be unmanned by 2015.

The report warns, "A rush to market increases the risk for inadequate design or programming. Worse, without a sustained and significant effort to build in ethical controls in autonomous systems . . . there is little hope that the early generations of such systems and robots will be adequate, making mistakes that may cost human lives."

Simple laws of ethics, such as Isaac Asimov's three laws of robotics, the first of which forbids robots from harming humans, will not be sufficient, say the report's authors.  War robots will have to kill, but they will have to understand the difference between enemies and noncombatants.  Dr. Lin describes the challenge, stating, "We are going to need a code.  These things are military, and they can’t be pacifists, so we have to think in terms of battlefield ethics. We are going to need a warrior code."
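
To make that gap concrete, here is a toy sketch, not taken from the report, of the kind of distinction a battlefield rule set has to express that a blanket "never harm a human" rule cannot.  The categories and function below are assumptions chosen purely for illustration.

from enum import Enum, auto

class TargetStatus(Enum):
    COMBATANT = auto()        # enemy fighter taking part in hostilities
    NONCOMBATANT = auto()     # civilian
    HORS_DE_COMBAT = auto()   # surrendering, wounded, or captured fighter
    UNKNOWN = auto()          # classification failed or is ambiguous

def engagement_permitted(status: TargetStatus) -> bool:
    # Asimov's first law would return False for every branch here; a
    # "warrior code" instead permits only positively identified combatants,
    # and ambiguity defaults to not engaging.
    return status is TargetStatus.COMBATANT

assert engagement_permitted(TargetStatus.COMBATANT)
assert not engagement_permitted(TargetStatus.UNKNOWN)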

The U.S. Army had a scare earlier this year when a software malfunction caused war robots deployed in the field to aim at friendly targets.  While the humans still had control of the trigger, the incident highlighted the challenges a fully autonomous system would face.  The offending robots were serviced and are still deployed in Iraq.



Comments

So...keep the human behind the trigger.
By Schrag4 on 2/17/2009 10:25:09 AM , Rating: 2
What's wrong with having humans still ultimately decide who lives and dies, but let the bots be in the line of fire? We don't have to give these bots the responsibility of deciding whether or not to pull the trigger.

Shoot, we could even develop a system where we humans basically tag 20 people out of a group of 1000 to kill, and the bots, on command, kill all 20 nearly simultaneously. The bots wouldn't have to decide to pull the trigger on any of those people, but they'd still be able to get the job done in an extremely efficient manner. Why do they have to have judgment built in?

I suppose even the system I propose is a slippery slope...
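
A minimal sketch of the designate-then-execute scheme described in the comment above, under the assumption that each robot keeps a list of operator-approved target IDs and engages nothing else.  Every name here is hypothetical; this is not any fielded system.

class DesignationList:
    """Targets an authorized human operator has explicitly marked."""
    def __init__(self):
        self._approved = set()

    def designate(self, target_id, operator_id):
        # A real system would authenticate operator_id before accepting this.
        self._approved.add(target_id)

    def is_approved(self, target_id):
        return target_id in self._approved

def engage_on_command(detected_ids, designations):
    # The robot never selects targets itself; it only filters what it can
    # see against the human-approved list.
    return [t for t in detected_ids if designations.is_approved(t)]

# Example: the operator tags two of three detected targets.
tags = DesignationList()
tags.designate("T-017", operator_id="op-1")
tags.designate("T-204", operator_id="op-1")
assert engage_on_command(["T-017", "T-099", "T-204"], tags) == ["T-017", "T-204"]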




RE: So...keep the human behind the trigger.
By Steve1981 on 2/17/2009 12:17:23 PM , Rating: 2
quote:
What's wrong with having humans still ultimately decide who lives and dies, but let the bots be in the line of fire?


Because in that situation you have to be able to communicate effectively with the robots. That communication can be jammed, rendering your robots worthless, or worse, it can be hacked and your robots can turn against you.

Against the advanced forces of the Taliban, it isn't a big deal. In a fight with someone a little more sophisticated, it can pose a problem.
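
The hijacking half of that concern maps onto a standard countermeasure: authenticate every command.  The sketch below is illustrative only and glosses over key management entirely; it just shows that a forged command without the shared key fails verification, so jamming can deny service but cannot turn the robots against their owners.

import hashlib
import hmac
import os

SHARED_KEY = os.urandom(32)  # provisioned to the robot before deployment

def sign_command(command: bytes, key: bytes = SHARED_KEY) -> bytes:
    return hmac.new(key, command, hashlib.sha256).digest()

def accept_command(command: bytes, tag: bytes, key: bytes = SHARED_KEY) -> bool:
    # Constant-time comparison; reject anything that does not verify.
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

cmd = b"return-to-base"
assert accept_command(cmd, sign_command(cmd))      # legitimate command accepted
assert not accept_command(b"fire", b"\x00" * 32)   # forged command rejected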


RE: So...keep the human behind the trigger.
By Schrag4 on 2/17/2009 1:07:24 PM , Rating: 2
I agree, the main concern would be preventing the enemy from taking control of the robots. But I still think humans should be pulling the trigger here. Not only that, but the rewards of using these bots would likely far outweigh the risk of them being taken over. Once one is compromised, shut them all down until the proper modifications can be made. Then use them again until one gets compromised, shut them down, etc., etc. You could use these with humans controlling them without being stupid and reckless about it.


RE: So...keep the human behind the trigger.
By mindless1 on 2/17/2009 10:04:09 PM , Rating: 2
"OK mr enemy, don't shoot, I'll just be collecting my robots and will meet you back here tomorrow after lunch".


RE: So...keep the human behind the trigger.
By Schrag4 on 2/18/2009 9:55:13 AM , Rating: 2
Obviously you would DESTROY the handful that you had in combat at the moment and STOP DEPLOYING them until you got the issue resolved. It's not that complicated. Collect them? Again, don't be stupid and reckless by deploying all at once, only a handful at a time. Sheesh...


By mindless1 on 2/18/2009 8:03:12 PM , Rating: 2
You can't just go and destroy enough robots to have been effective when they cost millions each, then be criticized by the tree-huggers for littering.

Plus, if you blow up a group of self-learning robots, who do you think the rest of the robots will see as the real enemy?


By Fritzr on 2/18/2009 11:48:49 AM , Rating: 2
That's the system they have today. The only thing you have added is the ability to select a target and delay execution.

You hear about the airborne drones today. There are also ground drones, small RC or cabled robots that can carry weapons.

