



A new Navy-funded report warns against a hasty deployment of war robots, and urges programmers to include ethics subroutines -- a warrior code of sorts. The alternative, they say, is the possibility of a robotic atrocity akin to those depicted in the Terminator and other sci-fi movies.  (Source: Warner Brothers)
Robots must learn to obey a warrior code, but as their intelligence grows, keeping them from turning on their masters may become increasingly difficult

Robots going rogue and killing their human masters is rich science fiction fodder, but could it become reality?  Some researchers are beginning to ask that question as artificial intelligence continues to advance and the world's high-tech nations begin to deploy war robots to the battlefront.  Currently, the U.S. armed forces use many robots, but all of them ultimately have a human behind the trigger.  However, there are plans to develop and deploy fully autonomous systems as the technology improves.

Some mistakenly believe that such robots would only be able to operate within a defined set of behaviors.  Patrick Lin, the chief compiler of a new U.S. Navy-funded report, explains, "There is a common misconception that robots will do only what we have programmed them to do.  Unfortunately, such a belief is sorely outdated, harking back to a time when . . . programs could be written and understood by a single person."

The new report points out that the scale of artificial intelligence projects will likely make their code impossible to fully analyze and dissect for possible dangers.  With hundreds of programmers working on millions of lines of code for a single war robot, says Dr. Lin, no one has a clear understanding of what is going on, at a small scale, across the entire code base.

He says the key to avoiding robotic rebellion is to include "learning" logic that teaches the robot the rights and wrongs of ethical warfare.  This logic would be mixed with traditional rules-based programming.
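
As a rough illustration of what such a hybrid might look like in practice (the classes, fields, and thresholds below are invented for this sketch and do not come from the report), a fixed rules layer can retain a veto over whatever the learned component concludes:

# Hypothetical sketch only: a learned "ethics" component scores a possible
# engagement, but fixed, hand-written rules keep the final veto.
from dataclasses import dataclass

@dataclass
class Target:
    is_combatant: bool     # output of (imperfect) perception
    near_civilians: bool
    confidence: float      # identification confidence, 0.0 to 1.0

class LearnedEthicsModel:
    # Stand-in for the trained "learning" logic the report calls for.
    def permissibility(self, target: Target) -> float:
        score = 1.0 if target.is_combatant else 0.0
        if target.near_civilians:
            score *= 0.2   # heavily penalize collateral risk
        return score * target.confidence

class RuleBasedGate:
    # Traditional rules-based layer; the learned model can never override it.
    def allows(self, target: Target) -> bool:
        if not target.is_combatant:
            return False   # never engage noncombatants
        if target.confidence < 0.9:
            return False   # insufficient identification
        return True

def engagement_permitted(target: Target) -> bool:
    # The learned score matters only when the hard rules already allow firing.
    return RuleBasedGate().allows(target) and \
           LearnedEthicsModel().permissibility(target) > 0.5

In a design like this, the learned logic can only make the system more conservative than the hard-coded rules, never less.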

The new report looks at many issues surrounding the field of killer robots.  In addition to code malfunction, another potential threat would be a terrorist attack that reprograms the robots, turning them on their owners.  One tricky issue discussed is the question of who would take the blame for a robotic atrocity -- the robot, the programmers, the military, or the U.S. President.

The Ethics and Emerging Technology department of California State Polytechnic University prepared the report for the U.S. Navy's Office of Naval Research.  It warns the Navy about the dangers of premature deployment and of complacency about potential issues.  The U.S. Congress has mandated that a "deep strike" unmanned aircraft must be operational by 2010, and that one third of ground combat vehicles must be unmanned by 2015.

The report warns, "A rush to market increases the risk for inadequate design or programming. Worse, without a sustained and significant effort to build in ethical controls in autonomous systems . . . there is little hope that the early generations of such systems and robots will be adequate, making mistakes that may cost human lives."

Simple laws of ethics, such as Isaac Asimov's three laws of robotics, the first of which forbids robots from harming humans, will not be sufficient, say the report's authors.  War robots will have to kill, but they will also have to understand the difference between enemies and noncombatants.  Dr. Lin describes the challenge: "We are going to need a code.  These things are military, and they can’t be pacifists, so we have to think in terms of battlefield ethics. We are going to need a warrior code."

The U.S. Army had a scare earlier this year when a software malfunction caused war robots deployed in the field to aim at friendly targets.  While the humans still had control of the trigger, the incident highlighted the challenges a fully autonomous system would face.  The offending robots were serviced and are still deployed in Iraq.



Comments



RE: What a load of crap
By MozeeToby on 2/17/2009 1:51:36 PM , Rating: 2
Very, very true. Our military robots aren't just going to 'wake up' one day and decide to start killing everyone.

I could imagine, though, writing a learning algorithm to help the robot identify threats. If the robots could communicate their improvements to the algorithm (based on threats detected and destroyed, for instance), it would only take one robot learning the wrong identifiers to bring down the whole network of robots.

Of course, even so I would think direct commands would still work. As long as there is a low-level command to shut down the system (a command that doesn't go through the threat detection system), there shouldn't be a problem.
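
A tiny sketch of that idea (the command names and dispatcher are made up for illustration): the shutdown path is hard-wired and checked before anything learned ever sees the message.

# Illustrative only: a fixed dispatcher handles SHUTDOWN before any command is
# routed through adaptive/learned behavior, so bad learning can't intercept it.
LEARNED_HANDLERS = {}   # filled in by the adaptive threat-detection code

def handle_command(command: str) -> str:
    if command == "SHUTDOWN":
        return "halting all actuators"          # hard-wired, non-learned path
    handler = LEARNED_HANDLERS.get(command)     # everything else may adapt
    return handler() if handler else "unknown command"

print(handle_command("SHUTDOWN"))   # works no matter what the robot has learned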


RE: What a load of crap
By cfaalm on 2/17/2009 5:01:05 PM , Rating: 2
Since this is so close to SF: If MS can build a firewall, then the robot could do that too, at some point in time ;-)

If it all depends on code, how do you prevent the robot or your enemy from writing or adapting code to defeat a remote shutdown? Could your enemy hack into the robot's system? You want it to be open to you, but sealed tight against your enemies and against the robot itself. To an intelligent being, that would feel like a mind prison: "We will tell you what to do, don't make up your own mind."


RE: What a load of crap
By MozeeToby on 2/17/2009 5:19:18 PM , Rating: 2
I didn't mean for my post to come off as sci-fi, so let me explain my thoughts more thoroughly. I can imagine writing software that allows a swarm of robots to communicate with each other such that each robot can send information about what was happening around it when it is destroyed.

This information could be used to build a set of rules about what is and isn't a dangerous situation. If you allow the robots a finite list of behaviors (flee, attack, take cover, etc.), they could try new things depending on the situation and record and/or broadcast the results of the engagement. Things that work get used more often; things that don't work get used less often.

Now all it takes is one bug in the program for a robot to identify civilians as enemies. Since every time the robot attacks an unarmed civilian it will probably win, this behavior could quickly become dominant, spreading like a virus to the other robots in the group.
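
As a toy illustration of how that could play out (everything below is invented pseudocode made runnable, not a real system): each robot reinforces behaviors that "win" and shares its weights with the swarm, so one robot's misidentification bug can tilt everyone toward attacking.

import random

BEHAVIORS = ["flee", "attack", "take_cover"]

class SwarmRobot:
    def __init__(self):
        self.weights = {b: 1.0 for b in BEHAVIORS}

    def choose(self):
        # Pick a behavior with probability proportional to its weight.
        return random.choices(BEHAVIORS,
                              weights=[self.weights[b] for b in BEHAVIORS])[0]

    def record_outcome(self, behavior, succeeded):
        # "Wins" reinforce a behavior; losses suppress it.
        self.weights[behavior] *= 1.5 if succeeded else 0.7

    def broadcast(self, swarm):
        # Naive sharing: every robot averages in this robot's weights.
        for other in swarm:
            for b in BEHAVIORS:
                other.weights[b] = (other.weights[b] + self.weights[b]) / 2

swarm = [SwarmRobot() for _ in range(5)]
buggy = swarm[0]                 # mislabels civilians as enemies, so it always "wins"
for _ in range(20):
    buggy.record_outcome("attack", succeeded=True)
buggy.broadcast(swarm)
print(swarm[3].weights)          # attack now dominates on a robot that never had the bug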

What won't happen, though, is the robot changing its own code or suddenly learning subjects that it doesn't have algorithms to learn. The robot won't rewrite its basic command system because the command system isn't designed to learn.

Basically, the robot is closed out of the command system because there isn't an algorithm that allows that behavior to be edited. The enemy is closed out because the commands would be sent via encrypted signals (no way, short of treason, will those be broken).
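
A small sketch of what being "closed out by encrypted signals" could mean in practice (standard-library HMAC, with a made-up key and command): a command is only obeyed if it carries an authentication tag that only the operators can produce.

import hmac, hashlib

SECRET_KEY = b"replace-with-real-key-material"   # hypothetical placeholder key

def sign(command: bytes) -> bytes:
    # Operators compute the tag with the shared secret before transmitting.
    return hmac.new(SECRET_KEY, command, hashlib.sha256).digest()

def accept(command: bytes, tag: bytes) -> bool:
    # Constant-time comparison; a forged or tampered command is simply ignored.
    return hmac.compare_digest(sign(command), tag)

msg = b"RETURN_TO_BASE"
print(accept(msg, sign(msg)))        # True: authentic operator command
print(accept(msg, b"\x00" * 32))     # False: forgery without the key is rejected

A real link would also need replay protection and careful key handling, which is where much of the practical difficulty lies.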


RE: What a load of crap
By croc on 2/17/2009 5:42:34 PM , Rating: 2
This whole topic gives me a new meaning for 'war-driving'...


RE: What a load of crap
By mindless1 on 2/17/2009 9:47:26 PM , Rating: 2
No way short of treason... because when you're seconds away from being tortured to death by the enemy, or THEIR robots, it would be unheard of to give up any info to save your own life? Your main concern isn't that the other captives will give up the info anyway, it's that if you survive you might be found guilty of treason later (if the robots don't kill everyone anyway)?


RE: What a load of crap
By mindless1 on 2/17/2009 9:44:06 PM , Rating: 2
If they do what is suggested, yes, yes they will.

What is ultimately the most fair and ethical thing for the war robot to do? Kill all soldiers, anyone who poses any kind of potential threat to others by supporting, carrying, or being in any way involved with weapons.

The only /logical/ thing a robot could do is exterminate all who seek to engage in war, then keep warlike movements from gaining sufficient strength in the future.

If there is a low-level command to shut down the system, aren't we opening up a huge security hole for the enemy robots to capture and exploit? Use very deep encryption, perhaps? Unique identifiers and authentication keys adding to the complexity of the system, so that even fewer of those deploying, using, and designing them know what to do when things go wrong?

After all, if there's one thing we always have plenty of in a war zone, it's robotic engineers that can take out haywire killer robots.


RE: What a load of crap
By MrPoletski on 2/18/2009 8:45:53 AM , Rating: 2
Cyberdyne Systems was infected with a virus that gave their robots autonomous thought.

While a virus giving robots autonomous thought is fanciful, the idea of these military robots contracting a virus is absolutely not.


"Death Is Very Likely The Single Best Invention Of Life" -- Steve Jobs

Related Articles
Dawn of the Drones
June 6, 2008, 6:15 PM
War Robots Still in Iraq
April 17, 2008, 10:20 AM
Can Robots Commit War Crimes?
February 29, 2008, 2:37 PM













botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki