
A new Navy-funded report warns against a hasty deployment of war robots and urges programmers to include ethics subroutines -- a warrior code of sorts. The alternative, they say, is the possibility of a robotic atrocity akin to the Terminator or other sci-fi movies.  (Source: Warner Brothers)
Robots must learn to obey a warrior code, but as their intelligence increases, keeping them from turning on their masters may become increasingly difficult

Robots gone rogue and killing their human masters is rich science fiction fodder, but could it become reality?  Some researchers are beginning to ask that question as advances in artificial intelligence continue and the world's high-tech nations begin to deploy war robots to the battlefront.  Currently, the U.S. armed forces use many robots, but all of them ultimately have a human behind the trigger.  However, there are many plans to develop and deploy fully autonomous systems as the technology improves.

Some mistakenly believe that such robots would only be able to operate within a defined set of behaviors.  Patrick Lin, the chief compiler of a new U.S. Navy-funded report, explains: "There is a common misconception that robots will do only what we have programmed them to do.  Unfortunately, such a belief is sorely outdated, harking back to a time when . . . programs could be written and understood by a single person."

The new report points out that the size of artificial intelligence projects will likely make their code impossible to fully analyze and dissect for possible dangers.  With hundreds of programmers working on millions of lines of code for a single war robot, says Dr. Lin, no one has a clear understanding of what is going on, at a small scale, across the entire code base.

He says the key to avoiding robotic rebellion is to include "learning" logic, which teaches the robot the rights and wrongs of ethical warfare.  This logic would be mixed with traditional rules-based programming.
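The report does not describe an implementation, but the hybrid Dr. Lin sketches -- hard-coded rules layered over a learned component -- might look something like the following. Every name, threshold, and field here is a hypothetical illustration, not anything from the report.

```python
# Hypothetical sketch of mixing rules-based programming with a
# learned component: fixed rules act as a veto layer, and the
# learned classifier is only consulted when no rule forbids action.

LEARNED_THRESHOLD = 0.95  # assumed confidence cutoff, purely illustrative


class EthicsGate:
    """Rule-based veto layer wrapped around a learned classifier."""

    def __init__(self, classifier):
        # classifier: callable mapping a target description to a
        # confidence score in [0, 1] (the "learning" half).
        self.classifier = classifier

    def may_engage(self, target):
        # Hard rules are evaluated first; any violation vetoes engagement
        # regardless of what the learned component says.
        if target.get("surrendering"):
            return False
        if target.get("zone") == "protected":  # e.g., a hospital
            return False
        # Only now is the learned component consulted.
        confidence = self.classifier(target)
        return confidence >= LEARNED_THRESHOLD


# Usage with a stand-in classifier:
gate = EthicsGate(lambda t: 0.99 if t.get("armed") else 0.1)
print(gate.may_engage({"armed": True, "surrendering": False, "zone": "open"}))  # True
print(gate.may_engage({"armed": True, "zone": "protected"}))                    # False
```

The design point is that the rules side stays auditable even when the learned side is not, which is one answer to the code-base-too-large-to-analyze problem raised above.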

The new report looks at many issues surrounding the field of killer robots.  In addition to code malfunction, another potential threat would be a terrorist attack which reprogrammed the robots, turning them on their owners.  And one tricky issue discussed is the question of who would take the blame for a robotic atrocity -- the robot, the programmers, the military, or the U.S. President.

The Ethics and Emerging Technology department of California State Polytechnic University created the report for the U.S. Navy's Office of Naval Research.  It warns the Navy about the dangers of premature deployment and of complacency on potential issues.  Congress has mandated that a "deep strike" unmanned aircraft be operational by 2010, and that one third of ground combat vehicles be unmanned by 2015.

The report warns, "A rush to market increases the risk for inadequate design or programming. Worse, without a sustained and significant effort to build in ethical controls in autonomous systems . . . there is little hope that the early generations of such systems and robots will be adequate, making mistakes that may cost human lives."

Simple laws of ethics, such as Isaac Asimov's three laws of robotics, the first of which forbids robots from harming humans, will not be sufficient, say the report's authors.  War robots will have to kill, but they will have to understand the difference between enemies and noncombatants.  Dr. Lin describes this challenge stating, "We are going to need a code.  These things are military, and they can’t be pacifists, so we have to think in terms of battlefield ethics. We are going to need a warrior code."
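The gap between Asimov's first law and the "warrior code" Dr. Lin describes can be made concrete. The following is an illustrative sketch (not from the report); the threshold and labels are assumptions, and the key property is that uncertainty defaults to holding fire.

```python
# Asimov's first law forbids all harm to humans, so a robot bound by
# it could never engage at all. A battlefield rule must instead
# discriminate between combatants and noncombatants.

def engagement_decision(classification, confidence, min_confidence=0.99):
    """Return 'engage' only for a high-confidence combatant
    identification; every other case, including uncertainty,
    defaults to holding fire."""
    if classification == "combatant" and confidence >= min_confidence:
        return "engage"
    return "hold fire"


print(engagement_decision("combatant", 0.999))     # engage
print(engagement_decision("combatant", 0.80))      # hold fire: too uncertain
print(engagement_decision("noncombatant", 0.999))  # hold fire
```

The hard part, of course, is the classifier behind `confidence` -- which is exactly where the report argues a rush to market becomes dangerous.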

The U.S. Army had a scare earlier this year when a software malfunction caused war robots deployed in the field to aim at friendly targets.  While the humans still had control of the trigger, the incident highlighted the challenges a fully autonomous system would face.  The offending robots were serviced and are still deployed in Iraq.



Bug or feature?
By poundsmack on 2/17/2009 2:20:20 PM , Rating: 2
This is actually an interesting discussion. AI is a lot farther along than most people think. Just because technologies like quantum computing, laser hard drives, nanotechnology, etc. are not commonplace in the public sector doesn't mean they aren't out there. Stuff like that is out there and most people don't know it (example of tech I bet no one else knew existed till I posted this: )

I personally don't want robots doing the fighting for us: if it's worth fighting for (diplomacy failed), then it's worth dying for. Now, I also don't think that many things are worth dying for, and I would rather people get to a place where they learn an "ethical code of war" than teach one to a robot. The real task at hand should be teaching better people, not building "smarter" robots. Reason and logic should be improved and encouraged in people, not just in robots.

Either way, in the end it all comes down to EMP. If we don't need to communicate with the robots or give orders, and they are completely self-contained, then it isn't an issue (an unlikely scenario -- when was the last time in war that "everything went smoothly" and there was no need to change tactics or come up with a different plan quickly?). BUT, since we will have to send signals and transmissions (Patch Tuesday, anyone?) to the robots, that leaves them susceptible to electromagnetic pulse and/or communication jamming. Anything that isn't fully, and I do mean FULLY, shielded (outer shell and inner wiring/circuitry) would be susceptible.

So in the end, unless we can make robots sooooooo smart that we don't EVER need to give them orders, none of this matters anyway. Though the good news is, like NASA, these research projects from the government give us a lot of great stuff that trickles down to consumers. And this is by no means a complete list; there are thousands!

So continue your research and theories, but I hope to never see the kind of AI they want implemented in my life time.

