



A new Navy-funded report warns against a hasty deployment of war robots, and urges programmers to include ethics subroutines -- a warrior code of sorts. The alternative, they say, is the possibility of a robotic atrocity akin to those of the Terminator and other sci-fi movies.  (Source: Warner Brothers)
Robots must learn to obey a warrior code, but increasing intelligence may make it increasingly difficult to keep the robots from turning on their masters

Robots gone rogue killing their human masters is rich science fiction fodder, but could it become reality?  Some researchers are beginning to ask that question as artificial intelligence advances continue, and the world's high-tech nations begin to deploy war-robots to the battlefront.  Currently, the U.S. armed forces use many robots, but they all ultimately have a human behind the trigger.  However, there are many plans to develop and deploy fully independent solutions as the technology improves.

Some mistakenly believe that such robots would only be able to operate within a defined set of behaviors.  Patrick Lin, the chief compiler of a new U.S. Navy-funded report, explains, "There is a common misconception that robots will do only what we have programmed them to do.  Unfortunately, such a belief is sorely outdated, harking back to a time when . . . programs could be written and understood by a single person."

The new report points out that the size of artificial intelligence projects will likely make their code impossible to fully analyze and dissect for possible dangers.  With hundreds of programmers working on millions of lines of code for a single war robot, says Dr. Lin, no one has a clear understanding of what is going on, at a small scale, across the entire code base.

He says the key to avoiding robotic rebellion is to include "learning" logic which teaches the robot the rights and wrongs of ethical warfare.  This logic would be mixed with traditional rules-based programming.
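The report itself does not include code, but the hybrid design Dr. Lin describes can be sketched in rough terms: a learned component scores how acceptable a proposed action is, while hard-coded rules of engagement retain an absolute veto. The sketch below is a minimal, hypothetical illustration; the class and function names (LearnedEthicsModel, authorize, and so on) are assumptions for this example, not anything taken from the Navy report.

# Hypothetical sketch of the hybrid approach: a learned ethics model advises,
# but hard-coded rules of engagement can always veto an action.
from dataclasses import dataclass

@dataclass
class Target:
    is_combatant: bool
    near_protected_site: bool   # e.g. a hospital or school

@dataclass
class Action:
    kind: str                   # "engage" or "hold_fire"
    target: Target

class LearnedEthicsModel:
    """Stand-in for the 'learning' logic; a real system would be trained on vetted scenarios."""
    def score(self, action: Action) -> float:
        return 0.9 if action.target.is_combatant else 0.05

# Hard constraints that no learned score can override.
HARD_RULES = [
    lambda a: not (a.kind == "engage" and not a.target.is_combatant),
    lambda a: not (a.kind == "engage" and a.target.near_protected_site),
]

def authorize(action: Action, model: LearnedEthicsModel, threshold: float = 0.8) -> bool:
    """Check the rules-based constraints first, then consult the learned model."""
    if not all(rule(action) for rule in HARD_RULES):
        return False
    return model.score(action) >= threshold

The appeal of such a split, under these assumptions, is that the learned part can generalize to novel situations while the rules-based part provides an auditable floor that no amount of "learning" can erode.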

The new report looks at many issues surrounding the field of killer robots.  In addition to code malfunction, another potential threat would be a terrorist attack which reprogrammed the robots, turning them on their owners.  And one tricky issue discussed is the question of who would take the blame for a robotic atrocity -- the robot, the programmers, the military, or the U.S. President.

The Ethics and Emerging Technology department of California State Polytechnic University prepared the report for the U.S. Navy's Office of Naval Research.  It warns the Navy about the dangers of premature deployment and of complacency on potential issues.  Congress has mandated that a "deep strike" unmanned aircraft be operational by 2010, and that one third of ground combat vehicles be unmanned by 2015.

The report warns, "A rush to market increases the risk for inadequate design or programming. Worse, without a sustained and significant effort to build in ethical controls in autonomous systems . . . there is little hope that the early generations of such systems and robots will be adequate, making mistakes that may cost human lives."

Simple laws of ethics, such as Isaac Asimov's three laws of robotics, the first of which forbids robots from harming humans, will not be sufficient, say the report's authors.  War robots will have to kill, but they will have to understand the difference between enemies and noncombatants.  Dr. Lin describes this challenge, stating, "We are going to need a code.  These things are military, and they can’t be pacifists, so we have to think in terms of battlefield ethics. We are going to need a warrior code."
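As a rough illustration of why a blanket Asimov-style rule falls short, compare a rule that forbids harming any human with one conditioned on combatant status and positive identification. The functions and inputs below are invented for this example and are not drawn from the report.

# Illustrative only: Asimov's first law vs. a conditional "warrior code" rule.
def asimov_first_law(target_is_human: bool) -> bool:
    """May the robot engage? Under Asimov's first law, never if the target is human."""
    return not target_is_human

def warrior_code(target_is_human: bool, is_combatant: bool,
                 positively_identified: bool, hostile_act: bool) -> bool:
    """Engagement is allowed only against identified combatants committing a hostile act."""
    if not target_is_human:
        return True
    return is_combatant and positively_identified and hostile_act

# The Asimov robot is a pacifist; the warrior-code robot can fight,
# but it defaults to holding fire whenever identification is uncertain.
print(asimov_first_law(target_is_human=True))                        # False
print(warrior_code(True, is_combatant=True,
                   positively_identified=False, hostile_act=True))   # False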

The U.S. Army had a scare earlier this year when a software malfunction caused war robots deployed in the field to aim at friendly targets.  While the humans still had control of the trigger, the incident highlighted the challenges a fully autonomous system would face.  The offending robots were serviced and are still deployed in Iraq.



Comments



RE: What a load of crap
By TSS on 2/17/2009 1:37:30 PM , Rating: 3
the problem isn't in telling the bots what to do. it's telling them what to do, having a malfunction, and them not stopping.

we're already telling them to kill other humans, although to us those humans are "the enemy".

this discussion is about giving robots ethics, in other words, allowing them to separate friend from foe themselves.

which, in my opinion, is the first step to the terminator universe. hell, we try to teach a robot ethics, then a human orders it to go kill another human, but not *those* humans.

where in the process does the bot learn right from wrong? and what would the bot perceive as wrong? would a bot operating for nazi germany perceive it as ethically wrong to kill jews and rise up against its masters (which would be a terminator situation)?

this debate is far more philosophical than just debating whether the code is *capable* of doing it.

any player of world of warcraft will tell you that eventually, code will start to get a mind of its own. i swear that game just wants me dead at times (out of the blue, for no reason, pulling an entire FIELD of npcs).


RE: What a load of crap
By rudolphna on 2/17/2009 2:12:45 PM , Rating: 1
Great post, exactly my thoughts. Oh, and on the WoW thing I know exactly what you speak of. Sometimes I think there is a blizz employee sitting at a screen screwing around and doing it on purpose, lol.


RE: What a load of crap
By GaryJohnson on 2/17/2009 2:15:24 PM , Rating: 2
You had me up until "world of warcraft".


RE: What a load of crap
By TSS on 2/17/2009 6:21:10 PM , Rating: 2
it's good to know my ideas and observations can be nullified because i happen to enjoy a particular silly game (which i quit 2 weeks back, mind you).

here's why the comparison is valid: the NPCs, or Non-Playable Characters, are completely AI driven. a human told them to patrol that area and attack anybody within 10 yards of range. at least at a certain level, but that's no different than a certain threat level a bot might experience in real life. otherwise, they are completely devoid of any human interaction.

there's a field of them, each watching their own 10-yard space. so imagine my surprise when AI up to 200 yards away starts charging at me out of the blue to kill me.

suppose you have several AI bots patrolling your base (in real life). out of the blue they all attack everybody within sight, which they aren't supposed to.

this is the greatest fear of armed bots, and in WoW i've already seen it happen. the game consists of millions of lines of code, like real-life bots; the NPCs have no human controlling them, like real-life bots (the ones we're discussing here at least); they have a built-in response to threats, like real-life bots; and they will engage if i pose a threat to them, like real-life bots.
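to make the analogy concrete, here's a toy sketch (hypothetical code i made up, not WoW's and not any real robot's): each guard is supposed to engage only targets inside its own 10-yard radius, but a "shared alert" shortcut drops that per-guard check and pulls the whole field at once.

# toy example: per-guard range check vs. a buggy shared alert
import math

ENGAGE_RADIUS = 10.0  # yards

def in_range(guard_pos, target_pos, radius=ENGAGE_RADIUS):
    return math.dist(guard_pos, target_pos) <= radius

def guards_that_engage(guards, target_pos):
    """intended behaviour: only guards whose own radius is violated respond."""
    return [g for g in guards if in_range(g, target_pos)]

def guards_that_engage_buggy(guards, target_pos):
    """buggy behaviour: if any guard is in range, every guard responds."""
    if any(in_range(g, target_pos) for g in guards):
        return list(guards)   # the entire field charges
    return []

guards = [(x, 0.0) for x in range(0, 200, 20)]            # guards spaced 20 yards apart
print(len(guards_that_engage(guards, (0.0, 5.0))))        # 1
print(len(guards_that_engage_buggy(guards, (0.0, 5.0))))  # 10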

you might laugh now, because i mention world of warcraft. if this situation actually happens in 10-20 years, i'll laugh my ass off. people might hate me for it, but i'll laugh even harder at it. poetic justice i suppose.

and get my lvl 80 mage to kill the rogue bots, but that's a different discussion.


RE: What a load of crap
By jconan on 2/24/2009 12:30:09 AM , Rating: 2
yeah, there'll be a lot of friendly fire among allies when autonomous droids are deployed. wonder who'll be in the hot seat when this comes out?


"I'm an Internet expert too. It's all right to wire the industrial zone only, but there are many problems if other regions of the North are wired." -- North Korean Supreme Commander Kim Jong-il













