

A new Navy-funded report warns against hasty deployment of war robots and urges programmers to include ethics subroutines, a warrior code of sorts. The alternative, they say, is the possibility of a robotic atrocity akin to those of the Terminator or other sci-fi movies.  (Source: Warner Brothers)
Robots must learn to obey a warrior code, but as their intelligence increases, keeping them from turning on their masters may become increasingly difficult

Robots gone rogue, killing their human masters, is rich science fiction fodder, but could it become reality?  Some researchers are beginning to ask that question as artificial intelligence advances continue and the world's high-tech nations begin to deploy war robots to the battlefront.  Currently, the U.S. armed forces use many robots, but they all ultimately have a human behind the trigger.  However, there are many plans to develop and deploy fully autonomous systems as the technology improves.

Some mistakenly believe that such robots would only be able to operate within a defined set of behaviors.  Patrick Lin, the chief compiler of a new U.S. Navy-funded report, explains, "There is a common misconception that robots will do only what we have programmed them to do.  Unfortunately, such a belief is sorely outdated, harking back to a time when . . . programs could be written and understood by a single person."

The new report points out that the size of artificial intelligence projects will likely make their code impossible to fully analyze and dissect for possible dangers.  With hundreds of programmers working on millions of lines of code for a single war robot, says Dr. Lin, no one has a clear understanding of what is going on, at a small scale, across the entire code base.

He says the key to avoiding robotic rebellion is to include "learning" logic that teaches the robot the rights and wrongs of ethical warfare.  This logic would be mixed with traditional rule-based programming.
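One hypothetical way such a mix could be wired together, offered here purely as an illustration and not taken from the report (the data fields, classifier score, and threshold below are all invented), is a gate in which the rule layer and the learned layer must both agree before an engagement is authorized:

```python
# Hypothetical sketch (not from the report) of combining rule-based checks with a
# learned component: both layers must agree before an engagement is authorized.
# The Target fields, the classifier score, and the threshold are assumptions.

from dataclasses import dataclass

@dataclass
class Target:
    is_armed: bool
    in_engagement_zone: bool
    combatant_confidence: float  # 0.0 to 1.0, imagined output of a trained classifier

def hard_rules_allow(target: Target) -> bool:
    """Traditional rule-based layer: explicit, auditable conditions."""
    return target.is_armed and target.in_engagement_zone

def learned_layer_allows(target: Target, threshold: float = 0.95) -> bool:
    """Learned layer: defer to a combatant/noncombatant classifier's confidence."""
    return target.combatant_confidence >= threshold

def authorize_engagement(target: Target) -> bool:
    """Engage only when both layers agree; otherwise escalate to a human operator."""
    return hard_rules_allow(target) and learned_layer_allows(target)

if __name__ == "__main__":
    suspect = Target(is_armed=True, in_engagement_zone=True, combatant_confidence=0.70)
    print(authorize_engagement(suspect))  # False: low confidence, defer to a human
```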

The new report looks at many issues surrounding the field of killer robots.  In addition to code malfunction, another potential threat would be a terrorist attack that reprograms the robots, turning them on their owners.  One tricky issue discussed is the question of who would take the blame for a robotic atrocity -- the robot, the programmers, the military, or the U.S. President.

The Ethics and Emerging Technology department of California State Polytechnic University prepared the report for the U.S. Navy's Office of Naval Research.  It warns the Navy about the dangers of premature deployment or complacency on potential issues.  The U.S. Congress has mandated that a "deep strike" unmanned aircraft be operational by 2010, and that one third of ground combat vehicles be unmanned by 2015.

The report warns, "A rush to market increases the risk for inadequate design or programming. Worse, without a sustained and significant effort to build in ethical controls in autonomous systems . . . there is little hope that the early generations of such systems and robots will be adequate, making mistakes that may cost human lives."

Simple laws of ethics, such as Isaac Asimov's three laws of robotics, the first of which forbids robots from harming humans, will not be sufficient, say the report's authors.  War robots will have to kill, but they will have to understand the difference between enemies and noncombatants.  Dr. Lin describes this challenge stating, "We are going to need a code.  These things are military, and they can’t be pacifists, so we have to think in terms of battlefield ethics. We are going to need a warrior code."
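As a toy illustration of the gap the authors describe (this comparison is not from the report, and both functions are invented), an Asimov-style blanket rule can be contrasted with a discrimination-based rule; only the latter can express a "warrior code":

```python
# Purely illustrative: an Asimov-style blanket rule versus a discrimination-based
# rule. The first forbids every engagement involving a human; the second encodes
# the combatant/noncombatant distinction that a warrior code would need.

def asimov_first_law_allows(target_is_human: bool) -> bool:
    # "A robot may not injure a human being": blocks all engagements outright.
    return not target_is_human

def warrior_code_allows(target_is_human: bool, is_combatant: bool) -> bool:
    # Battlefield ethics: humans may only be engaged if they are combatants.
    return (not target_is_human) or is_combatant

print(asimov_first_law_allows(True))     # False: never engage a human
print(warrior_code_allows(True, True))   # True: a lawful combatant may be engaged
print(warrior_code_allows(True, False))  # False: a noncombatant remains protected
```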

The U.S. Army had a scare earlier this year when a software malfunction caused war robots deployed in the field to aim at friendly targets.  While the humans still had control of the trigger, the incident highlighted the challenges a fully autonomous system would face.  The offending robots were serviced and are still deployed in Iraq.



Comments



Stoopid
By lebe0024 on 2/17/2009 10:25:31 AM , Rating: 2
FTFA: "There is a common misconception that robots will do only what we have programmed them to do."

Show me any program that doesn't do what it was programmed to do.




RE: Stoopid
By Moohbear on 2/17/2009 11:10:51 AM , Rating: 4
Well, there's what the program is intended to do and there's what it actually does. And then there are just plain old bugs... That leaves a lot of wiggle room for unexpected behavior, which is a little disturbing when said computer is wielding a gun.


RE: Stoopid
By callmeroy on 2/17/2009 11:26:33 AM , Rating: 3
Are you serious dude?

There are TONS of programs that do something OTHER than what the programmer designed, wrote, or intended the program to do. It's called having bugs in the code.

My counter question is: name me ONE piece of software (reasonably speaking here... don't tell me your 9th grade VB code for "Hello World" is bug free) of significant popularity in the workplace (government or private, take your pick) that doesn't have at least one bug that was unexpected.


RE: Stoopid
By Divide Overflow on 2/17/2009 1:34:46 PM , Rating: 2
Intentions aren't worth spit. If there's a bug in the code, the machine is still executing its programming perfectly.


RE: Stoopid
By mindless1 on 2/17/2009 9:54:06 PM , Rating: 2
If the hardware is also perfect and in perfect working condition, that is. In combat, its state of damage is unknown.


RE: Stoopid
By Fritzr on 2/18/2009 12:44:10 AM , Rating: 2
Let's put the bug in the target selection code, place the fully armed bot in downtown Wash. DC and set it loose.

A bug can sometimes be an unanticipated feature :P

Remember the point of the article is that when an autonomous weapons system is placed in the field there is a danger of the weapon system engaging targets that it is supposed to protect. All it takes is one or two mistyped characters in the code. You can add safeguards by requiring multiple sources of target verification before allowing damage to the target.
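A minimal sketch, purely illustrative, of what such a multi-source verification gate might look like (the function name and the confirmation threshold are assumptions):

```python
# Purely illustrative sketch of the multi-source safeguard described above: the
# weapon stays safe unless several independent sources confirm the target is hostile.

def engagement_allowed(confirmations: list, required: int = 2) -> bool:
    """Allow engagement only when at least `required` independent sources
    (e.g. separate sensors or observers) each flag the target as hostile."""
    return sum(bool(c) for c in confirmations) >= required

print(engagement_allowed([True, False, False]))  # False: only one source agrees
print(engagement_allowed([True, True, False]))   # True: two independent confirmations
```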

Regardless of safeguards and cutouts, the system will be REQUIRED to damage targets designated unfriendly. What makes a target unfriendly? Weapon pointed at robot? Supporting troops really need to watch the way they hold their rifles. Wrong uniform? So the enemy changes into civvies. Carrying a weapon & not friendly uniform? Put on an outfit that looks friendly, walk up to robot and apply shaped charge with delay fuse and walk away.

These and many other real life situations will be faced by a real combat robot. Now will you guarantee all the code involved in decision making to be error free? Look at medical equipment today. Due to the danger to patients, the code in medical devices is heavily bug-checked and tested. In spite of this, there have been devices, including a robotic irradiation machine, deployed with deadly bugs.

Yes, it is science fiction. NOT science fantasy. Real hard science fiction builds a story around reasonable extrapolations of what can be done if selected advances and/or discoveries become reality. Geo-sync satellite? Arthur C. Clarke. Water bed? Robert A. Heinlein. Many devices in use today, including cell phones, personal computers, pocket music players, the internet, video phones, etc., were "invented" by science fiction authors who then wrote a story that treated the fanciful device as an everyday item.


RE: Stoopid
By Moishe on 2/17/2009 4:05:48 PM , Rating: 2
A bug is unintended, but it is still the result of specific programming. No "bug" is going to make the difference between an insane, intelligent killer robot and Fluffy the clownbot on a leash.

Bugs can be weeded out and fixed before production if given enough time and money. A self-aware robot is not the result of a bug.


RE: Stoopid
By mindless1 on 2/17/2009 9:57:06 PM , Rating: 2
When has anything been given enough time and money to reach this nirvana?

A "bug" can mean many things, like a security flaw the enemy found that we didn't, and you can bet they'll be looking for some.


RE: Stoopid
By MozeeToby on 2/17/2009 2:16:26 PM , Rating: 2
Chess AI

Granted, you could argue that the program was written to play chess but I would argue that the program plays chess an order of magnitude better than the programmer.

The programmer didn't program every specific situation into the program nor program every specific strategy. There are loads of programs with emergent behavior, behavior that wasn't coded for but is an unexpected result of the code. The situation is very common with learning algorithms and can often produce very unusual, unexpected behaviors.
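Emergent behavior doesn't even require machine learning. The sketch below (illustrative only) runs Conway's Game of Life, whose rules only describe how a cell reacts to its neighbors, yet a "glider" pattern travels across the grid even though nothing in the code mentions movement:

```python
# Conway's Game of Life: the code only states how each cell reacts to its
# neighbors, yet a "glider" emerges that travels diagonally across the grid,
# behavior that was never written down explicitly.

from collections import Counter

def step(live: set) -> set:
    """Apply the standard Life rules to the set of live (x, y) cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # the classic glider
for _ in range(4):                                # after 4 steps it has moved by (1, 1)
    cells = step(cells)
print(sorted(cells))  # same shape, shifted diagonally; "movement" was never coded
```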


RE: Stoopid
By Yames on 2/17/2009 6:04:12 PM , Rating: 2
Chess AI will "play" better than its programmer, but it "thinks" differently. The basics of the algorithm are not that complicated. The AI searches ahead as many moves as it can. Weights are assigned to the outcomes (the heart of the algorithm), and the outcome with the most weight is chosen. Of course, weaknesses are built in so we are not obliterated by good algorithms. Only Grand Masters stand a chance against these when they are not restricted. Perhaps the unexpected behavior you are thinking of is actually expected, a function of those built-in restrictions.
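For what it's worth, that search-ahead-and-score idea is essentially minimax with an evaluation function. A toy sketch (not any real engine's code) on a trivial take-away game shows the skeleton:

```python
# Toy illustration of the "search ahead, score the outcomes, pick the best" idea
# behind chess engines, applied to a trivial game: players alternately take 1 or
# 2 sticks from a pile, and whoever takes the last stick wins. Real engines add
# a depth limit and a heuristic evaluation, but the search skeleton is the same.

def minimax(sticks: int, my_turn: bool) -> int:
    """Return +1 if we can force a win from this position, -1 otherwise."""
    if sticks == 0:
        # The previous player took the last stick and won the game.
        return -1 if my_turn else +1
    scores = [minimax(sticks - take, not my_turn) for take in (1, 2) if take <= sticks]
    return max(scores) if my_turn else min(scores)

def best_move(sticks: int) -> int:
    """Pick the move whose looked-ahead outcome scores best for us."""
    return max((take for take in (1, 2) if take <= sticks),
               key=lambda take: minimax(sticks - take, my_turn=False))

print(best_move(7))  # 1: leaves 6 sticks, a losing position for the opponent
```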

Regardless, the Chess AI has a very limited scope and a single programmer can/should understand it in its entirety. AI for Warbots is another story.


RE: Stoopid
By xRyanCat on 2/17/2009 7:26:25 PM , Rating: 2
The Chess AI is only better because it's mathematically faster... Just as a computer would be better at multiplying 125 * 438. The programmer could do it, it would just take longer.

And unless programmed to do otherwise the AI will always output the same moves based on the situations it encounters.

Of course computers can do many things that humans can't, but they can't do anything that we can't envision or that we haven't programmed them to do. The Chess "AI" is more Superficial than Artificial. It doesn't "Think" and make choices that deviate from its programming path.


RE: Stoopid
By Larrymon2000 on 2/23/2009 1:27:46 AM , Rating: 2
You're kidding, right? You know race conditions exist, right? You know that in highly multi-threaded applications, the outcome is almost impossible to determine if there are enough concurrent threads running asynchronously on the same shared pool of data. Of course, programmers ALWAYS work to avoid this type of thing, so it's uncommon. But race conditions could turn up in AI, for instance. Think of a circuit: under different circumstances, different paths drive the signal at different times.
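A minimal Python sketch of that kind of nondeterminism: two threads update a shared counter without a lock, and the interleaving decides the result (the sleep call is only there to make the interleaving easy to reproduce).

```python
# Race condition in miniature: two threads perform read-modify-write on a shared
# counter without a lock, so one thread can overwrite the other's update.
# time.sleep(0) simply yields to the other thread to expose the interleaving.

import threading
import time

counter = 0

def work(iterations: int = 100) -> None:
    global counter
    for _ in range(iterations):
        current = counter        # read the shared value
        time.sleep(0)            # yield, so the other thread can run in between
        counter = current + 1    # write back, possibly clobbering its update

threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # typically far less than 200; guarding the update with a threading.Lock fixes it
```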


RE: Stoopid
By Larrymon2000 on 2/23/2009 1:31:13 AM , Rating: 2
But about the whole chess thing, that's just silly. The programmer who developed the algorithm for the chess game inherently understands it well enough to know what moves to make so that the computer CAN'T produce the right decision against them. It's just a set of analyses based on the situation, simulating the outcomes and finding the optimal ones. Just an implementation of combinatorics and optimization. Why do you think Deep Blue was good at it? Because it could go through an enormous number of cases and deduce the best outcome. But it's not learning. It's not developing new strategies and fundamentally changing its algorithms using run-time reflection.


RE: Stoopid
By SiliconAddict on 2/20/2009 12:30:36 AM , Rating: 2
Windows ME.


"I mean, if you wanna break down someone's door, why don't you start with AT&T, for God sakes? They make your amazing phone unusable as a phone!" -- Jon Stewart on Apple and the iPhone













