



A new Navy-funded report warns against hasty deployment of war robots and urges programmers to include ethics subroutines -- a warrior code of sorts. The alternative, the authors say, is the possibility of a robotic atrocity akin to the Terminator or other sci-fi films.  (Source: Warner Brothers)
Robots must learn to obey a warrior code, but as their intelligence grows, keeping them from turning on their masters may become increasingly difficult

Robots gone rogue killing their human masters is rich science fiction fodder, but could it become reality?  Some researchers are beginning to ask that question as artificial intelligence continues to advance and the world's high-tech nations begin to deploy war robots to the battlefront.  Currently, the U.S. armed forces use many robots, but they all ultimately have a human behind the trigger.  However, there are plans to develop and deploy fully autonomous systems as the technology improves.

Some mistakenly believe that such robots would only be able to operate within a defined set of behaviors.  Patrick Lin, the chief compiler of a new U.S. Navy-funded report, explains, "There is a common misconception that robots will do only what we have programmed them to do.  Unfortunately, such a belief is sorely outdated, harking back to a time when . . . programs could be written and understood by a single person."

The new report points out that the size of artificial intelligence projects will likely make their code impossible to fully analyze and dissect for possible dangers.  With hundreds of programmers working on millions of lines of code for a single war robot, says Dr. Lin, no one has a clear understanding of what is going on, at a small scale, across the entire code base.

He says the key to avoiding robotic rebellion is to include "learning" logic that teaches the robot the rights and wrongs of ethical warfare.  This logic would be mixed with traditional rules-based programming.
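To make the idea concrete, here is a minimal, purely illustrative sketch (not taken from the report) of how a fixed, rules-based "warrior code" layer might veto a learned, and therefore fallible, targeting component. Every name, rule, and threshold below is hypothetical.

from dataclasses import dataclass

@dataclass
class Contact:
    is_armed: bool
    combatant_confidence: float  # hypothetical score from a learned classifier, 0.0-1.0
    near_protected_site: bool    # e.g., a hospital or school

def learned_threat_score(contact: Contact) -> float:
    # Stand-in for a trained model's output; a real system would be far more complex.
    return contact.combatant_confidence if contact.is_armed else 0.0

def engagement_permitted(contact: Contact, human_authorized: bool) -> bool:
    # Hard-coded rules take precedence over anything the learned component reports.
    if contact.near_protected_site:
        return False              # never engage near a protected site
    if not human_authorized:
        return False              # keep a human behind the trigger
    # Only then consult the learned estimate, with a deliberately conservative threshold.
    return learned_threat_score(contact) > 0.95

# An armed contact near a hospital is refused even at high confidence ...
print(engagement_permitted(Contact(True, 0.99, True), human_authorized=True))   # False
# ... while the same contact in the open, with human authorization, is permitted.
print(engagement_permitted(Contact(True, 0.99, False), human_authorized=True))  # True

The ordering is the point: the fixed ethical rules act as a veto over the learned component, which is one plausible way to combine the "learning" logic and rules-based programming the report describes.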

The new report looks at many issues surrounding the field of killer robots.  In addition to code malfunction, another potential threat would be a terrorist attack which reprogrammed the robots, turning them on their owners.  And one tricky issue discussed is the question of who would take the blame for a robotic atrocity -- the robot, the programmers, the military, or the U.S. President.

The Ethics and Emerging Technology department of California State Polytechnic University prepared the report for the U.S. Navy's Office of Naval Research.  It warns the Navy about the dangers of premature deployment and of complacency about potential issues.  Congress has mandated that a "deep strike" unmanned aircraft be operational by 2010 and that one third of ground combat vehicles be unmanned by 2015.

The report warns, "A rush to market increases the risk for inadequate design or programming. Worse, without a sustained and significant effort to build in ethical controls in autonomous systems . . . there is little hope that the early generations of such systems and robots will be adequate, making mistakes that may cost human lives."

Simple laws of ethics, such as Isaac Asimov's three laws of robotics, the first of which forbids robots from harming humans, will not be sufficient, say the report's authors.  War robots will have to kill, but they will also have to understand the difference between enemies and noncombatants.  Dr. Lin describes this challenge, stating, "We are going to need a code.  These things are military, and they can't be pacifists, so we have to think in terms of battlefield ethics. We are going to need a warrior code."

The U.S. Army had a scare earlier this year when a software malfunction caused war robots deployed in the field to aim at friendly targets.  While the humans still had control of the trigger, the incident highlighted the challenges a fully autonomous system would face.  The offending robots were serviced and are still deployed in Iraq.



Comments

RE: Good Freakin Luck
By Clienthes on 2/18/2009 5:27:35 AM , Rating: 2
quote:
Right. "Ethical Warfare"? That's the best oxymoron I've ever heard.


War is not always wrong or unethical. It's just always regrettable.

To answer your questions:
1. Luckily, not you.

2. Not what the article is concerned with, but still...Hopefully the designers would have the sense to teach it only sensible, practical ethics. Robots that can fantasize about utopia would be useless.

3. Really? People still bring this kind of thing up? What world do you live in? As long as bad people do bad things and we don't all get along, someone is going to have to make difficult decisions about who lives, who dies, and what is an acceptable loss. The kind of BS you posted really minimizes the trauma that THOSE people go through to keep the impact of military operations to a minimum. Do you want to make those decisions? No? Don't have the stomach for it? Put yourself in their shoes before you get self-righteous. Because someone HAS to do it. Imagine the world if they didn't.

If you aren't willing to kill, you'd better be willing to see everything you care about die.


RE: Good Freakin Luck
By Rugar on 2/18/2009 10:28:05 AM , Rating: 2
Hmmm... To respond to your answers:

1. You have no idea how I would define ethics for an autonomous robot capable of lethal force, so your answer demonstrates my point. You assume that you and I would disagree on ethics because, more often than not, you would be right. The beauty of having humans at the controls is that every single one is an individual with their own background to help them determine right from wrong. While you may have occasional atavistic individuals making socially unacceptable choices, they are just individuals. Fielding a force of robots all programmed by one person or group of persons changes the "balance of power" in ethics by slanting decisions one way.

2. Practical ethics? Interesting. I'm fairly sure that ethics are based on absolutes. What matters is the human decision about when we must act for the "greater good" even though doing so contravenes our ethical training. In short, humans have the capacity to decide when they must do things they know are "wrong" in order to accomplish a greater good. I'm assuming by your reactions that you are in some way connected to the military. If so, how often have you sat through training sessions where you discuss the difficult decision of when to disobey orders? I've had to sit through QUITE a few of those. We do it because it is important that troops understand that, at some point, the cost in broken ethics is worse than the damage that may be caused by refusing to obey orders.

3. People bring it up quite often. How many times did you hear it during the course of the elections? I heard some version of this A LOT. Your tirade is interesting, mostly because while you seem to think we disagree, I think we mostly agree. PEOPLE have to make the decisions of when to contravene their ethics. People who are capable of understanding that their choices have consequences, and who have the moral strength to make those decisions knowing that they will suffer from the guilt for years afterward even though they know they did the right thing.

Warfare is unethical in most of the major societal groups around the world. And yet, we recognize that at times it is necessary to go to war in order to defend some greater ideal. That doesn't change the fact that war is an evil which should be avoided at all costs.

I know, I know. Philosophy on a tech site. And long-winded philosophy at that. Sorry about that...


RE: Good Freakin Luck
By totenkopf on 2/18/2009 9:04:25 PM , Rating: 2
Good luck figuring out the ethics thing. Business ethics aside (which may be a better model for warfare ethics), philosophical ethics are definitely not absolute; though I would love to see the Categorical Imperative in C++. Humans have been trying to wrap their minds around ethics and morality since the beginning of time; there is no single, simple answer, and there is always an exception to every rule (or is there?!).

Frankly, if war is sadistic and horrible now, imagine if we take the last element of humanity out? It might be better in some cases, but in most cases I think it would get ugly.


"Let's face it, we're not changing the world. We're building a product that helps people buy more crap - and watch porn." -- Seagate CEO Bill Watkins













