



A new Navy-funded report warns against a hasty deployment of war robots and urges programmers to include ethics subroutines -- a warrior code of sorts. The alternative, they say, is the possibility of a robotic atrocity akin to those depicted in the Terminator films and other sci-fi movies.  (Source: Warner Brothers)
Robots must learn to obey a warrior code, but increasing intelligence may make keeping the robots from turning on their masters increasingly difficult

Robots gone rogue, killing their human masters, is rich science fiction fodder, but could it become reality?  Some researchers are beginning to ask that question as advances in artificial intelligence continue and the world's high-tech nations begin to deploy war robots to the battlefront.  Currently, the U.S. armed forces use many robots, but they all ultimately have a human behind the trigger.  However, there are many plans to develop and deploy fully autonomous systems as the technology improves.

Some mistakenly believe that such robots would only be able to operate within a defined set of behaviors.  Patrick Lin, the chief compiler of a new U.S. Navy-funded report, explains: "There is a common misconception that robots will do only what we have programmed them to do.  Unfortunately, such a belief is sorely outdated, harking back to a time when . . . programs could be written and understood by a single person."

The new report points out that the size of artificial intelligence projects will likely make their code impossible to fully analyze and dissect for possible dangers.  With hundreds of programmers working on millions of lines of code for a single war robot, says Dr. Lin, no one has a clear understanding of what is going on, at a small scale, across the entire code base.

He says the key to avoiding robotic rebellion is to include "learning" logic which teaches the robot the rights and wrongs of ethical warfare.  This logic would be mixed with traditional rules-based programming.
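
The report doesn't prescribe an implementation, but the hybrid Lin describes can be pictured as a learned threat assessment gated by fixed rules of engagement that act as a veto. The Python sketch below is purely illustrative; every name and threshold in it is invented here, not drawn from the report.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A detected entity, as reported by the sensor suite (hypothetical)."""
    is_armed: bool
    is_surrendering: bool
    threat_score: float  # output of the learned layer, 0.0-1.0

def learned_policy(track: Track) -> bool:
    """'Learning' layer: engage only if the trained model rates the track a threat."""
    return track.threat_score > 0.85

def rules_of_engagement(track: Track) -> bool:
    """Fixed rules layer: hard constraints the learned layer can never override."""
    if track.is_surrendering:
        return False  # hors de combat: protected regardless of threat score
    if not track.is_armed:
        return False  # treat as a noncombatant
    return True

def may_engage(track: Track) -> bool:
    """Engage only when BOTH layers agree; the rules act as a veto."""
    return rules_of_engagement(track) and learned_policy(track)

# Example: a high threat score alone is not enough.
print(may_engage(Track(is_armed=True, is_surrendering=True, threat_score=0.99)))  # False
```

The point of such a split is that the hard-coded layer stays small and auditable even when the learned layer is not.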

The new report looks at many issues surrounding the field of killer robots.  In addition to code malfunctions, another potential threat would be a terrorist attack that reprograms the robots, turning them on their owners.  And one tricky issue discussed is the question of who would take the blame for a robotic atrocity -- the robot, the programmers, the military, or the U.S. President.

The Ethics and Emerging Technology department of California State Polytechnic University prepared the report for the U.S. Navy's Office of Naval Research.  It warns the Navy about the dangers of premature deployment and of complacency about potential issues.  Congress has mandated that a "deep strike" unmanned aircraft be operational by 2010, and that one third of ground combat vehicles be unmanned by 2015.

The report warns, "A rush to market increases the risk for inadequate design or programming. Worse, without a sustained and significant effort to build in ethical controls in autonomous systems . . . there is little hope that the early generations of such systems and robots will be adequate, making mistakes that may cost human lives."

Simple laws of ethics, such as Isaac Asimov's three laws of robotics, the first of which forbids robots from harming humans, will not be sufficient, say the report's authors.  War robots will have to kill, but they will have to understand the difference between enemies and noncombatants.  Dr. Lin describes this challenge stating, "We are going to need a code.  These things are military, and they can’t be pacifists, so we have to think in terms of battlefield ethics. We are going to need a warrior code."

The U.S. Army had a scare earlier this year when a software malfunction caused war robots deployed in the field to aim at friendly targets.  While the humans still had control of the trigger, the incident highlighted the challenges a fully autonomous system would face.  The offending robots were serviced and are still deployed in Iraq.



Comments

What a load of crap
By tim851 on 2/17/2009 9:52:48 AM , Rating: 5
Scientists have been pursuing Strong AI for decades *unsuccessfully*, and now people fear it will happen accidentally because too many people are working on too many lines of code.

Sure, that's why all these Linux distros have gone C3PO on us...




RE: What a load of crap
By Rugar on 2/17/2009 10:05:21 AM , Rating: 2
Heh... I wish I could +1 you for the C3PO. Clearly this is about to happen because Cyberdyne found technology from the future and incorporated it into our new killer robots.


RE: What a load of crap
By quiksilvr on 2/17/2009 10:20:39 AM , Rating: 2
Oh for God's sake. An A.I. CANNOT MAKE ITS OWN ALGORITHMS WITHOUT US EITHER PUTTING THEM THERE OR TELLING IT TO MAKE ONE ITSELF. How can an A.I. that was originally designed to receive orders, shoot, and move magically get insane processing power out of thin air and decide, "I'm gonna doughnut across the desert!" and just randomly write its own line of code saying so? It doesn't make any goddamn sense! If it does end up doing a doughnut across the desert, 99.9999999% of the time it's just buggy and the programmer screwed up somewhere.


RE: What a load of crap
By Rugar on 2/17/2009 10:35:22 AM , Rating: 2
Wow... That so totally had to do with my comment.

And by the way, it's because of the reverse engineered chips!


RE: What a load of crap
By callmeroy on 2/17/2009 11:33:40 AM , Rating: 5
First...enough already with the dramatised replies to such articles.

My hunch, albeit just a hunch, tells me the team researching this and writing this kind of code is a notch above the average run-of-the-mill programmer who just got their degree. Now, I don't know anyone personally on these forums -- so perhaps some of you are akin to a programming God, maybe you have multiple PhDs, perhaps you already have the foundations down for designing a time machine, curing cancer, and solving world hunger... BUT I think you might just be downplaying these folks' skills and knowledge a tad.

I'm sure there's more to it than what we've already discussed here...


RE: What a load of crap
By Rugar on 2/17/09, Rating: -1
RE: What a load of crap
By arazok on 2/17/2009 2:53:01 PM , Rating: 5
quote:
Do you seriously think that I am actually suggesting that there is a company named Cyberdyne which reverse engineered a chip taken from the arm of a destroyed T-800 sent back in time to kill Sarah Connor?


That’s what I thought. You have no idea how relieved I am to know you weren’t serious.


RE: What a load of crap
By Seemonkeyscanfly on 2/17/2009 5:23:03 PM , Rating: 2
quote:
Do you seriously think that I am actually suggesting that there is a company named Cyberdyne which reverse engineered a chip taken from the arm of a destroyed T-800 sent back in time to kill Sarah Connor?

quote:
That’s what I thought. You have no idea how relieved I am to know you weren’t serious.


Yea, I was worried about that too. In real life the company is called Cybertech Autonomics LLC. They had to change the name in the movie; the studio didn't want to pay the royalty rates to use the real company's name. :)


RE: What a load of crap
By bigboxes on 2/17/2009 3:39:25 PM , Rating: 5
Are you serious about being serious about this dude being serious? You can't be serious. Seriously.


RE: What a load of crap
By Seemonkeyscanfly on 2/17/2009 5:27:59 PM , Rating: 5
I think you missed a great dude opportunity...

new quote: "Dude are you a serious dude about this dude being serious? Dude can't be serious, dude. Seriously dude."


RE: What a load of crap
By MrPoletski on 2/18/2009 8:43:16 AM , Rating: 2
dudelerious.


RE: What a load of crap
By JKflipflop98 on 2/21/2009 2:52:45 AM , Rating: 2
Dude. . .


RE: What a load of crap
By bohhad on 2/19/2009 2:10:47 PM , Rating: 2
SRSLY


RE: What a load of crap
By MamiyaOtaru on 2/22/2009 5:56:28 PM , Rating: 2
NO WAI


RE: What a load of crap
By gamerk2 on 2/17/2009 11:53:31 AM , Rating: 5
Actually, some advanced programming languages allow code to be replaced automatically at run time depending on certain variables, so it's quite possible for code to be "written" without any human involvement.
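
A minimal illustration of this point (Python; all names invented for the example): behavior can be swapped at run time by rebinding a function reference, with no human editing the source. True self-modifying code would go further and generate new code, e.g. via exec, which is exactly what makes auditing hard.

```python
def cautious_move(position: int) -> int:
    return position + 1  # advance slowly

def aggressive_move(position: int) -> int:
    return position + 5  # advance quickly

behavior = cautious_move  # the "robot's" current behavior

def update_behavior(threat_level: float) -> None:
    """Swap the active behavior at run time based on an observed variable."""
    global behavior
    behavior = aggressive_move if threat_level > 0.5 else cautious_move

update_behavior(threat_level=0.9)
print(behavior(0))  # prints 5: the code path changed with no human involved
```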


RE: What a load of crap
By quiksilvr on 2/17/09, Rating: -1
RE: What a load of crap
By TSS on 2/17/2009 1:37:30 PM , Rating: 3
The problem isn't in telling the bots what to do. It's telling them what to do, having a malfunction, and them not stopping.

We're already telling them to kill other humans, although to us those humans are "the enemy".

This discussion is about giving robots ethics, in other words, allowing them to separate friend from foe themselves.

Which, in my opinion, is the first step to the Terminator universe. Hell, we try to teach a robot ethics, then a human orders it to go kill another human, but not *those* humans.

Where in the process does the bot learn right from wrong? And what would the bot perceive as wrong? Would a bot operating for Nazi Germany perceive it as ethically wrong to kill Jews and rise up against its masters (which would be a Terminator situation)?

This debate is far more philosophical than just debating whether the code is *capable* of doing it.

Any player of World of Warcraft will tell you that eventually, code will start to get a mind of its own. I swear that game just wants me dead at times (out of the blue, for no reason, pulling an entire FIELD of NPCs).


RE: What a load of crap
By rudolphna on 2/17/2009 2:12:45 PM , Rating: 1
Great post, exactly my thoughts. Oh, and on the WoW thing I know exactly what you speak of. Sometimes I think there is a blizz employee sitting at a screen screwing around and doing it on purpose, lol.


RE: What a load of crap
By rykerabel on 2/17/2009 3:02:11 PM , Rating: 2
RE: What a load of crap
By GaryJohnson on 2/17/2009 2:15:24 PM , Rating: 2
You had me up until "world of warcraft".


RE: What a load of crap
By TSS on 2/17/2009 6:21:10 PM , Rating: 2
It's good to know my ideas and observations can be nullified because I happen to enjoy a particular silly game (which I quit two weeks back, mind you).

Here's why the comparison is valid: the NPCs, or non-player characters, are completely AI-driven. A human told them to patrol that area and attack anybody within 10 yards of range -- at least at a certain level, but that's no different than a certain level of threat a bot might experience in real life. Otherwise, they are completely void of any human interaction.

There's a field of them, each watching their own 10 yards of space. So imagine my surprise when AI up to 200 yards away starts charging for me out of the blue to kill me.

Suppose you have several AI bots patrolling your base (in real life). Out of the blue they all attack everybody within sight, which they aren't supposed to.

This is the greatest fear of armed bots, and in WoW I've already seen it happen. The game consists of millions of lines of code, like real-life bots; the NPCs have no human controlling them, like real-life bots (the ones we're discussing here, at least); they have a built-in response to threats, like real-life bots; and they will engage if I pose a threat to them, like real-life bots.

You might laugh now, because I mention World of Warcraft. If this situation actually happens in 10-20 years, I'll laugh my ass off. People might hate me for it, but I'll laugh even harder at it. Poetic justice, I suppose.

And I'll get my level 80 mage to kill the rogue bots, but that's a different discussion.


RE: What a load of crap
By jconan on 2/24/2009 12:30:09 AM , Rating: 2
Yea, there'll be a lot of friendly fire among allies when autonomous droids are deployed. Wonder who'll be in the hot seat when that happens?


RE: What a load of crap
By SleepyGreg on 2/17/2009 2:56:08 PM , Rating: 2
I think the concern is that the AI is going to have a learning ability (necessary to cope with new experiences out in the field). This ability to learn and write its own code based on experience, coupled with millions of lines of existing human code and their inevitable bugs, gives a potential for unpredictable outcomes. I'd say it's a very real threat. We're not just talking about a hardwired machine here; it's a dynamic, evolving system with millions of permutations. And there will be "surprises".


RE: What a load of crap
By MozeeToby on 2/17/2009 1:51:36 PM , Rating: 2
Very, very true. Our military robots aren't just going to 'wake up' one day and decide to start killing everyone.

I could imagine, though, writing a learning algorithm to help the robot identify threats. If the robots could communicate their improvements to the algorithm (based on threats detected and destroyed, for instance), it would only take one robot to learn the wrong identifiers to bring down the whole network of robots.

Of course, even so, I would think direct commands would still work. As long as there is a low-level command to shut down the system (a command that doesn't go through the threat detection system), there shouldn't be a problem.
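
A hedged sketch of the kind of low-level channel described above, using Python's standard hmac module; the pre-shared key and all names are simplifying assumptions:

```python
import hmac
import hashlib

# Placeholder: a real system would provision this key securely, per robot.
SHARED_KEY = b"replace-with-provisioned-key"

def sign_command(command: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Operator side: tag a command with an HMAC so it can't be forged."""
    return hmac.new(key, command, hashlib.sha256).digest()

def handle_low_level(command: bytes, tag: bytes) -> bool:
    """Robot side: runs below (and independent of) the threat-detection stack."""
    if not hmac.compare_digest(tag, sign_command(command)):
        return False  # forged or corrupted command: ignore it
    if command == b"SHUTDOWN":
        raise SystemExit("operator-ordered shutdown")  # hard stop, no AI in the loop
    return True

# Usage: only a correctly signed SHUTDOWN is honored.
cmd = b"SHUTDOWN"
handle_low_level(cmd, sign_command(cmd))
```

A real design would also need replay protection (a nonce or message counter), which this sketch omits.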


RE: What a load of crap
By cfaalm on 2/17/2009 5:01:05 PM , Rating: 2
Since this is so close to SF: If MS can build a firewall, then the robot could do that too, at some point in time ;-)

If all depends on code, how do you prevent the robot or your enemy from writing/adapting code to prevent a remote shut down. Could your enemy hack into the robot's system? So you want it to be open to you, but sealed tight to your enemies and the robot itself. To an intelligent being, that would feel like a mind prison: "We will tell you what to do, don't make up your own mind."


RE: What a load of crap
By MozeeToby on 2/17/2009 5:19:18 PM , Rating: 2
I didn't mean for my post to come off as sci-fi; let me explain my thoughts more thoroughly. I can imagine writing software that allows a swarm of robots to communicate with each other such that each robot can send information about what was happening around it when it is destroyed.

This information could be used to build a set of rules about what is and isn't a dangerous situation. If you allow the robots a finite list of behaviors (flee, attack, take cover, etc.), they could try new things depending on the situation and record and/or broadcast the results of the engagement. Things that work get used more often; things that don't work get used less often.

Now all it takes is one bug in the program for a robot to identify civilians as enemies. Since every time the robot attacks an unarmed civilian it will probably win, this behavior could quickly become dominant, spreading like a virus to the other robots in the group.
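
That feedback loop is easy to sketch (Python; everything here is invented for illustration): behaviors that report a "win" are reweighted upward, so a single buggy reward signal eventually crowds out every other behavior.

```python
import random

# One weight per allowed behavior; all start equal.
weights = {"flee": 1.0, "take_cover": 1.0, "attack": 1.0}

def choose_behavior() -> str:
    names = list(weights)
    return random.choices(names, weights=[weights[n] for n in names])[0]

def record_outcome(behavior: str, won: bool) -> None:
    """Multiplicative update: wins reinforce a behavior, losses suppress it."""
    weights[behavior] *= 1.2 if won else 0.8

# The bug: attacking an unarmed target always "wins", so the faulty reward
# signal steadily makes the harmful behavior dominant.
for _ in range(100):
    b = choose_behavior()
    record_outcome(b, won=(b == "attack"))

print(weights)  # "attack" now dwarfs "flee" and "take_cover"
```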

What won't happen, though, is the robot changing its own code or suddenly learning subjects it doesn't have algorithms to learn. The robot won't rewrite its basic command system, because the command system isn't designed to learn.

Basically, the robot is closed out of the command system because there isn't an algorithm that allows that behavior to be edited. The enemy is closed out because the commands would be sent via encrypted signals (no way, short of treason, will those be broken).


RE: What a load of crap
By croc on 2/17/2009 5:42:34 PM , Rating: 2
This whole topic gives me a new meaning for 'war-driving'...


RE: What a load of crap
By mindless1 on 2/17/2009 9:47:26 PM , Rating: 2
No way short of treason... because when you're seconds away from being tortured to death by the enemy, or THEIR robots, it would be unheard of to give up any info to save your own life? Your main concern isn't that the other captives will give up the info anyway; it's that if you survive you might be found guilty of treason later (if the robots don't kill everyone anyway)?


RE: What a load of crap
By mindless1 on 2/17/2009 9:44:06 PM , Rating: 2
If they do what is suggested, yes, yes they will.

What is ultimately the most fair and ethical thing for the war robot to do? Kill all soldiers, anyone who poses any kind of potential threat to others by supporting, carrying, or being in any way involved with weapons.

The only /logical/ thing a robot could do is exterminate all who seek to engage in war, then keep warlike movements from gaining sufficient strength in the future.

If there is a low-level command to shut down the system, aren't we opening up a huge security hole for the enemy to capture and exploit? Use very deep encryption, perhaps? Unique identifiers and authentication keys, adding to the complexity of the system so that even fewer of those deploying, using, and designing them know what to do when things go wrong?

After all, if there's one thing we always have plenty of in a war zone, it's robotic engineers that can take out haywire killer robots.


RE: What a load of crap
By MrPoletski on 2/18/2009 8:45:53 AM , Rating: 2
Cyberdyne Systems was infected with a virus that gave their robots autonomous thought.

While a virus giving robots autonomous thought is fanciful, the idea of these military robots contracting a virus is absolutely not.


RE: What a load of crap
By WayneG on 2/17/2009 1:05:00 PM , Rating: 2
So who made it in the first place?!? :O


RE: What a load of crap
By DEVGRU on 2/17/2009 10:48:05 AM , Rating: 2
quote:
Scientists have been pursuing Strong AI for decades *unsuccessfully* and now people fear it will happen accidentally because too many people are working on too many lines of code.


Yeah. Unless the military has a super-secret self-learning CPU somewhere, this is completely moot. The only way we are going to get to a 'Skynet situation' is if technology advances to the point where silicon and software are both able to change on their own with zero input from us squishies. Also, why does AI automatically mean 'death to humans'? I realize the article is talking about military AI/robotics, but I'd highly recommend reading any of the books in the 'Bolo' series by Keith Laumer. They're great reads, with a prime example of an AI that doesn't view humanity as something to be exterminated (and the Bolos themselves are weapons of war).


RE: What a load of crap
By djkrypplephite on 2/17/2009 1:09:59 PM , Rating: 3
It doesn't. The fact that they would be autonomously killing humans is the trouble. We can't guarantee that the software wouldn't go haywire for some reason and start killing everyone. Whether or not it thinks of itself in a sentient manner is not the issue. We're talking much more basic: how can we make it discern friend from foe (which we already can't, apparently), and if it has a bug, we can't stop it from firing on friendlies or civilians, because it is autonomous.


RE: What a load of crap
By mindless1 on 2/17/2009 9:49:59 PM , Rating: 3
We're talking more basic than that. Set a DOD desktop computer in front of the enemy and see if they can get into it in any way, even if that means attacking the hardware because the data is encrypted.

If they can, it would be unreasonable to think that if we send a much newer computer design, inside a robot, into their camp, they won't have every reason to hack it.

Robot virus FTW!


RE: What a load of crap
By MrPoletski on 2/18/2009 8:51:08 AM , Rating: 2
quote:
Also, why does AI automatically mean 'death to humans'?


Have you seen how we behave? ;)


RE: What a load of crap
By greylica on 2/17/2009 11:06:49 AM , Rating: 1
Yes, using Windows systems would be far better; at least then a human hacker could take command and install new malware...

You'd see the real Trojan horses...


RE: What a load of crap
By The0ne on 2/17/2009 1:08:59 PM , Rating: 2
I used to have an interest in programming a smart AI too, but have since given up. It takes patience and a lot of time. I agree with your comments, though. There are plenty of talented people in the AI field and plenty of work going on around it, but unfortunately a self-learning, or even very smart, AI isn't anywhere near where most would like to think we are. Same for robotics, although there have been some pretty innovative robot designs (not the AI itself) in recent years.

Having said that, I'm not entirely sure the author put such consideration into this article before publishing it. Purely throwing out articles based on one person's comments, and limited comments at that, as news isn't wise. For one, it just stirs the crazy to be even more crazy! :)


RE: What a load of crap
By Rodney McNaggerton on 2/17/2009 2:26:12 PM , Rating: 2
The idea isn't a load of crap, this article is.


RE: What a load of crap
By Gul Westfale on 2/17/2009 10:24:39 PM , Rating: 1
so a robot going "terminator"...

navybot: lower your gun and surrender!
enemy soldier: NO (starts shooting)
navybot: WHAT DON'T YOU FUCKING UNDERSTAND? YOU WANT ME TO FUCKING TRASH YOUR FUCKING LIGHTS? YOU'RE A FUCKING PRICK!
enemy soldier: why so serious?


RE: What a load of crap
By Azsen on 2/18/2009 3:30:35 PM , Rating: 2
Ahahahaha gold!

Dunno why they're rating you down.


RE: What a load of crap
By sweetsauce on 2/23/2009 2:09:20 PM , Rating: 2
you forgot

enemy soldier: you mad bro?


RE: What a load of crap
By Gideon on 2/18/2009 2:41:53 AM , Rating: 2
100% agree; that's why I think the article heading is terribly misguided (good for clicks though :D). No one is actually talking about these robots creating a Strong AI out of the blue, but rather about them screwing up and causing friendly fire or doing something else unexpected that costs human lives.

With semi-autonomous robots actually pulling the trigger, the risk of something like that happening is considerably higher than with previous complex military systems.


RE: What a load of crap
By MrPoletski on 2/18/2009 8:40:43 AM , Rating: 2
well at least we know that if we ever lose any of these robots..

that they'll be back...


RE: What a load of crap
By TheOneStorm on 2/19/2009 11:53:31 AM , Rating: 2
I'd fear ethical-laws programming more than AI. The storyline depicted in "I, Robot" relates to ethical-law programming more so than to robots turning against their creators, as in the Terminator series.

In "I, Robot" the machines' ethical programming reached the point where humans simply living was judged an endangerment of their own lives -- that robots must control our lives for our own sake, because humans are susceptible to free will and have the possibility of doing wrong, inevitably leading to war and other human-on-human violence. How could ethical programming, in the sense of instilling morality in AI (which is what they're trying to accomplish), ever logically see the good in HELPING us in war, rather than backing away from war and defending itself?

I'm all for AI, and I couldn't care less about these claims. I just thought I'd chime in, because this is one of the first times I've heard any military leadership mention "ethical programming".

I think MS has probably helped instill this fear of too many people working on a large codebase that could never be bug-free.
<joke>In my not-so-humble opinion, as long as MS isn't the creator of the AI "operating system", we should be fine.</joke> :)


RE: What a load of crap
By SiliconAddict on 2/20/2009 12:28:25 AM , Rating: 2
Hmmm, and just because we've had decades of failures to create a real, honest-to-God AI means it will never happen? Look at computing power: it has come a hell of a lot closer to the capability of the human brain in the past 10 years than in the 20 before that. Look at our ability to program. How the hell do we know where the tipping point is between a program designed to be intelligent in a narrow confine and one that has the ability to program itself? Something that starts doing things on its own could easily be seen as simply a programming error on the part of a human and overlooked. This is where "the code is getting too big" comes into play.
There is a certain amount of arrogance, based on past failures, in your statement, and that is exactly why an article like this should be "considered". No, I'm not talking OMG! RUN FOR THE HILLS! THE ROBOTS ARE GONNA GET US! DESTROY ALL TECH NOW BEFORE IT'S TOO LATE!!!!111oneoneone
But start taking this seriously as we move forward.


RE: What a load of crap
By Larrymon2000 on 2/23/2009 1:24:36 AM , Rating: 2
AI is possible, but I think the problem of self-learning and run-time self-programming of new behavior is a different story. I mean, you can create a set of choices and then use sensory information to effect a decision by the system, but providing ways to adapt new approaches is a different thing. Of course, one could argue that we could just provide an infinitely large set of atomic subroutines and then let the system arrange the order of execution and the choice of subroutines as it's running. In essence, this really isn't *too* different from humans.
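
A toy sketch of that last idea (Python; everything here is invented for illustration): the system never writes new code, it only rearranges a fixed library of atomic subroutines at run time.

```python
import random

# A fixed library of atomic actions; "learning" is confined to choosing
# their order at run time.

def scan(state: dict) -> dict:
    state["scanned"] = True
    return state

def advance(state: dict) -> dict:
    state["position"] = state.get("position", 0) + 1
    return state

def report(state: dict) -> dict:
    print("state:", state)
    return state

ATOMIC_SUBROUTINES = [scan, advance, report]

def run_plan(state: dict, plan: list) -> dict:
    for action in plan:
        state = action(state)
    return state

# At run time the system picks an ordering -- here randomly; a smarter
# system would pick based on sensor input and past results.
plan = random.sample(ATOMIC_SUBROUTINES, k=len(ATOMIC_SUBROUTINES))
run_plan({}, plan)
```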


"Folks that want porn can buy an Android phone." -- Steve Jobs

Related Articles
Dawn of the Drones
June 6, 2008, 6:15 PM
War Robots Still in Iraq
April 17, 2008, 10:20 AM
Can Robots Commit War Crimes?
February 29, 2008, 2:37 PM













botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki