



A new Navy-funded report warns against a hasty deployment of war robots, and urges programmers to include ethics subroutines -- a warrior code of sorts. The alternative, they say, is the possibility of a robotic atrocity akin to those of the Terminator and other sci-fi movies.  (Source: Warner Brothers)
Robots must learn to obey a warrior code, but as their intelligence increases, keeping them from turning on their masters may become increasingly difficult

Robots gone rogue, killing their human masters, is rich science fiction fodder, but could it become reality?  Some researchers are beginning to ask that question as artificial intelligence advances continue and the world's high-tech nations begin to deploy war robots to the battlefront.  Currently, the U.S. armed forces use many robots, but they all ultimately have a human behind the trigger.  However, there are plans to develop and deploy fully autonomous systems as the technology improves.

Some mistakenly believe that such robots would only be able to operate within a defined set of behaviors.  Patrick Lin, the chief compiler of a new U.S. Navy-funded report, explains: "There is a common misconception that robots will do only what we have programmed them to do.  Unfortunately, such a belief is sorely outdated, harking back to a time when . . . programs could be written and understood by a single person."

The new report points out that the size of artificial intelligence projects will likely make their code impossible to fully analyze and dissect for possible dangers.  With hundreds of programmers working on millions of lines of code for a single war robot, says Dr. Lin, no one has a clear understanding of what is going on, at a small scale, across the entire code base.

He says the key to avoiding robotic rebellion is to include "learning" logic which teaches the robot the rights and wrongs of ethical warfare.  This logic would be mixed with traditional rules-based programming.
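
To illustrate the idea (this sketch is not from the report, and every function name and threshold in it is invented), such a mix might gate a learned threat assessment behind hard-coded rules of engagement, roughly like this hypothetical Python snippet:

    # Hypothetical sketch only: "learning" logic gated by fixed rules of engagement.
    # Nothing here comes from the Navy report; names and thresholds are invented.

    def learned_threat_score(contact):
        # Stand-in for a trained model; returns a confidence in [0, 1]
        # that the contact is a hostile combatant.
        return contact.get("model_score", 0.0)

    def rules_of_engagement_allow(contact):
        # Hard-coded, human-written constraints that always apply,
        # no matter what the learned component says.
        if contact.get("is_surrendering", False):
            return False
        if contact.get("in_protected_zone", False):
            return False
        if not contact.get("positively_identified_armed", False):
            return False
        return True

    def engagement_decision(contact, threshold=0.95):
        # Engage only if BOTH the fixed rules and the learned score agree.
        return rules_of_engagement_allow(contact) and \
               learned_threat_score(contact) >= threshold

    contact = {"model_score": 0.99, "positively_identified_armed": True,
               "is_surrendering": False, "in_protected_zone": False}
    print(engagement_decision(contact))   # True only when rules and model agree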

The new report looks at many issues surrounding the field of killer robots.  In addition to code malfunction, another potential threat is a terrorist attack that reprograms the robots, turning them on their owners.  And one tricky issue discussed is the question of who would take the blame for a robotic atrocity -- the robot, the programmers, the military, or the U.S. President.

The Ethics and Emerging Technology department of California State Polytechnic University created the report for the U.S. Navy's Office of Naval Research.  It warns the Navy about the dangers of premature deployment or complacency on potential issues.  The U.S. Congress has mandated that a "deep strike" unmanned aircraft be operational by 2010, and that by 2015 one third of ground combat vehicles be unmanned.

The report warns, "A rush to market increases the risk for inadequate design or programming. Worse, without a sustained and significant effort to build in ethical controls in autonomous systems . . . there is little hope that the early generations of such systems and robots will be adequate, making mistakes that may cost human lives."

Simple laws of ethics, such as Isaac Asimov's three laws of robotics, the first of which forbids robots from harming humans, will not be sufficient, say the report's authors.  War robots will have to kill, but they will have to understand the difference between enemies and noncombatants.  Dr. Lin describes this challenge stating, "We are going to need a code.  These things are military, and they can’t be pacifists, so we have to think in terms of battlefield ethics. We are going to need a warrior code."

The U.S. Army had a scare earlier this year when a software malfunction caused war robots deployed in the field to aim at friendly targets.  While the humans still had control of the trigger, the incident highlighted the challenges a fully autonomous system would face.  The offending robots were serviced and are still deployed in Iraq.



Comments



What a load of crap
By tim851 on 2/17/2009 9:52:48 AM , Rating: 5
Scientists have been pursuing Strong AI for decades *unsuccessfully*, and now people fear it will happen accidentally because too many people are working on too many lines of code.

Sure, that's why all these Linux distros have gone C3PO on us...




RE: What a load of crap
By Rugar on 2/17/2009 10:05:21 AM , Rating: 2
Heh... I wish I could +1 you for the C3PO. Clearly this is about to happen because Cyberdyne found technology from the future and incorporated it into our new killer robots.


RE: What a load of crap
By quiksilvr on 2/17/2009 10:20:39 AM , Rating: 2
Oh for God's sake. AN A.I. CANNOT MAKE ITS OWN ALGORITHMS WITHOUT US EITHER PUTTING THEM THERE OR TELLING IT TO MAKE THEM ITSELF. How can an A.I. that was originally designed to receive orders, shoot and move magically get insane processing power out of thin air and decide: "I'm gonna doughnut across the desert!" and just randomly write its own line of code saying so? It doesn't make any goddamn sense! If it does end up doing a doughnut across the desert, 99.9999999% of the time it's just buggy and the programmer screwed up somewhere.


RE: What a load of crap
By Rugar on 2/17/2009 10:35:22 AM , Rating: 2
Wow... That so totally had to do with my comment.

And by the way, it's because of the reverse engineered chips!


RE: What a load of crap
By callmeroy on 2/17/2009 11:33:40 AM , Rating: 5
First...enough already with the dramatised replies to such articles.

My hunch, albeit just a hunch...tells me the team researching this and writing this kind of code are a notch above the average run-of-the-mill programmer who just got their degree. Now I don't know anyone personally on these forums - so perhaps some of you are akin to a programming God, maybe you have multiple PhDs, perhaps you already have the foundations down for designing a time machine, curing cancer and solving the problem of world hunger.....BUT I think you also might just be downplaying the skills of these folks and their knowledge just a tad.

I'm sure there's more to it than what we've already discussed here...


RE: What a load of crap
By Rugar on 2/17/09, Rating: -1
RE: What a load of crap
By arazok on 2/17/2009 2:53:01 PM , Rating: 5
quote:
Do you seriously think that I am actually suggesting that there is a company named Cyberdyne which reverse engineered a chip taken from the arm of a destroyed T-800 sent back in time to kill Sarah Connor?


That’s what I thought. You have no idea how relieved I am to know you weren’t serious.


RE: What a load of crap
By Seemonkeyscanfly on 2/17/2009 5:23:03 PM , Rating: 2
quote:
Do you seriously think that I am actually suggesting that there is a company named Cyberdyne which reverse engineered a chip taken from the arm of a destroyed T-800 sent back in time to kill Sarah Connor?

quote:
That's what I thought. You have no idea how relieved I am to know you weren't serious.


Yea, I was worried about that too. In real life the company is called Cybertech Autonomics LLC. They had to change the name in the movie. The movie did not want to pay the royalty rates to use the real name of the company. :)


RE: What a load of crap
By bigboxes on 2/17/2009 3:39:25 PM , Rating: 5
Are you serious about being serious about this dude being serious? You can't be serious. Seriously.


RE: What a load of crap
By Seemonkeyscanfly on 2/17/2009 5:27:59 PM , Rating: 5
I think you missed a great dude opportunity....

new quote: "Dude are you a serious dude about this dude being serious? Dude can't be serious, dude. Seriously dude."


RE: What a load of crap
By MrPoletski on 2/18/2009 8:43:16 AM , Rating: 2
dudelerious.


RE: What a load of crap
By JKflipflop98 on 2/21/2009 2:52:45 AM , Rating: 2
Dude. . .


RE: What a load of crap
By bohhad on 2/19/2009 2:10:47 PM , Rating: 2
SRSLY


RE: What a load of crap
By MamiyaOtaru on 2/22/2009 5:56:28 PM , Rating: 2
NO WAI


RE: What a load of crap
By gamerk2 on 2/17/2009 11:53:31 AM , Rating: 5
Actually, some advanced programming languages allow for the code itself to be replaced automatically at run-time depending on certain variables, so it's quite possible for code to be "written" without any human involvement.
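
As a purely illustrative aside, here is a toy Python sketch of that kind of run-time code replacement; the scenario and names are made up, but the mechanism (a new code object generated and installed while the program runs) is real:

    # Illustrative only: new code is generated and installed at run time.
    def make_policy(threshold):
        src = ("def policy(reading):\n"
               "    return 'engage' if reading > {t} else 'hold'\n").format(t=threshold)
        namespace = {}
        exec(src, namespace)            # a brand-new code object, written at run time
        return namespace["policy"]

    policy = make_policy(0.9)           # behavior the humans originally "shipped"
    observed = [0.95, 0.97, 0.93]
    if sum(observed) / len(observed) > 0.9:
        policy = make_policy(0.5)       # replaced automatically; no human edits the source

    print(policy(0.7))                  # prints 'engage' under the rewritten policy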


RE: What a load of crap
By quiksilvr on 2/17/09, Rating: -1
RE: What a load of crap
By TSS on 2/17/2009 1:37:30 PM , Rating: 3
the problem isn't in telling the bots what to do. it's telling them what to do, having a malfunction, and them not stopping.

we're already telling them to kill other humans, although to us those humans are "the enemy".

this discussion is about giving robots ethics, in other words, allowing them to separate friend from foe themselves.

which, in my opinion, is the first step to the terminator universe. hell, we try to teach a robot ethics, then a human orders it to go kill another human, but not *those* humans.

where in the process does the bot learn right from wrong? and what would the bot perceive as wrong? would a bot operating for nazi germany perceive it as ethically wrong to kill jews, and rise up against its masters (which would be a terminator situation)?

this debate is far more philosophical than just debating whether the code is *capable* of doing it.

any player of world of warcraft will tell you that eventually, code will start to get a mind of its own. i swear that game just wants me dead at times (out of the blue, for no reason, pulling an entire FIELD of npc's).


RE: What a load of crap
By rudolphna on 2/17/2009 2:12:45 PM , Rating: 1
Great post, exactly my thoughts. Oh, and on the WoW thing I know exactly what you speak of. Sometimes I think there is a blizz employee sitting at a screen screwing around and doing it on purpose, lol.


RE: What a load of crap
By rykerabel on 2/17/2009 3:02:11 PM , Rating: 2
RE: What a load of crap
By GaryJohnson on 2/17/2009 2:15:24 PM , Rating: 2
You had me up until "world of warcraft".


RE: What a load of crap
By TSS on 2/17/2009 6:21:10 PM , Rating: 2
it's good to know my ideas and observations can be nullified because i happen to enjoy a particular silly game (which i quit 2 weeks back, mind you).

here's why the comparison is valid: the NPC's, or Non-Playable Characters, are completely AI-driven. a human told them to patrol that area, and attack anybody within 10 yards of range. at least at a certain level, but that's no different from a certain level of threat a bot might experience in real life. otherwise, they are completely devoid of any human interaction.

there's a field of them each watching their own 10 yard space. so imagine my surprise when AI up to 200 yards away starts charging for me out of the blue to kill me.

suppose you have several AI bots patrolling your base (in real life). out of the blue they all attack everybody within sight, which they aren't supposed to.

this is the greatest fear of armed bots, and in WoW i've already seen it happen. the game consists of millions of lines of code, like real life bots. the NPC's have no human controlling them, like real life bots (the ones we're discussing here, at least). they have a built-in response to threats, like real life bots, and they will engage if i pose a threat to them, like real life bots.

you might laugh now, because i mention world of warcraft. if this situation actually happens in 10-20 years, i'll laugh my ass off. people might hate me for it, but i'll laugh even harder at it. poetic justice i suppose.

and get my lvl 80 mage to kill the rogue bots, but that's a different discussion.


RE: What a load of crap
By jconan on 2/24/2009 12:30:09 AM , Rating: 2
yea, there'll be a lot of friendly fire among allies when autonomous droids are deployed. wonder who'll be in the hot seat when this comes out??


RE: What a load of crap
By SleepyGreg on 2/17/2009 2:56:08 PM , Rating: 2
I think the concern is that the AI is going to have a learning ability (necessary to overcome new experiences out in the field). This ability to learn and write its own code based on experience, coupled with millions of lines of existing human code and its inevitable bugs, gives a potential for unpredictable outcomes. I'd say it's a very real threat. We're not just talking about a hardwired machine here; it's a dynamic, evolving system with millions of permutations. And there will be "surprises".


RE: What a load of crap
By MozeeToby on 2/17/2009 1:51:36 PM , Rating: 2
Very, very true. Our military robots aren't just going to 'wake up' one day and decide to start killing everyone.

I could imagine, though, writing a learning algorithm to help the robot identify threats. If the robots could communicate their improvements to the algorithm (based on threats detected and destroyed, for instance), it would only take one robot to learn the wrong identifiers to bring down the whole network of robots.

Of course, even so I would think direct commands would still work. As long as there is a low level command to shut down the system (a command that doesn't go through the threat detection system) there shouldn't be a problem.
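
A minimal sketch of that idea, assuming a hypothetical message handler (none of these names come from any real system): the shutdown path is checked before anything touches the learned threat-detection logic.

    # Hypothetical sketch: a shutdown command handled before any learned logic runs.
    class Robot:
        def __init__(self):
            self.enabled = True

        def handle_message(self, msg):
            # Low-level command path: checked first and never routed through
            # the threat-detection or learning subsystems.
            if msg == "SHUTDOWN":
                self.enabled = False
                return "halted"
            if not self.enabled:
                return "ignored"
            return self.threat_pipeline(msg)

        def threat_pipeline(self, msg):
            # Stand-in for the complex, learned part of the system.
            return "evaluating: " + msg

    r = Robot()
    print(r.handle_message("contact at 200m"))   # goes through the learned pipeline
    print(r.handle_message("SHUTDOWN"))          # bypasses it entirely
    print(r.handle_message("contact at 50m"))    # ignored once halted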


RE: What a load of crap
By cfaalm on 2/17/2009 5:01:05 PM , Rating: 2
Since this is so close to SF: If MS can build a firewall, then the robot could do that too, at some point in time ;-)

If it all depends on code, how do you prevent the robot or your enemy from writing/adapting code to prevent a remote shutdown? Could your enemy hack into the robot's system? So you want it to be open to you, but sealed tight to your enemies and to the robot itself. To an intelligent being, that would feel like a mind prison: "We will tell you what to do, don't make up your own mind."


RE: What a load of crap
By MozeeToby on 2/17/2009 5:19:18 PM , Rating: 2
I didn't mean for my post to come off as Sci-Fi, so let me explain my thoughts more thoroughly. I can imagine writing software that allows a swarm of robots to communicate with each other such that each robot can send information about what was happening around it when it is destroyed.

This information could be used to build a set of rules about what is and isn't a dangerous situation. If you allow the robots a finite list of behaviors (flee, attack, take cover, etc.) they could try new things depending on the situation, and record and/or broadcast the results of the engagement. Things that work get used more often, things that don't work get used less often.

Now all it takes is one bug in the program for a robot to identify civilians as enemies. Since every time the robot attacks an unarmed civilian it will probably win, this behavior could quickly become dominant, spreading like a virus to the other robots in the group.

What won't happen, though, is the robot changing its own code or suddenly learning subjects that it doesn't have algorithms to learn. The robot won't re-write its basic command system because the command system isn't designed to learn.

Basically, the robot is closed out of the command system because there isn't an algorithm that allows that behavior to be edited. The enemy is closed out because the commands would be sent via encrypted signals (no way, short of treason, will those be broken).
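
For what it's worth, a toy Python sketch of that outcome-weighted behavior sharing (all numbers and names invented) shows how a single buggy "win" signal could come to dominate:

    # Toy sketch of outcome-weighted behavior sharing; all values are invented.
    import random

    behaviors = {"take_cover": 1.0, "flee": 1.0, "attack": 1.0}

    def choose(weights):
        total = sum(weights.values())
        r, acc = random.uniform(0, total), 0.0
        for name, w in weights.items():
            acc += w
            if r <= acc:
                return name
        return name                      # fallback for floating-point edge cases

    def update(weights, behavior, won):
        # Results are broadcast to the swarm: winners get used more often.
        weights[behavior] *= 1.5 if won else 0.7

    # If a bug labels unarmed civilians as enemies, "attack" always "wins",
    # so its weight snowballs across every robot receiving the broadcasts.
    for _ in range(20):
        b = choose(behaviors)
        update(behaviors, b, won=(b == "attack"))

    print(behaviors)                     # "attack" tends to dominate quickly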


RE: What a load of crap
By croc on 2/17/2009 5:42:34 PM , Rating: 2
This whole topic gives me a new meaning for 'war-driving'...


RE: What a load of crap
By mindless1 on 2/17/2009 9:47:26 PM , Rating: 2
No way short of treason... because when you're seconds away from being tortured to death by the enemy, or THEIR robots, it would be unheard of to give up any info to save your own life? Your main concern isn't that the other captives will give up the info anyway; it's that if you survive you might be found guilty of treason later (if the robots don't kill everyone anyway)?


RE: What a load of crap
By mindless1 on 2/17/2009 9:44:06 PM , Rating: 2
If they do what is suggested, yes, yes they will.

What is ultimately the most fair and ethical thing for the war robot to do? Kill all soldiers, anyone who poses any kind of potential threat to others by supporting, carrying, or being in any way involved with weapons.

The only /logical/ thing a robot could do is exterminate all who seek to engage in war, then keep warlike movements from gaining sufficient strength in the future.

If there is a low level command to shut down the system, aren't we opening up a huge security hole for the enemy robots to capture and exploit? Use very deep encryption perhaps? Unique identifiers and authentication keys adding to the complexity of the system, so even fewer of those deploying, using, and designing them know what to do when things go wrong?

After all, if there's one thing we always have plenty of in a war zone, it's robotic engineers that can take out haywire killer robots.


RE: What a load of crap
By MrPoletski on 2/18/2009 8:45:53 AM , Rating: 2
Cyberdyne Systems were infected with a virus that gave their robots autonomous thought.

While a virus giving robots autonomous thought is fanciful, the idea of these military robots contracting a virus is absolutely not.


RE: What a load of crap
By WayneG on 2/17/2009 1:05:00 PM , Rating: 2
So who made it in the first place?!? :O


RE: What a load of crap
By DEVGRU on 2/17/2009 10:48:05 AM , Rating: 2
quote:
Scientists have been pursuing Strong AI for decades *unsuccessfully* and now people fear it will happen accidentally because too many people are working on too many lines of code.


Yeah. Unless the military has a super-secret self-learning CPU somewhere, this is completely moot. The only way we are going to get to a 'Skynet situation' is if technology advances to the point where silicon and software are both able to change on their own with zero input from us squishies. Also, why does AI automatically mean 'death to humans'? I realize the article is talking about military AI/robotics, but I'd highly recommend reading any of the books in the 'Bolo' series by Keith Laumer. They're great reads with a prime example of AI that doesn't view humanity as something to be exterminated (and Bolos themselves are weapons of war).


RE: What a load of crap
By djkrypplephite on 2/17/2009 1:09:59 PM , Rating: 3
It doesn't. The fact that they would be autonomously killing humans is the trouble. We can't guarantee that its software wouldn't go haywire for any reason and start killing everyone. Whether or not it thinks of itself in a sentient manner is not the issue. We're talking much more basic, as in how can we make it discern friend from foe (which we already can't, apparently), and if it has a bug, we can't stop it from firing on friendlies or civilians because it is autonomous.


RE: What a load of crap
By mindless1 on 2/17/2009 9:49:59 PM , Rating: 3
We're talking more basic than that. Set a DOD desktop computer in front of the enemy and see if they can get into it in any way, even if that's using the hardware because the data is encrypted.

If they can, it would be unreasonable to think that if we send a much newer computer design, inside a robot, into their camp, they won't have every reason to hack it.

Robot virus FTW!


RE: What a load of crap
By MrPoletski on 2/18/2009 8:51:08 AM , Rating: 2
quote:
Also, why does AI automatically mean 'death to humans'?


Have you seen how we behave?;)


RE: What a load of crap
By greylica on 2/17/2009 11:06:49 AM , Rating: 1
Yes, using Windows systems will be far better; at least a human hacker could take command and install new malware...

You will see the real Trojan Horses...


RE: What a load of crap
By The0ne on 2/17/2009 1:08:59 PM , Rating: 2
I used to have an interest in programming a smart AI too but have since given up. It takes patience and a lot of time. I agree with your comments though. There are plenty of talented people in the AI field and there is plenty of work going on around it, but unfortunately a self-learning or even very smart AI isn't anywhere near where most would like to think we are. Same for robotics, although there have been some pretty innovative robot designs (not the AI itself) in recent years.

Having said that, I'm not entirely sure the article's author put such consideration in before publishing it. Purely throwing out articles based on one person's comments, and limited comments at that, as news isn't wise. For one, it just stirs the crazy to be even more crazy! :)


RE: What a load of crap
By Rodney McNaggerton on 2/17/2009 2:26:12 PM , Rating: 2
The idea isn't a load of crap, this article is.


RE: What a load of crap
By Gul Westfale on 2/17/2009 10:24:39 PM , Rating: 1
so a robot going "terminator"...

navybot: lower your gun and surrender!
enemy soldier: NO (starts shooting)
navybot: WHAT DON'T YOU FUCKING UNDERSTAND? YOU WANT ME TO FUCKING TRASH YOUR FUCKING LIGHTS? YOU'RE A FUCKING PRICK!
enemy soldier: why so serious?


RE: What a load of crap
By Azsen on 2/18/2009 3:30:35 PM , Rating: 2
Ahahahaha gold!

Dunno why they're rating you down.


RE: What a load of crap
By sweetsauce on 2/23/2009 2:09:20 PM , Rating: 2
you forgot

enemy soldier: you mad bro?


RE: What a load of crap
By Gideon on 2/18/2009 2:41:53 AM , Rating: 2
100% agree, that's why I think the article heading is terribly misguided (good for clicks though :D). No-one is actually talking about these robots creating a Strong AI out of the blue, rather about them screwing up and causing friendly fire or doing something else unexpected that costs human lives.

With semi-autonomous robots actually pulling the trigger the risk of something like that happening is considerably higher than with previous complex military systems.


RE: What a load of crap
By MrPoletski on 2/18/2009 8:40:43 AM , Rating: 2
well at least we know that if we ever lose any of these robots..

that they'll be back...


RE: What a load of crap
By TheOneStorm on 2/19/2009 11:53:31 AM , Rating: 2
I'd fear ethical-law programming more than AI. The storyline depicted in "I, Robot" is about ethical-law programming more so than about robots turning against their human creators, as in the Terminator series.

In "I, Robot" the ethical programming of the machines reached the point where the machines concluded that humans themselves, simply by living, endanger their own lives. Robots must control our lives for our own sake, because humans are susceptible to free will and have the possibility of doing wrong; inevitably, that means hindering war and other human-on-human violence. How could ethical programming, in the sense of instilling morality in a form of AI (which is what they're trying to accomplish), ever logically see the good in HELPING us in war, rather than backing away from war and defending itself?

I'm all for AI, and I could care less about these claims. I just thought I'd chime in because this is one of the first times I've heard any military leadership mention "ethical programming".

I think MS has probably helped instill this fear of too many people working on a large codebase that could never be bug-free.
<joke>In my not-so-humble opinion, as long as MS isn't the creator of the AI "operating system", we should be fine.</joke> :)


RE: What a load of crap
By SiliconAddict on 2/20/2009 12:28:25 AM , Rating: 2
Hmmm, and just because we have had decades of failures to create a real, honest-to-God AI means that it will never happen? Look at computing power. It's come a hell of a lot closer to the capability of the human brain in the past 10 years than in the 20 before that. Look at our ability to program. How the hell do we know where the tipping point is between a program designed to be intelligent in a narrow confine and one that has the ability to program itself? Something that starts doing things on its own could easily be seen as simply a programming error on the part of a human and overlooked. This is where the "code is getting too big" comes into play.
There is a certain amount of arrogance based on past failures in your statement, and that is exactly why an article like this should be "considered". No, I'm not talking OMG! RUN FOR THE HILLS! THE ROBOTS ARE GONNA GET US! DESTROY ALL TECH NOW BEFORE ITS TOO LATE!!!!111oneoneone
But start taking this seriously as we move forward.


RE: What a load of crap
By Larrymon2000 on 2/23/2009 1:24:36 AM , Rating: 2
AI is possible, but I think the problem of self-learning and run-time self-programming of new behavior is a different story. I mean, you can create a set of choices and then use sensory information to effect a decision by the system, but providing ways to adapt new approaches is a different thing. Of course, one could argue that we could just provide an infinitely large set of atomic subroutines and then let the system arrange the order of execution and the choice of subroutines as it's running. In essence, this really isn't *too* different from humans.


Stoopid
By lebe0024 on 2/17/2009 10:25:31 AM , Rating: 2
FTFA: "There is a common misconception that robots will do only what we have programmed them to do."

Show me any program that doesn't do what it was programmed to do.




RE: Stoopid
By Moohbear on 2/17/2009 11:10:51 AM , Rating: 4
Well, there's what the program is intended to do and there's what it actually does. And then, there are just plain old bugs... That leaves a lot of wiggle room for unexpected behavior, which is a little disturbing when the said computer is wielding a gun.


RE: Stoopid
By callmeroy on 2/17/2009 11:26:33 AM , Rating: 3
Are you serious dude?

There are TONS of programs that do OTHER than what the programmer designed, wrote or intended the program to do. It's called having bugs in the code.

My counter question is name me ONE piece of software (reasonably speaking here ...don't tell me your 9th grade VB code for "Hello World" is bug free) of significant popularity in the work place (government or private take your pick) that doesn't have at least one bug that was unexpected.


RE: Stoopid
By Divide Overflow on 2/17/2009 1:34:46 PM , Rating: 2
Intentions aren't worth spit. If there's a bug in the code, the machine is still executing its programming perfectly.


RE: Stoopid
By mindless1 on 2/17/2009 9:54:06 PM , Rating: 2
Only if the hardware is also perfect and in perfect working condition, which is an unknown given combat damage.


RE: Stoopid
By Fritzr on 2/18/2009 12:44:10 AM , Rating: 2
Let's put the bug in the target selection code, place the fully armed bot in downtown Wash. DC and set it loose.

A bug can sometimes be an unanticipated feature :P

Remember the point of the article is that when an autonomous weapons system is placed in the field there is a danger of the weapon system engaging targets that it is supposed to protect. All it takes is one or two mistyped characters in the code. You can add safeguards by requiring multiple sources of target verification before allowing damage to the target.

Regardless of safeguards and cutouts, the system will be REQUIRED to damage targets designated unfriendly. What makes a target unfriendly? Weapon pointed at robot? Supporting troops really need to watch the way they hold their rifles. Wrong uniform? So the enemy changes into civvies. Carrying a weapon & not friendly uniform? Put on an outfit that looks friendly, walk up to robot and apply shaped charge with delay fuse and walk away.

These and many other real life situations will be faced by a real combat robot. Now, will you guarantee all the code involved in decision making to be error free? Look at medical equipment today. Due to the danger to patients, the code in medical devices is heavily bug-checked and tested. In spite of this there have been devices, including a robotic irradiation machine, deployed with deadly bugs.

Yes, it is science fiction. NOT science fantasy. Real hard science fiction is building a story around reasonable extrapolations of what can be done if selected advances and/or discoveries become reality. Geo-sync satellite? Arthur C. Clarke. Water bed? Robert A. Heinlein. Many devices in use today, including cell phones, personal computers, pocket music players, the internet, video phones, etc., were "invented" by science fiction authors who then wrote a story that included the idea that the fanciful device was an everyday item.
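
On the multiple-source verification safeguard mentioned above, a hypothetical sketch (invented names, not any real targeting system) might look like this:

    # Hypothetical safeguard sketch: several independent confirmations required
    # before a target may be engaged.
    def may_engage(confirmations, required=3):
        independent = [c for c in confirmations if c.get("independent", False)]
        return len(independent) >= required

    sources = [
        {"name": "optical_classifier", "independent": True},
        {"name": "radio_intercept",    "independent": True},
        {"name": "human_observer",     "independent": True},
    ]
    print(may_engage(sources))       # True: three independent confirmations
    print(may_engage(sources[:1]))   # False: a single sensor is not enough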


RE: Stoopid
By Moishe on 2/17/2009 4:05:48 PM , Rating: 2
A bug is unintended but it is still the result of specific programming. No "bug" is going to make the difference between insane and intelligent killer robot and fluffy the clownbot on a leash.

Bugs can be weeded out and fixed before production if given enough time and money. A self-aware robot is not the result of a bug.


RE: Stoopid
By mindless1 on 2/17/2009 9:57:06 PM , Rating: 2
When has anything been given enough time and money to reach this nirvana?

A "bug" can mean many things, like a security flaw the enemy found that we didn't, and you can bet they'll be looking for some.


RE: Stoopid
By MozeeToby on 2/17/2009 2:16:26 PM , Rating: 2
Chess AI

Granted, you could argue that the program was written to play chess but I would argue that the program plays chess an order of magnitude better than the programmer.

The programmer didn't program every specific situation into the program nor program every specific strategy. There are loads of programs with emergent behavior, behavior that wasn't coded for but is an unexpected result of the code. The situation is very common with learning algorithms and can often produce very unusual, unexpected behaviors.


RE: Stoopid
By Yames on 2/17/2009 6:04:12 PM , Rating: 2
Chess AI will "play" better than its programmer, but it "thinks" differently. The basics of the algorithm are not that complicated. The AI plays ahead as many different moves as it can. Weights are assigned to the outcomes (the heart of the algorithm), and the outcome with the most weight is chosen. Of course, weaknesses are built in so we are not obliterated by good algorithms. Only Grand Masters stand a chance against these when they are not restricted. Perhaps the unexpected behavior you are thinking of is expected as a function of the restrictions.

Regardless, the Chess AI has a very limited scope and a single programmer can/should understand it in its entirety. AI for Warbots is another story.
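
To make the look-ahead-and-weigh idea concrete, here is a bare-bones minimax sketch on a toy game; real chess engines add alpha-beta pruning, opening books and much more, and nothing here reflects any particular engine:

    # Generic look-ahead-and-weigh sketch (plain minimax on a toy game).
    def minimax(state, depth, maximizing, moves, apply_move, evaluate):
        options = moves(state)
        if depth == 0 or not options:
            return evaluate(state), None       # the "weight" assigned to this outcome
        best_score = float("-inf") if maximizing else float("inf")
        best_move = None
        for m in options:
            score, _ = minimax(apply_move(state, m), depth - 1,
                               not maximizing, moves, apply_move, evaluate)
            if (maximizing and score > best_score) or (not maximizing and score < best_score):
                best_score, best_move = score, m
        return best_score, best_move

    # Toy "game": the state is a number, each side adds or subtracts 1, higher is better.
    score, move = minimax(0, 3, True,
                          moves=lambda s: [+1, -1],
                          apply_move=lambda s, m: s + m,
                          evaluate=lambda s: s)
    print(score, move)   # the engine looks ahead and picks the highest-weight line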


RE: Stoopid
By xRyanCat on 2/17/2009 7:26:25 PM , Rating: 2
The Chess AI is only better because it's mathematically faster... Just as a computer would be better at multiplying 125 * 438. The programmer could do it, it would just take longer.

And unless programmed to do otherwise the AI will always output the same moves based on the situations it encounters.

Of course computers can do many things that humans can't, but they can't do anything that we can't envision or that we haven't programmed them to do. The Chess "AI" is more Superficial than Artificial. It doesn't "Think" and make choices that deviate from its programming path.


RE: Stoopid
By Larrymon2000 on 2/23/2009 1:27:46 AM , Rating: 2
You're kidding right? You know race conditions exist right? You know that in highly multi-threaded applications, the outcome is almost impossible to determine if there are enough concurrent threads running asynchronously on the same shared pool of data. Of course, programmers ALWAYS work to avoid this type of thing, so it's uncommon. But you could implement race conditions in AI for instance. Create a circuit. Under different circumstances, different paths drive the signal at different times.
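
A classic illustration of such a race (CPython timing specifics vary, so the lost updates may or may not show up on any given run, but the read-modify-write is genuinely unsynchronized):

    # Classic data race: threads doing unsynchronized read-modify-write updates.
    import threading

    counter = 0

    def worker():
        global counter
        for _ in range(100_000):
            counter += 1        # load, add, store: not atomic, so updates can collide

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Without a lock, 400000 is not guaranteed; the outcome depends on thread timing.
    print(counter)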


RE: Stoopid
By Larrymon2000 on 2/23/2009 1:31:13 AM , Rating: 2
But about the whole chess thing, that's just silly. The programmer who developed the algorithm for the chess game will inherently understand what moves to make so that the computer CAN'T produce the right decision for it. It's just a set of analyses based on the situation and simulating the outcomes and finding the optimal ones. Just implementation of combinatorics and optimization. Why do you think Blue Gene is good at it? Because it can go through an infinite number of cases and deduce the best outcome. But it's not learning. It's not developing new strategies and fundamentally changing its algorithms using run-time reflection.


RE: Stoopid
By SiliconAddict on 2/20/2009 12:30:36 AM , Rating: 2
Windows ME.


Isaac Asimov
By gcason on 2/17/2009 1:38:46 PM , Rating: 2
Asimov already covered this in the 40's with his three laws of Robotics.




RE: Isaac Asimov
By JS on 2/17/2009 1:51:04 PM , Rating: 5
Yeah, well, robots who refuse to hurt humans won't be very useful in combat.


RE: Isaac Asimov
By Fritzr on 2/18/2009 11:36:31 AM , Rating: 2
Asimov's Robots could and did kill. The Zeroth law allowed killing of humans when it was necessary to prevent a greater harm to humanity. This 4th Law was added when R. Daneel Olivaw shows up in the Empire Trilogy.

In the earlier books with only Laws 1,2 & 3 operating, hardware and software errors made it possible for a robot to harm or kill human beings.

When you're looking at perfectly functioning code installed in a warbot you need to consider combat damage. The enemy will not check the User's Manual to see what damage they are allowed to inflict :P


RE: Isaac Asimov
By JS on 2/20/2009 3:40:03 PM , Rating: 2
quote:
The Zeroth law allowed killing of humans when it was necessary to prevent a greater harm to humanity.


I have my doubts as to how often that law would kick in for robots serving in current US military operations.


RE: Isaac Asimov
By KristopherKubicki (blog) on 2/17/2009 3:15:43 PM , Rating: 3
... and Asimov still managed to write hundreds of stories about how those laws could be implemented poorly, or where they were in conflict :) Looks like we have a way to go still.


RE: Isaac Asimov
By glitchc on 2/17/2009 9:43:59 PM , Rating: 2
... and some of us still derive great pleasure in reading them. Kudos!


RE: Isaac Asimov
By mindless1 on 2/17/2009 9:59:40 PM , Rating: 2
Then if he has all the answers we just have to find a way to clone enough copies of him to do all the work, educate them all, etc., but with a different education suddenly it's not Isaac anymore except in basic DNA.


Why...
By Cerberus90 on 2/17/2009 12:23:42 PM , Rating: 3
can't they just equip all friendly soldiers with a chip, and program the robots 'NOT' to fire at those chips.

Surely that would be better than teaching it about ethics, as then it might not shoot anyone, and sit down in the middle of the battle and start wondering about the meaning of life.
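
A hypothetical sketch of that IFF-style filter (invented IDs, and the replies below point out why it is not so simple):

    # Hypothetical IFF ("identification friend or foe") filter; IDs are invented.
    FRIENDLY_IDS = {"alpha-1", "alpha-2", "medic-7"}

    def is_valid_target(contact):
        # Never engage anything broadcasting a registered friendly ID.
        return contact.get("iff_id") not in FRIENDLY_IDS

    contacts = [{"iff_id": "alpha-1"}, {"iff_id": None}, {"iff_id": "unknown-99"}]
    print([is_valid_target(c) for c in contacts])   # [False, True, True]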




RE: Why...
By Schrag4 on 2/17/2009 1:13:54 PM , Rating: 2
Friendly fire isn't the only concern here. There's quite an emphasis on leaving civilians unharmed as well. And if you try to hand these chips out to civilians, the enemy's military will end up with them, making the whole chip system meaningless.


RE: Why...
By TSS on 2/17/2009 6:34:08 PM , Rating: 2
that'll work great once the enemy captures one of your bots and reprograms it to shoot everybody with a chip.

while your own bots have to be limited to watch out for civilians, their reprogrammed bots can be let loose in even the most crowded of areas.


RE: Why...
By mindless1 on 2/17/2009 10:02:29 PM , Rating: 2
Presumably the chip is planted under the skin, right? That way, when the enemy captures some soldiers they don't just take them captive; the first thing to do is cut out the chip for their own use, or of course just create fake duplicate chips. Such tech might work well for a while against 3rd world countries, but against those we have less need for robots this advanced.


RE: Why...
By Fritzr on 2/18/2009 11:44:38 AM , Rating: 2
An actively replying IFF chip is a targeting device. Just set up an automated weapon with a directional chip detector. Turn it on and then keep an eye out for enemy sappers trying to kill your weapon. The enemy's IFF chips tend to ensure that targets are found and served :)


RE: Why...
By mindless1 on 2/18/2009 8:04:54 PM , Rating: 2
Except we were talking about chips that protect our own troops, and our robots not yet knowing if the chip is in one of ours or theirs because the chip is the identification device itself.


RE: Why...
By Fritzr on 2/19/2009 10:00:46 PM , Rating: 2
Put the chips in the field and the other side will use them as beacons. Just like any other IFF, you ping all devices and shoot the ones that reply "not your friend".

Cloning the other side's chips is just repeating the millennia old trick of dressing infiltrators in enemy uniforms before sending them across enemy lines.


RE: Why...
By SpaceOddity85 on 2/18/2009 2:51:13 AM , Rating: 2
Ever seen Screamers?


So...keep the human behind the trigger.
By Schrag4 on 2/17/2009 10:25:09 AM , Rating: 2
What's wrong with having humans still ultimately decide who lives and dies, but let the bots be in the line of fire? We don't have to give these bots the responsibility of deciding whether or not to pull the trigger.

Shoot, we could even develop a system where us humans basically tag 20 people out of a group of 1000 to kill and the bots, on command, kill all 20 nearly simultaneously. The bots wouldn't have to decide to pull the trigger on any of those people, but they'd still be able to get the job done in an extremely efficient manner. Why do they have to have judgement built in?

I suppose even the system I propose is a slippery slope...




RE: So...keep the human behind the trigger.
By Steve1981 on 2/17/2009 12:17:23 PM , Rating: 2
quote:
What's wrong with having humans still ultimately decide who lives and dies, but let the bots be in the line of fire?


Because in that situation you have to be able to communicate effectively with the robots. This communication can be jammed rendering your robots worthless, or worse, it can be hacked, and your robots can turn against you.

Against the advanced forces of the Taliban, it isn't a big deal. In a fight with someone a little more sophisticated, it can pose a problem.


RE: So...keep the human behind the trigger.
By Schrag4 on 2/17/2009 1:07:24 PM , Rating: 2
I agree, the main concern would be preventing the enemy from taking control of the robots. But I still think humans should be pulling the trigger here. Not only that, but the rewards of using these bots would likely far outweigh the risk of them being taken over. Once one is compromised, shut them all down until the proper modifications can be made. Then use them again until one gets compromised, shut them down, etc etc. You could use these with humans controlling them without being stupid and reckless about it.


RE: So...keep the human behind the trigger.
By mindless1 on 2/17/2009 10:04:09 PM , Rating: 2
"OK mr enemy, don't shoot, I'll just be collecting my robots and will meet you back here tomorrow after lunch".


RE: So...keep the human behind the trigger.
By Schrag4 on 2/18/2009 9:55:13 AM , Rating: 2
Obviously you would DESTROY the handful that you had in combat at the moment and STOP DEPLOYING them until you got the issue resolved. It's not that complicated. Collect them? Again, don't be stupid and reckless by deploying all at once, only a handful at a time. Sheesh...


By mindless1 on 2/18/2009 8:03:12 PM , Rating: 2
You can't just go and destroy enough robots to have been effective when they cost millions each, then be criticized by the tree-huggers for littering.

Plus, if you blow up a group of self-learning robots, who do you think the rest of the robots will see as the real enemy?


By Fritzr on 2/18/2009 11:48:49 AM , Rating: 2
That's the system they have today. The only thing you have added is the ability to select a target and delay execution.

You hear about the airborne drones today. There are also ground drones, small RC or cabled robots that can carry weapons.


no warbots
By MadMan007 on 2/17/2009 10:05:07 AM , Rating: 5
If a war isn't worth dying for, it's not worth fighting. It's one thing to use them for reconnaissance or as supplements alongside combat troops, but the idea of robots performing actual combat is disturbing. Without a human cost to war there is much less restraint on what leaders, good and bad, might choose.

Welcome to 1984: constant war with no casualties, so fewer objections, only in reality and without the entirely falsified media, which is impossible in today's world.




RE: no warbots
By MikeO on 2/18/2009 7:23:57 AM , Rating: 2
It's ironic that the most sensible thing said in these posts comes from someone named MadMan007 :)


RE: no warbots
By MadMan007 on 2/18/2009 11:25:18 AM , Rating: 2
Sometimes it's the crazy people who are the most sane! weee...


Good Freakin Luck
By Rugar on 2/17/2009 10:01:18 AM , Rating: 2
"He says the key to avoiding robotic rebellion is to include "learning" logic which teaches the robot the rights and wrongs of ethical warfare. This logic would be mixed with traditional rules based programming."

Right. "Ethical Warfare"? That's the best oxymoron I've ever heard. Just a few questions:

1) Who gets to define the "ethics" included in the programming?
2) If an AI decides it is a conscientious objector, is it taken out of service?
3) And of course, the eternal question in one of its many forms... If by killing a dozen random people you can be 100% sure of killing a terrorist that is prepared to kill thousands, which would an ethical AI choose?




RE: Good Freakin Luck
By Clienthes on 2/18/2009 5:27:35 AM , Rating: 2
quote:
Right. "Ethical Warfare"? That's the best oxymoron I've ever heard.


War is not always wrong or unethical. It's just always regrettable.

To answer your questions:
1. Luckily, not you.

2. Not what the article is concerned with, but still...Hopefully the designers would have the sense to teach it only sensible, practical ethics. Robots that can fantasize about utopia would be useless.

3. Really? People still bring this kind of thing up? What world do you live in? As long as bad people do bad things and we don't all get along, someone is going to have to make difficult decisions about who lives, who dies, and what is an acceptable loss. The kind of BS you posted really minimizes the trauma that THOSE people go through to keep the impact of military operations to a minimum. Do you want to make those decisions? No? Don't have the stomach for it? Put yourself in their shoes before you get self-righteous. Because someone HAS to do it. Imagine the world if they didn't.

If you aren't willing to kill, you'd better be willing to see everything you care about die.


RE: Good Freakin Luck
By Rugar on 2/18/2009 10:28:05 AM , Rating: 2
Hmmm... To respond to your answers:

1. You have no idea about how I would define ethics for an autonomous robot capable of lethal force, so your answer demonstrates my point. You assume that you and I would disagree on ethics because, more often than not, you would be right. The beauty of having humans at the controls is that every single one is an individual with their own background to help them determine right from wrong. While you may have the occasional atavistic individual making socially unacceptable choices, they are just individuals. Fielding a force of robots all programmed by one person or group of persons changes the "balance of power" in ethics by slanting decisions one way.

2. Practical ethics? Interesting. I'm fairly sure that ethics are based on absolutes. It is the human decision of when we must act for the "greater good", contravening our ethical training, that matters. In short, humans have the capacity to decide when they must do things they know are "wrong" in order to accomplish a greater good. I'm assuming by your reaction that you are in some way connected to the military. If so, how often have you sat through training sessions where you discuss the difficult decision of when to disobey orders? I've had to sit through QUITE a few of those. The reason we do it is that it is important troops understand that at some point, the cost in broken ethics is worse than the damage that may be caused by refusing to obey orders.

3. People bring it up quite often. How many times did you hear it during the course of the elections? I heard some version of this A LOT. Your tirade is interesting, mostly because while you seem to think we disagree, I think we mostly agree. PEOPLE have to make the decisions of when to contravene their ethics. People who are capable of understanding that their choices have consequences, and who have the moral strength to make those decisions knowing that they will suffer from the guilt for years afterward, even though they know they did the right thing.

Warfare is unethical in most of the major societal groups around the world. And yet, we recognize that at times it is necessary to go to war in order to defend some greater ideal. That doesn't change the fact that war is an evil which should be avoided at all costs.

I know, I know. Philosophy on a tech site. And long-winded philosophy at that. Sorry about that...


RE: Good Freakin Luck
By totenkopf on 2/18/2009 9:04:25 PM , Rating: 2
Good luck figuring out the ethics thing. Business ethics aside (which may be a better model for warfare ethics), philosophical ethics are definitely not absolute; though I would love to see the Categorical Imperative in C++. Humans have been trying to wrap their minds around ethics and morality since the beginning of time, there is no one/simple answer, and there is always an exception to every rule (or is there?!).

Frankly, if war is sadistic and horrible now, imagine if we take the last element of humanity out? It might be better in some cases, but in most cases I think it would get ugly.


I'm really not scared of robots...
By JonnyDough on 2/18/2009 3:52:05 AM , Rating: 2
I think the solution is pretty EZ.

1. Put bomb on robot.
2. Robot acts out of line, detonate.

I mean criminy. It's not like the robot can tell that there's a bomb with encryption stuck in its back.




By MadMan007 on 2/18/2009 11:24:16 AM , Rating: 2
Until the robot takes it off. Or stands really close to you then misbehaves...'Gonna blow me up NOW meatbag?!?'


By SiliconAddict on 2/20/2009 12:35:17 AM , Rating: 2
And if there is one robot in the world for every human all they need to do is build a couple more robots and have them all stand next to people and go boom....instant genocide with a few bots left to rebuild.


By Larrymon2000 on 2/23/2009 1:34:32 AM , Rating: 2
And that's why we don't have fellas like you developing war machines =) Because if a computer system is complicated enough to code its own subroutines and inject them while it's running, then I'm sure it's smart enough to find a way to get the bomb off its back ;p


By Amiga500 on 2/17/2009 9:27:18 AM , Rating: 2
What hope does a relatively limited robot have?

Sensors of sufficient fidelity, and the compute processing power to absorb that information, simply are not at a stage where a complete (enough) picture can be obtained or acted on in realistic time frames.




By AntiM on 2/17/2009 2:47:30 PM , Rating: 2
quote:
If humans often cannot discriminate between friend and foe, combatant and civilian...


Very true. In Desert Storm there were 44 dead and 57 wounded from friendly fire. Hopefully by the time humans are capable of such technology, we will be advanced and civilized enough so that war will be a thing of the past. Not likely though.


Get a grip
By Gormond on 2/17/2009 9:51:10 AM , Rating: 2
While the sensationalist blog title was enough to make me read this, I must say there was very little new content.

While the robot's programming team may include hundreds of programmers, there will be a system architect, and I'm sure there will be serious amounts of unit and system testing. Yes, they will know what each part of the programming does.

We need to look at the benefits of this, which include having our young soldiers not coming back with limbs missing.




RE: Get a grip
By Chocobollz on 2/18/2009 8:14:26 AM , Rating: 2
quote:
We need to look at the benefits of this which include having our young soldiers not coming back with limbs missing

LOL OK, now, what if your young soldier comes back safely and is smiling at you, and suddenly your war robot starts shooting at you and kills you all (including the young soldier)? Which one do you prefer? Only 1 man dead, or all men dead (including you)? ==;


By bfellow on 2/17/2009 10:08:39 AM , Rating: 1
then I think we are already well ahead in artificial intelligence. I for one bow to our Skynet overlords.




By nixoofta on 2/17/2009 12:57:08 PM , Rating: 2
Taking bfellows feelings into consideration,..nix nonchalantly wipes at his own lower lip and quietly gets bfellows attention,..."Pssst,...dude,....you got a li'l robo-poop there..."


ID:ing non-combatants
By JS on 2/17/2009 11:33:58 AM , Rating: 2
Leaving aside the question of whether the robots will start killing us all, I am curious as to how they will program the robots to distinguish between targets and non-targets. By non-targets I really mean civilians; I am sure they can make the friendly soldiers wear some kind of radio transmitter or whatever to ID them.

Will the AI know the difference between an enemy soldier and a kid with a stick in his hand? How about if the kid is holding a water pistol? How about the difference between a guy carrying a log on his shoulder and a soldier carrying an RPG?

The examples are obviously endless, and it is difficult enough for human soldiers to make these decisions in combat situations. My vote is for not allowing any AI-controlled fighting machines into areas with civilians until they pass a Turing test.




Thinking
By CalWorthing on 2/17/2009 12:01:23 PM , Rating: 2
More likely, as mentioned in the piece, the perceived exigency has created, or will create, oversight errors. Code injected by less-than-nice miscreants will hide and run. The results will depend on the capacity stored for projecting damage. Quick, clean kills and your fingernails stay clean.




They have a plan
By Donkeyshins on 2/17/2009 1:02:01 PM , Rating: 2
<cue BSG theme>




Bug or feature?
By poundsmack on 2/17/2009 2:20:20 PM , Rating: 2
This is actually an interesting discussion. AI is a lot farther along than most people think. Just because technologies like quantum computing, laser hard drives, nanotechnology, etc… are not commonplace in the public sector, that doesn't mean they aren't out there. Stuff like that is out there and most people don't know it (example of tech I bet no one else knew existed till I posted this: http://www.atomchip.com/_wsn/page4.html )

I personally don't want robots doing the fighting for us; if it's worth fighting for (diplomacy failed), then it's worth dying for. Now I also don't think that many things are worth dying for, and I would rather people get to a place where they learn an "ethical code of war" than teach one to a robot. The real task at hand should be teaching better people, not building "smarter" robots. Reason and logic in people should be improved/encouraged, not just in robots.

Either way, in the end it all comes down to EMP. If we don't need to communicate with the robots or give orders, and they are completely self-contained, then it isn't an issue (an unlikely scenario; when was the last time in war that "everything went smoothly" and there was no need to change tactics or come up with a different plan quickly?). BUT, since we will have to send signals and transmissions (Patch Tuesday anyone?) to the robots, that leaves them susceptible to electromagnetic pulse and/or communication jamming. Anything that isn't fully, and I do mean FULLY, shielded (outer shell and inner wiring/circuitry) would be susceptible.

So in the end, unless we can make robots sooooooo smart that we don't EVER need to give them orders, none of this matters anyway. Though the good news is, like NASA, these research projects from the government give us a lot of great stuff that trickles down to consumers. http://www.cnn.com/2007/LIVING/worklife/10/04/nasa... and this is by no means a complete list; there are thousands!

So continue your research and theories, but I hope to never see the kind of AI they want implemented in my life time.




By ZachDontScare on 2/17/2009 2:25:28 PM , Rating: 2
As un-PC as this sounds, having robots that can be dropped behind lines and unleash indiscriminate hell upon the enemy sounds like an excellent deterrent to other nations who are thinking about entering armed conflict with the US and allies.




"Wired for War" worth a read
By Spacecomber on 2/17/2009 2:28:46 PM , Rating: 2
If you are interested in this general topic of military robotics, Peter Singer's "Wired for War" is worth a read. I'm about half way through it. (http://www.amazon.com/Wired-War-Robotics-Revolutio... ) In the short run, the issue is how much autonomy we give our automated systems, as we tend to trust their intelligence more than our own judgement when seconds count, and the possibility of the machine being wrong can lead to dire consequences. For example, http://en.wikipedia.org/wiki/Iran_Air_Flight_655 .

I think that it is more of a slippery slope set of issues than a black and white picture of humanity versus the machines.

(I see that NPR still has up the podcast of the Fresh Air interview with Singer, here, http://www.npr.org/templates/story/story.php?story... .)




Modifying your own code.
By William Gaatjes on 2/17/2009 4:41:50 PM , Rating: 2
Do some googling about the basics. Nasty Windows worms rewrite their code all the time to avoid detection by virus scanners. But we still have to write the basic code that does the code modifying. At the moment we do have chips that can reconfigure themselves, meaning they can make alternate circuits. And writing code that can evolve (by making a copy, changing it, and after a warm reset booting from the new code) is also within our limits. Current AI is further along than you think. The trick is in the hardware-software combination. But still, writing code that is written to modify itself for a robot that can shoot is insane. Even with humans, soldiers are drilled for a reason. Now, an explorer, that is another question.
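
A toy Python version of that copy-change-and-reboot idea, purely illustrative and with invented file and parameter names:

    # Purely illustrative "copy, change, reboot" loop: the program writes a mutated
    # copy of its policy module to disk and reloads it.
    import importlib.util, os, random, tempfile

    POLICY_SRC = ("THRESHOLD = {t}\n"
                  "def decide(x):\n"
                  "    return 'act' if x > THRESHOLD else 'wait'\n")

    def write_policy(threshold):
        path = os.path.join(tempfile.gettempdir(), "policy_gen.py")
        with open(path, "w") as f:
            f.write(POLICY_SRC.format(t=threshold))
        return path

    def load_policy(path):
        spec = importlib.util.spec_from_file_location("policy_gen", path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        return module

    policy = load_policy(write_policy(0.9))
    print(policy.decide(0.7))                     # 'wait' under the original code

    mutated_path = write_policy(round(random.uniform(0.1, 0.5), 2))  # copy + change
    policy = load_policy(mutated_path)            # "warm reset" onto the new code
    print(policy.decide(0.7))                     # 'act' under the mutated copy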




Inevitable
By excelsium on 2/17/2009 8:07:54 PM , Rating: 2
With practically infinite computing power coming within 100 years, I think it's inevitable that we will be seeing non-biological 'life forms'.




Independent Monitoring agents
By Senju on 2/17/2009 11:38:18 PM , Rating: 2
What is needed is independent monitoring agents that follow the ethics rules and enforce them on the other program modules if the job tasks are not following their rule objectives. Look at it as your independent auditor. It cannot be involved with the real process. The agents only monitor and align the tasks with the program's ethics routine.
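
A rough sketch of that auditor pattern in Python, with invented rule names, just to make the separation of duties concrete:

    # Rough sketch of an independent monitoring agent ("auditor") pattern.
    class EthicsMonitor:
        # Not involved in planning or execution; it only reviews proposed tasks.
        def __init__(self, rules):
            self.rules = rules

        def review(self, task):
            violations = [name for name, rule in self.rules.items() if not rule(task)]
            return len(violations) == 0, violations

    class MissionModule:
        def __init__(self, monitor):
            self.monitor = monitor

        def execute(self, task):
            ok, violations = self.monitor.review(task)
            if not ok:
                return "blocked: " + ", ".join(violations)
            return "executing: " + task["action"]

    rules = {
        "target_identified":  lambda t: t.get("target_identified", False),
        "no_collateral_risk": lambda t: t.get("expected_collateral", 0) == 0,
    }
    module = MissionModule(EthicsMonitor(rules))
    print(module.execute({"action": "engage", "target_identified": False}))
    print(module.execute({"action": "engage", "target_identified": True,
                          "expected_collateral": 0}))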




I'll be back
By ggordonliddy on 2/18/2009 12:10:25 AM , Rating: 2
That is all.




Friendlys will have a chip
By jahwarrior on 2/19/2009 12:33:37 AM , Rating: 2
The way the robot could identify friend vs foe is through a quick scan of some type; friendlies in the area would have a chip embedded on their person and be identified, whereas the enemy would not and would be killed, see ya.




By noonie on 2/23/2009 3:32:17 PM , Rating: 2
This is a software engineering problem. There is plenty of critical code that runs avionics, cars, power plants and so on. This critical code needs to meet a standard way (we’re talking about a world of difference) beyond Windows or Linux or Word. If it doesn’t, then don’t make excuses and don’t deploy it.




Skynet can do it
By makots on 2/17/09, Rating: -1
RE: Skynet can do it
By mindless1 on 2/17/2009 10:18:10 PM , Rating: 1
Given current performance of high-end SMP workstations it may already be possible. You simply have to let all the processing be done by the CPUs, the video card receives a 2D image to display. Then the video card would only need be fast enough in memory and ramdac for the 2D resolution, perhaps a Geforce2 era card? Problem is, the CPU power to do it, and the programming, will cost more than the video card which has already evolved to meet the goal.


RE: Skynet can do it
By SiliconAddict on 2/20/2009 12:32:26 AM , Rating: 1
Yah know.....I wouldn't mind being jacked in to a Crysis Matrix.


Respect
By HostileEffect on 2/17/09, Rating: -1
"What would I do? I'd shut it down and give the money back to the shareholders." -- Michael Dell, after being asked what to do with Apple Computer in 1997

Related Articles
Dawn of the Drones
June 6, 2008, 6:15 PM
War Robots Still in Iraq
April 17, 2008, 10:20 AM
Can Robots Commit War Crimes?
February 29, 2008, 2:37 PM













botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki