

37 comment(s) - last by rampholeon on Jan 4 at 2:13 PM


Soldier launching UAV  (Source: Sgt. 1st Class Michael Guillory, U.S. Army)
Robots with ethics could one day be used on the battlefield

The United States military continues to invest heavily in robotic technology, and newer generations of robotic soldiers will be programmed to understand battlefield ethics.

According to an article in the Army Times, the so-called 'ethical robots' would follow international laws.  Ronald Arkin, of the Mobile Robot Laboratory at the Georgia Institute of Technology, has written a book discussing the future of robotics.

In "Governing Lethal Behavior in Autonomous Robots," Arkin claims robots, if programmed correctly, have numerous advantages over human ground troops.  Robots are emotionless, expendable, and can be customized for specific missions.

It's possible the robots could be taught remorse, compassion, and guilt, but exactly which senses the robots would be programmed with is still unknown.  Furthermore, depending on the determined level of guilt and the mission being carried out, the firepower and effectiveness of the weapons used would change.

The robots could also be used to monitor soldiers to ensure international treaties are being followed by U.S. and coalition ground troops.  Although many soldiers don't want to be monitored in such an intrusive manner, several high-profile cases of abuse and murder have further blemished the military's image among locals in Iraq.  

If funding is properly allocated for the research, the technology could be available in 10 to 20 years.  As the U.S. continues to fight wars using enhanced technology, unmanned aerial vehicles (UAVs) and other unmanned resources have become popular alternatives to launching manned missions -- and their use is expected to increase further in the future.






Are They Hacker Proof?
By KakarotUSMC on 12/29/2009 9:16:31 PM , Rating: 1
So if $30 off-the-shelf software can be used to hack into the video feed of U.S. multi-million dollar UAVs, why should we be confident in a robotic soldier?

And how are we going to pay for it? Maintenance costs included, what is the bottom line TCO on these?

As a leader, I am not thrilled at the prospect of risking the lives of my Marines in combat on the promise that robotic soldiers can safely share the battlefield.




RE: Are They Hacker Proof?
By Noliving on 12/29/2009 11:27:30 PM , Rating: 3
Because the video feed from the drones was unencrypted, which means there wasn't really any hacking whatsoever; all you need is a receiver. It's the equivalent of over-the-air TV or over-the-air radio in either FM or AM.


RE: Are They Hacker Proof?
By mi1400 on 12/30/09, Rating: 0
RE: Are They Hacker Proof?
By mi1400 on 12/30/2009 4:39:43 AM , Rating: 2
And by the way, it was encrypted. The goofy technique used to achieve encryption was to add a glitch which they thought no one could discover.
"the bigger problem is that Pentagon officials have known about this flaw since the 1990s, but they didn't think insurgents would figure out how to exploit it"


RE: Are They Hacker Proof?
By seamonkey79 on 12/30/2009 9:02:53 AM , Rating: 5
Well, if they were smart, they wouldn't be fighting us, right?


RE: Are They Hacker Proof?
By PAPutzback on 12/30/09, Rating: 0
RE: Are They Hacker Proof?
By seamonkey79 on 12/30/2009 10:59:30 PM , Rating: 2
Silly self, I forgot the <sarcasm></sarcasm> tags that seem to be required.


RE: Are They Hacker Proof?
By knutjb on 12/30/2009 12:14:37 PM , Rating: 3
quote:
"the bigger problem is that Pentagon officials have known about this flaw since the 1990s, but they didn't think insurgents would figure out how to exploit it"

Put your statement into context. What form of video encryption in the 1990s was cost-effective and practical for field deployment? The 486 was a hot CPU at the time, and who here is running one today? Was it overlooked? Yeah; the threat was perceived as low.

Today there are more effective ways to encrypt because the technology is available to make it happen. The program managers need to rethink their priorities, as does the supervision overseeing them. Time to move some new people in. That said, no system is perfect and all encryption systems are prone to cracking. To you "armchair quarterbacks": the military's biggest problem is politicians who don't understand the cost of losing vs. the cost of providing the best equipment.

My proof: for the Kosovo war, Bill Clinton funded that conflict from the training budgets of the units that participated. Once the main battle was over, the additional materials consumed, i.e. bombs, parts, bullets, etc., were not replaced, and many who required training went non-current and were downgraded because there was no money to train.

Finally, politicians in power today confuse the military for a police force. The two are not the same and it is very dangerous to treat the military as such. Their missions are completely different as are their modes of operation. The incorrect application of the US Constitution as a world document in place of the Geneva Convention will cost lives, wait and see.

War has no prize for second place.


RE: Are They Hacker Proof?
By Lerianis on 12/31/2009 12:50:48 AM , Rating: 2
War has no prize, period.... the only people who push for war (save if they are attacked first) are usually armchair bullies who wish to force the rest of the world into doing what they wish them to do.


RE: Are They Hacker Proof?
By Vertigo101 on 12/30/2009 6:37:24 PM , Rating: 2
Rover, the video feed that they're 'hacking' (what a joke to even say that), is being transmitted in the clear, even as I type this. Its primary use is to give troops on the ground access to the UAV feed that is covering their op.

There is a digital variant of Rover, but the equipment isn't as widely deployed, and not everyone who has it knows how to make it work, so often analog Rover is still the fallback.

They also have a version that's more specific to the One System, a horrible POS, that is also slightly more secure.
http://defense-update.com/products/r/rover.htm


RE: Are They Hacker Proof?
By Shadowself on 12/30/2009 9:54:43 AM , Rating: 2
First, many of the fielded systems are as much as 15 years old. UAVs have been in use much longer than most people think. The older systems do, indeed, have unencrypted links.

Second, there has been a strong push for the past two to three years to update the systems to support encrypted links. It takes time to retrofit all the fielded systems. It's NOT just the airborne platforms (which number in the many hundreds) but also all the ground systems (which number well over 10,000). Encrypting the link from the airborne (transmission) side is useless if the ground teams don't have the ability to decrypt it.

Third, there are the coalition forces that have to receive the data too. So you have to get them to upgrade their systems. This takes additional time -- either that or the U.S. expends the personnel and money to have their own people and hardware/software included with every small unit of any coalition team.

Fourth, for most platforms the video stream that is the topic of this discussion is never on for the full mission. For most platforms the imagery stream used to fly the systems is not the same as the imagery stream for identifying targets. In many systems an imagery stream is not necessary at all in order to fly the system. Thus the imagery stream is not going to point out where our forces are at the landing site (and in many cases the takeoff/landing site is not even in the same country as the operational zone).

Finally, as you are probably aware, there are many different levels of encryption -- some open to the public (e.g., Blowfish, DES, 3DES, AES) and others kept close by such agencies as the DOD, CIA and NSA, which are not available to the public. Which of these codes do you want to use? Do you want to give the closely held encryption/decryption technologies to all coalition forces/governments? Or do you want to just give the "public" ones -- knowing that virtually all the "public" ones will eventually be hacked and useless, thus again exposing the troops just as if the links were never encrypted in the first place (or worse, since we may not be aware they've been hacked)?

When dealing with encryption/decryption on this scale it is never as easy as clicking the on screen button in WinZip or PowerArchiver. I've just scratched the surface of all the issues here.
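The key-distribution point above can be made concrete with a toy sketch. Everything below is invented for illustration -- the hash-counter XOR "cipher" is a throwaway construction, NOT a real or secure cipher, and none of the names reflect any actual military system. The idea it demonstrates is just this: encrypting at the transmitter only helps receivers that hold the same key; an eavesdropper (or an un-upgraded ground station) gets noise.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Illustrative only -- NOT secure.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice with the same key/nonce
    # round-trips the data.
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

frame = b"UAV video frame 0042"            # stand-in for one video frame
key, nonce = b"shared-link-key", b"frame-0042"

ciphertext = xor_cipher(key, nonce, frame)           # what goes over the air
decrypted = xor_cipher(key, nonce, ciphertext)       # ground station with the key
garbage = xor_cipher(b"wrong-key", nonce, ciphertext)  # receiver without the key

assert decrypted == frame   # only key holders recover the feed
assert garbage != frame     # everyone else sees noise
```

The retrofit problem is visible even in the toy: every receiver that should see the feed must be given `key` out of band, which is exactly the fielding and coalition-sharing burden described above.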


RE: Are They Hacker Proof?
By Moishe on 12/30/2009 9:08:02 AM , Rating: 4
I think the easier thing to do would be to simply not play by the rules.

A good example of this is the movie "The Patriot" starring Mel Gibson. In a battlefield full of rules, the little guy has only one choice if he wants to win... That choice is guerrilla warfare.

So while our robots fight with a smile, refuse to win, etc... the enemy will be pulling the same stunts as always. Hiding behind civilians, beheading, torture, etc.

Can't say as I blame them. The choice of hobbling ourselves in order to gain "respect" instead of victory is our choice, not theirs.


RE: Are They Hacker Proof?
By Lerianis on 12/31/2009 12:56:08 AM , Rating: 3
There comes a time where you have to realize that you can have a military 'victory' but still also lose the fight because you piss off the people in the country which you are fighting in and turn them against you, thereby turning them into the next generation of 'terrorists' (which is a made-up term for politically, historically or religiously motivated mass murderers).

That is the problem we are having with Afghanistan right now. Everyone wants to 'destroy the Taliban' without realizing that right now, we are just killing the family members of a lot of people in Afghanistan (and Iraq) who were, at most, NEUTRAL towards us in almost all cases, and turning them totally against us, thereby creating our own worst nightmare.

What the United States should have done was gone into Afghanistan and SMASHED the Taliban, then LEFT Afghanistan to its own people with a stern admonition to keep the religioloonies out of power or "We'll be back!", to quote Arnold, and NOT gone into Iraq AT ALL!

Or, at least, not without the full support and backing of every other Middle Eastern state, save Iran and EVEN IRAN if we could have swung it.


Oh #$*$!
By Frallan on 12/30/2009 7:19:45 AM , Rating: 2
As long as the robots are not true robots but demand a human decision on when to pull the trigger.

Asimov had the right idea regarding robots. The three robotic laws should be adhered to, or we will have trouble in the future with SW/HW glitches that activate the wrong response from robots at the wrong time in the wrong place.

In spite of the tone of my post, I wish you all a happy new year.

FL




RE: Oh #$*$!
By TSS on 12/30/2009 8:10:36 AM , Rating: 2
I believe I, Robot showed that even if you follow the 3 laws, it's still possible to enslave mankind.

This article is very interesting though, consider this: It's about a robot with Ethics, and robots are emotionless.

Aren't ethics based on emotions?

They certainly aren't based on logic. For instance, there are so many ways to brutally murder or maim somebody using a bullet in an opera of pain. Yet it is more ethical to shoot somebody than to use white phosphorus. Logic cannot explain that, since in both cases people end up dead (and efficiently so), and logic tells me that's what went wrong: that they died, not how they died. If they were supposed to die because they were your enemy in a war, then what's the problem?

Now, how can robots decide what's right and what's wrong, ethics, without the tools to make such decisions, emotions? We can't tell them what to do in every possible situation.

Here's what'll happen: logic dictates it's ethical to kill any non-ethical creatures, as they show no respect for life, thus are a threat to life, thus do not deserve life. If a human's standard of ethics is lower than the robot's standard of ethics, the human deserves to die as it's a threat to society and ethics in general. The proverbial "if you could kill Hitler before WWII, would you?"

But not to worry. With current robot technology as it is, it'll be decades before the first decent attempt at an AI will be made. Not that I don't believe in the advances of science, but people in the '50s also thought we'd have flying cars and space hotels and a lunar colony by now. I'm not getting my hopes up :P

Happy new year and a good decade to you too sir ^^


RE: Oh #$*$!
By Moishe on 12/30/2009 9:56:13 AM , Rating: 2
You're right... getting a robot that doesn't rely on a human controller is very far away.


RE: Oh #$*$!
By DEVGRU on 12/30/09, Rating: 0
RE: Oh #$*$!
By Don Tonino on 12/30/2009 12:38:45 PM , Rating: 2
quote:
Aren't ethics based on emotions?


I'd say that a successful ethics MUST NOT be based on emotions.


RE: Oh #$*$!
By Shadowself on 12/30/2009 1:12:43 PM , Rating: 1
I, Robot, the movie, was virtually nothing like the collection of short stories by Asimov. The stories grapple with the implications of each law, including one about a "robot" that gains the ability to be ethical and emotional and kills itself because of internal conflicts among the three laws.

To eliminate the problem posed in the recent movie of the same name, it has long been known that there needs to be a "Zeroth Law": "A robot shall not, through its action or inaction, interfere with a human being's control over that same human being's actions with regard to that same human being."

Sometimes it has been called the "self-determination" law, i.e., a robot cannot interfere with a person's right of self-determination. It can interfere to save the life/health/etc. when one person tries to harm another, but cannot interfere when a person tries to harm himself or do anything else the robot might consider adverse to that human.

The original laws are then modified to say,

"A robot may not injure a human being or, through inaction, allow a human being to come to harm, except where such action or inaction would conflict with the Zeroth Law.
"A robot must obey any orders given to it by human beings, except where such orders would conflict with the Zeroth or First Law.
"A robot must protect its own existence as long as such protection does not conflict with the Zeroth, First or Second Law."

A clear example: as the original three laws stand, a robot must do everything in its power to keep a person from committing suicide. The addition of the Zeroth Law would allow a person to commit suicide with the robot doing nothing to stop it. It's simple self-determination.

Personally, I strongly support implementing all four laws in any autonomous robotic system. I see no reason to skip the four laws and implement an "ethics engine" of some sort instead. Sticking with the four laws would obviate any possibility of a robot killing because it "thought" it had a higher level of ethics than you did.
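The strict priority ordering in this comment (Zeroth before First before Second before Third) can be sketched as an ordered rule list. This is a hypothetical illustration of how such a precedence check might be structured, not anything from Asimov or Arkin; the `Action` attributes are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Invented predicates about a proposed action, for illustration only.
    overrides_self_determination: bool = False  # Zeroth Law concern
    harms_human: bool = False                   # First Law concern
    disobeys_order: bool = False                # Second Law concern
    endangers_robot: bool = False               # Third Law concern

# Laws in priority order, per the commenter's four-law formulation:
# a higher-priority law always takes precedence over those below it.
LAWS = [
    ("zeroth", lambda a: a.overrides_self_determination),
    ("first",  lambda a: a.harms_human),
    ("second", lambda a: a.disobeys_order),
    ("third",  lambda a: a.endangers_robot),
]

def first_violated_law(action: Action):
    """Return the name of the highest-priority law the action violates, or None."""
    for name, violates in LAWS:
        if violates(action):
            return name
    return None

# The suicide example above: restraining a person from self-harm trips the
# Zeroth Law even though the First Law alone would demand intervention.
restrain = Action(overrides_self_determination=True)
assert first_violated_law(restrain) == "zeroth"
assert first_violated_law(Action()) is None
```

The ordered-list structure is the whole point: the robot never weighs laws against each other, it simply stops at the first one violated, which is what rules out a robot substituting its own "higher" ethics.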


RE: Oh #$*$!
By Fritzr on 12/30/2009 4:06:12 PM , Rating: 2
Asimov did add a Zeroth Law in his later books, where he merged the Foundation universe with his Robots universe. In the trilogy telling the story of the creation of the Foundations, robots ARE allowed to kill individuals in order to prevent harm to humanity as a whole.

Isaac Asimov's Zeroth Law of Robotics: A robot may not harm humanity, or through inaction allow humanity to come to harm.

In a wartime situation this law would be extremely dangerous for the command structure, as the robot could decide that a successful mission would cause greater harm to humanity and that assassinating its own chain of command would be the greater good. This theme is what the movie I, Robot explored by having the robots decide that preventing humans from harming themselves or others justified occasional harm to a few.
quote:
The thought came to us from Caiaphas, the High Priest mentioned in the Gospel of John. In John 11:49-50 the Apostle John wrote, "And one of them, named Caiaphas, being the high priest that same year, said unto them, Ye know nothing at all, Nor consider that it is expedient for us, that one man should die for the people, and that the whole nation perish not."
--http://wiki.answers.com/Q/What_philosopher_said_th...


RE: Oh #$*$!
By Lerianis on 12/31/2009 1:10:56 AM , Rating: 2
Fact is that most times, the robot would be right.... VIKI in the film I, Robot was actually quite SANE when she said that 'the ends justify the means'..... that's the problem.

Some very sane people (hyper-sane as some call them) come to the conclusion that they have the right to dictate other people's lives...... which is a problem that we have with the religioloonies using 'god' to justify that and the socialists using 'the good of mankind' to justify that.


RE: Oh #$*$!
By Lerianis on 12/31/2009 1:00:18 AM , Rating: 2
Actually, ethics is NOT based on emotions.

Ethics is based on simple rules that humans make complicated with their emotions.

1. Do not kill unless it is your last choice between you and death from another person killing you or someone else directly.
2. Directly means exactly that: pointing a gun at you and getting ready to pull the trigger or waving a knife at you/trying to stab you, or something equivalent in immediacy to that.

If everyone would adhere to those two rules, and stop bringing their 'god' myth into this and realize that we SHOULD NOT KILL OTHER HUMANS PERIOD (not even by 'executions', which only demean the society who does them).... things would be a lot better.


RE: Oh #$*$!
By nafhan on 12/30/2009 10:03:20 AM , Rating: 3
I'm of the opinion that the "Three Laws of Robotics" are a little too vague and simplistic to be used for anything more than a programming guideline. Go ahead and define a human, how to identify one, and - most importantly - what "injuring" or harming a human (especially through inaction) means.
Anyway, the three laws will never be implemented, even if they could be. There will be at least one country that won't follow the laws with their robots, and once someone ignores them, everyone who doesn't want to be at a huge disadvantage will too. The best we can hope for is robots fighting robots! :)


RE: Oh #$*$!
By Fritzr on 12/30/2009 4:11:26 PM , Rating: 2
Most of Asimov's Robot stories dealt with the gray areas where the robot was damned if it followed the law and damned if it failed to follow the law.

One of the detective stories featured a catatonic robot...its arm was the murder weapon. For a basic grounding in what can go wrong, check out Asimov's Robot books. If your library doesn't have them, they can get them on interlibrary loan :)


RE: Oh #$*$!
By Lerianis on 12/31/2009 1:02:36 AM , Rating: 2
They are good STORIES, but the fact is that in a case where the robot was 'damned if they do, damned if they don't'.... the robot could be programmed to just simply SHUT DOWN for good or do nothing.


Sounds like a great idea to me
By Rakanishu on 12/29/2009 9:22:47 PM , Rating: 5
Honestly, what could go wrong?

http://www.youtube.com/watch?v=mrXfh4hENKs




They already have them!
By jonmcc33 on 12/30/2009 12:49:43 AM , Rating: 2
They are the ones that already serve and don't know the difference between defending your country and being a drone for your country's government.




RE: They already have them!
By WinstonSmith on 12/30/2009 10:46:09 AM , Rating: 3
Agreed! This technology when perfected would save American lives on the battlefield but when the other side starts using robots, too, the whole war thing is exposed as rather ridiculous with robot planes and soldiers fighting each other to destruction. Might as well play some first person combat simulation between opposing sides. But that wouldn't fuel our Military Industrial Congressional Complex (which is what Eisenhower originally wanted to call it).


By atlmann10 on 12/30/2009 1:45:31 AM , Rating: 2
This is where we're headed for sure. One thing I don't think the general public, at least, will understand about this is the facts. They will think of Terminators or I, Robots flipping out. While this could, I guess, come to fruition, these specific machines will be programmed with direct responses to any situation. However, as the Marine points out, if an air surveillance/reconnaissance vehicle can be hacked with 30-dollar software, where would that leave something like this? Having something like this active in the field for whatever purpose, with weaponry of any kind attached, would be unnerving at the least. So they have to make something that can make its own decisions, at least to some degree, and that's where it becomes frightening (at least in the Terminator/I, Robot sense of the issue).




By rampholeon on 1/4/2010 1:15:50 PM , Rating: 2
Well, the first thing I did upon finding this news, was to search the text of comments for "terminator". Ethical autonomous killing machines, shudder. Better find an ecological niche for them quickly. Maybe they could police predation of penguins by polar bears in the antarctic ?


By Lerianis on 12/31/2009 12:48:59 AM , Rating: 4
You will see an explosion in the number of warmongering people in this country, because they will no longer have to worry about their 'loved ones' being sent into harm's way, which is what gave them pause before starting a 'war' with another country.

I hate to put it that way, but that is honestly the future that I see if robotic soldiers ever do come into being.




By rampholeon on 1/4/2010 1:45:41 PM , Rating: 2
Interestingly enough, the French form of cybernetics, that is cybernétique, puns as s'y berne éthique, which means "ethics fools itself". After 9/11, out of denial, first we had imaginary WMDs - the weapons the kamikazes should have used to create the destruction they achieved - and now we will have... well, I guess what I mean to say is that I find most fearsome the idea that some people may now design and program "ethical warrior robots" without doubting for a split second that whatever they achieve, it will be ethically superior to the kamikazes... to which they are a response.




By rampholeon on 1/4/2010 2:13:00 PM , Rating: 2
In addition, I find the heading of the topic as "science" relevant to the matter of ethics. Did not Hiroshima and Nagasaki open the road to a long-term confusion of war ethics and experimental-science ethics? To the benefit of the creativity of the military-industrial complex, making US military innovation shine as science demos?


By ryan023 on 12/29/2009 10:47:20 PM , Rating: 2
"Hard to put down and necessary to know -- Arkin's book provides a comprehensive intellectual history of robots and a thorough compilation of robotic organizational paradigms from reflexes through social interaction." -- Chris Brown, Professor of Computer Science, University of Rochester

This introduction to the principles, design, and practice of intelligent behavior-based autonomous robotic systems is the first true survey of this robotics field. The author presents the tools and techniques central to the development of this class of systems in a clear and thorough manner. Following a discussion of the relevant biological and psychological models of behavior, he covers the use of knowledge and learning in autonomous robots, behavior-based and hybrid robot architectures, modular perception, robot colonies, and future trends in robot intelligence.

The text throughout refers to actual implemented robots and includes many pictures and descriptions of hardware, making it clear that these are not abstract simulations, but real machines capable of perception, cognition, and action.

http://rapidshare.com/files/197529653/Ronald_C._Ar...




lol
By sirius4k on 12/30/2009 9:26:43 AM , Rating: 2
Wow... did you see how they put 'ethics' and 'robots' in one sentence? =D




Ethics, Smethics
By ZachDontScare on 12/30/2009 3:10:14 PM , Rating: 2
Giving a killer robot ethics is a good way to get yourself kicked out of the mad-scientist club.




good yuo
By alqaly on 12/30/09, Rating: -1