
[Image Source: TriStar Pictures]
Humanitarian group predicts war crimes and worse if robot AIs are trained to target and kill humans

Thus far, no nation has produced a fully autonomous robotic soldier.

I. Human Rights Watch Warns of Robotic War Crimes

However, many observers fear we are creeping toward an era in which automated killing machines are a staple of the battlefield.  The U.S. and other nations have been actively developing land, air, and sea unmanned vehicles.  Most of these machines are imbued with some degree of artificial intelligence and operate in a semi-autonomous fashion. However, they currently have a human operator in the loop, (mostly) in control.

But experts fear that within 20 to 30 years artificial intelligence and military automation will have advanced to the point where nations consider deploying fully automated war robots to kill their enemies.

International humanitarian group and war-crimes watchdog Human Rights Watch has published a 50-page report entitled "Losing Humanity: The Case Against Killer Robots", which calls on world governments to institute a global ban on autonomous killing robots, similar to current prohibitions on the use of chemical warfare agents.

Current generation war robots, like the MAARS robot, have a human operator in the loop.
[Image Source: Wired]

Comments Steve Goose, Arms Division director at Human Rights Watch, "Giving machines the power to decide who lives and dies on the battlefield would take technology too far.  Human control of robotic warfare is essential to minimizing civilian deaths and injuries.  It is essential to stop the development of killer robots before they show up in national arsenals.  As countries become more invested in this technology, it will become harder to persuade them to give it up."

II. Ban the 'Bots

The proposal, co-endorsed by the Harvard Law School International Human Rights Clinic, also calls for a prohibition on the development, production, and testing of fully autonomous war robots.

The groups address the counter-argument -- that robotic warfare saves soldiers' lives -- by arguing that it makes war too convenient.  They argue that an "autocrat" could turn cold, compassionless robots on his own civilian population -- something it would be much harder to convince human soldiers to do.

Countries could also claim their cyber-soldiers "malfunctioned" to try to get themselves off the hook for war crimes against other nations' civilians.

And of course science fiction fans will recognize the final concern -- that there could be legitimate bugs in the AI which cause the robots to fail to calculate a proportional response to violence, fail to distinguish between civilian and soldier, or -- worst of all -- "go Terminator" and turn on their fleshy masters.

Comments Mr. Goose, "Action is needed now, before killer robots cross the line from science fiction to feasibility."

Sources: Human Rights Watch [1], [2]

Comments

No fully cognitive systems at all
By Shadowself on 11/20/2012 9:17:45 PM , Rating: 2
"Most of these machines are imbued with some degree of artificial intelligence and operate in a semi-autonomous fashion. However, they currently have a human operator in the loop, (mostly) in control."
As someone who has recently worked on "cognitive agents" for several of the DoD's unmanned systems, I can say unequivocally that these statements are hogwash. Pure and simple.

The level of autonomy of the most sophisticated DoD systems? They route themselves from point A to point B and can autonomously choose their communications channels based upon the operating environment. Or other equally mundane actions. None of them has any -- and I do mean *any* -- level of control over any operational equipment including, and especially, weaponry.

There is no human "mostly" in control. There is a human in the loop of all the more sophisticated systems. Most of them have a person at the controls of every function above the most rudimentary (thermal controls so the system does not overheat, power controls so batteries don't get drained by non-essential equipment, etc.). Even for things like the autonomous routing systems there is a way to take absolute control away from the unmanned vehicle almost instantly (if you're doing this over a satellite link it can take a quarter of a second or more to take control of these simple operations).
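A quick back-of-the-envelope check supports that quarter-second figure: a command relayed through a geostationary satellite must travel up to the satellite (about 35,786 km above the equator) and back down to the vehicle, so even at the speed of light there is an unavoidable floor on command latency. The constants below are public physical values; the calculation is illustrative, not a description of any specific DoD link.

```python
# Rough check of the quarter-second figure: a command over a geostationary
# satellite link travels ground -> satellite -> ground before the vehicle
# hears it. Ignores ground-station processing and slant-range geometry.
C = 299_792.458          # speed of light, km/s
GEO_ALTITUDE = 35_786.0  # geostationary altitude above the equator, km

one_way = 2 * GEO_ALTITUDE / C  # up to the satellite and back down, seconds
print(f"minimum one-way command latency: {one_way:.3f} s")
```

The real figure is slightly higher, since the vehicle is rarely directly beneath the satellite and each hop adds processing delay.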

RE: No fully cognitive systems at all
By tayb on 11/20/2012 10:36:09 PM , Rating: 2
There are sentry guns already in existence that can detect an object and shoot it down with no human interaction. Right now a human has to give the command but that isn't a required barrier, it's there for safety. There was an article on this site a few days ago talking about Iron Dome. No human intervention there. The machine sees a rocket, determines the trajectory, and either lets it fall or attempts to shoot it down. It's not I, Robot or Terminator advanced but it's definitely a precursor.
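The purely reactive behavior described above -- see an object, project its trajectory, decide whether to fire -- can be sketched in a few lines. Everything here (the function names, the flat-earth no-drag ballistics, the zone radius) is an illustrative assumption, not a description of Iron Dome or any real system:

```python
# Minimal sketch of a reactive intercept decision: project the threat's
# ballistic trajectory to ground level, fire only if the predicted impact
# point lands inside a protected zone. Flat earth, no drag -- a toy model.

def predicted_impact_point(position, velocity):
    """Project (x, y, z) position with (vx, vy, vz) velocity to z = 0."""
    x, y, z = position
    vx, vy, vz = velocity
    g = 9.81
    # Solve z + vz*t - 0.5*g*t^2 = 0 for the positive root t.
    t = (vz + (vz ** 2 + 2 * g * z) ** 0.5) / g
    return (x + vx * t, y + vy * t)

def should_intercept(position, velocity, protected_zones, radius=500.0):
    """True only if the impact point falls within `radius` of some zone."""
    ix, iy = predicted_impact_point(position, velocity)
    return any((ix - zx) ** 2 + (iy - zy) ** 2 <= radius ** 2
               for zx, zy in protected_zones)
```

Note there is no cognition anywhere in this loop: the system never chooses what to attend to or how far to go, it only evaluates a fixed geometric rule -- which is exactly the distinction drawn in the reply below.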

When people talk about autonomous weapons everyone always thinks of AI or super advanced robots. They don't have to be super advanced. The technology already exists and is deployed. You could even make the case that land mines lack human intervention and therefore would fall under this umbrella.

The problem here is that true AI machines are inevitable. An AAV would be able to easily outmaneuver even the most skilled pilot simply due to the physical limitations of the human body. Any country that isn't deploying AAVs with advanced intelligence will essentially cede the skies to foreign powers. I'd say we are a few decades away from this reality but it's probably not as far out as people would like to believe.

By Shadowself on 11/21/2012 1:03:20 PM , Rating: 2
By your description all of the defensive systems in the opening sequences of the movie Raiders of the Lost Ark fit within the automated weapon category. If you can make the case for land mines why not those systems?

Simple purely reactive systems (even the Iron Dome system or Phalanx system) are not addressing the primary issue. The issue is cognition and what level of cognition "crosses the line"?

Where is the line between a single-purpose system with zero "choice" and one that can actively sense its environment, choose which inputs to ignore and which to evaluate, select a response from those evaluations, and finally decide the allowable limits of that response? It's the last two of those cognition steps that cause heartburn in most people in the field.



Copyright 2016 DailyTech LLC.