


System is designed to be less intrusive than full autopilot

The Massachusetts Institute of Technology (MIT) has developed a system called the "intelligent co-pilot".  Rather than aiming for fully autonomous, artificial intelligence-driven driving, à la Google Inc.'s (GOOG) self-driving car project, the new MIT study focuses on a "semi-autonomous" system.

I. Hunting for Safety

According to Sterling Anderson, a PhD student in MIT’s Department of Mechanical Engineering, most current commercially implemented algorithms hunt for static clues in their environment, such as the curb.

This isn't how the human driver functions.  Comments Mr. Anderson, "The problem is, humans don’t think that way.  When you and I drive, [we don’t] choose just one path and obsessively follow it. Typically you and I see a lane or a parking lot, and we say, ‘Here is the field of safe travel, here’s the entire region of the roadway I can use, and I’m not going to worry about remaining on a specific line, as long as I’m safely on the roadway and I avoid collisions.’"

To mirror that human mind-set, the MIT team's algorithm uses so-called "homotopies" -- probable safe zones in the environment.  The environment is triangulated as the driver drives, to determine whether the driver is crossing the border from safety to danger.

When such an event is detected, the car's AI takes over and steers the car around the obstacle, back into a homotopic (safe) zone.
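The article does not publish the team's algorithm, but the core idea -- treat the drivable region as a geometric safe zone and intervene only when the driver's predicted path is about to leave it -- can be sketched in a few lines. Everything below (the polygon representation, the constant-velocity prediction, the function names) is an illustrative assumption, not MIT's actual implementation.

```python
import math

# Hypothetical sketch of the "field of safe travel" idea: the car's
# predicted position is checked against a polygonal safe zone, and the
# system intervenes only when the driver is about to cross its boundary.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon (list of (px, py))?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending to the right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def needs_intervention(position, heading, speed, safe_zone, horizon=1.0):
    """Intervene if the position predicted `horizon` seconds ahead
    (simple constant-velocity model) leaves the safe zone."""
    x, y = position
    px = x + speed * horizon * math.cos(heading)
    py = y + speed * horizon * math.sin(heading)
    return not point_in_polygon(px, py, safe_zone)

# A rectangular lane 4 m wide and 50 m long; the driver drifts toward its edge.
lane = [(0, 0), (50, 0), (50, 4), (0, 4)]
print(needs_intervention((10, 2), 0.0, 10.0, lane))  # heading straight: False
print(needs_intervention((10, 2), 0.5, 10.0, lane))  # veering off: True
```

Note that the computer stays entirely passive while `needs_intervention` returns `False`, which is exactly what distinguishes this from a fully autonomous planner that tracks one chosen line.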



The team has performed 1,200 trials in which drivers drive normally, then abruptly head on a collision course with a construction barrel.  In most trials the car was able to avoid the collision; the few collisions that did occur appear to have stemmed from camera failures.

The tests were performed on a course in Saline, Mich., a city in the state's southeast Washtenaw County.

Eaton Corp. intelligent truck technology manager Benjamin Saltsman praises the system for its minimalist approach.  He says that the system uses less computational power and fewer sensors than fully autonomous alternatives from Google and Ford Motor Co. (F).

Comments Mr. Saltsman, "The implications of [Anderson's] system is it makes it lighter in terms of sensors and computational requirements than what a fully autonomous vehicle would require.  This simplification makes it a lot less costly, and closer in terms of potential implementation."

II. Next -- Using a Smartphone

Mr. Anderson isn't completely satisfied with the system, however: he fears it could lure beginning drivers into relying on the collision avoidance as a crutch and attempting riskier maneuvers.  On the flip side of the coin, experienced drivers may be frustrated with the system for overriding dangerous maneuvers.

Still, he's convinced the technology may eventually be fine-tuned to be relatively pleasing for the masses and save lives.

Fresh off a presentation at the Intelligent Vehicles Symposium, hosted by the Polytechnic School of the University of Alcalá in Madrid, Spain, the team is working to scale down their invention to an even simpler system.

Mr. Anderson and Karl Iagnemma, a principal research scientist in MIT’s Robotic Mobility Group and the study's other author, will next look to use a dashboard-mounted smartphone (which offers a camera, accelerometer, and gyroscope) to perform identical collision detection.
 

[Image caption: The MIT researchers next want to make the system capable of running on a cell phone using only its minimal sensors.]

 
The ongoing research is funded by grants from the United States Army Research Office and the Defense Advanced Research Projects Agency.

Source: MIT




By Solandri on 7/17/2012 1:11:07 AM , Rating: 5
What if the computer incorrectly identifies a scenario as safe because the programmer hasn't thought of it (e.g. a piano being lifted on a crane falls from the sky), and overrides the driver's frantic efforts to steer out of the way, thus ensuring the piano hits the car?

Ostensibly the human is in control unless the computer detects a situation which is dangerous. But in reality the computer is always in control since it decides when to override the human, not the other way around. In the 1980s or 1990s, this led to a spate of runway overrun incidents by Airbus planes.

The planes were landing onto rain-slickened runways and refusing to engage the thrust reversers. Obviously a thrust reverser deploying during flight is a catastrophic event, so Airbus programmed the plane to be absolutely certain it was on the ground before deploying them. Unfortunately when landing on wet runways, sometimes the wheels hydroplaned and didn't start spinning. The computer interpreted this as an indication that the plane was still flying and thus refused to deploy reverse thrust. Wheel brakes don't work well when you're hydroplaning, so the planes went off the runway leading to many fatalities. IIRC the wheel gear weight sensor can now override the wheel's spin sensor - if the gear is compressed by the weight of the plane, the computer says it's on the ground even if the wheels aren't spinning.
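The interlock logic the commenter describes reduces to a one-line condition and its fix. The sketch below is an illustration of that reasoning only, not actual Airbus avionics code; the function names and sensor booleans are assumptions.

```python
# Illustrative sketch (not real avionics logic) of the ground-detection
# interlock described above: originally only wheel spin-up armed the
# thrust reversers, so a hydroplaning touchdown was misread as "still
# flying". Adding landing-gear compression (weight-on-wheels) as an
# alternative condition covers the case where the wheels never spin up.

def reversers_allowed_old(wheels_spinning: bool) -> bool:
    # Original rule: wheels must be spinning to declare "on ground".
    return wheels_spinning

def reversers_allowed_new(wheels_spinning: bool, gear_compressed: bool) -> bool:
    # Revised rule: either sensor is enough to declare "on ground".
    return wheels_spinning or gear_compressed

# Hydroplaning landing: wheels locked by the water film, but the gear
# is compressed under the aircraft's weight.
print(reversers_allowed_old(wheels_spinning=False))                        # False
print(reversers_allowed_new(wheels_spinning=False, gear_compressed=True))  # True
```

The fix is a logical OR of two independent ground indications, which is why the commenter frames it as "the weight sensor can now override the spin sensor."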

But you can see the problem here - a scenario not envisioned by the programmer leads the computer to misinterpret a situation and prevent the human from taking corrective action. I think I prefer Boeing's approach to this. On Boeing planes, the computer can override the human, but if the human really pushes the controls against the computer's wishes, it relinquishes control back to the human. Boeing can do this because their yoke and thrust levers provide force feedback (you have to exert more force to move the controls against the computer or aerodynamics). The Airbus controls provide no force feedback - they're basically just position sensors for the pilots to send input to the computer, not the other way around.

So the way I'd envision the piano scenario playing out is the driver sees the piano and swerves the car to avoid it. The computer interprets the swerve as a dangerous maneuver and causes the car to continue straight. The driver feels this as the steering wheel resisting his attempt to turn it. But if he continues and puts more muscle into it to force the steering wheel to turn, the computer interprets this as an override and gives control of the car back to the driver.
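The Boeing-style override the commenter proposes is essentially a control arbiter with a force threshold and a hold time. A minimal sketch of that idea, with entirely assumed thresholds and names (nothing here comes from Boeing or MIT):

```python
# Hypothetical arbiter: the computer resists a steering input it judges
# unsafe, but sustained driver torque above a threshold hands control
# back. All constants and names are illustrative assumptions.

OVERRIDE_TORQUE_NM = 12.0   # assumed torque needed to overpower the system
OVERRIDE_HOLD_S = 0.5       # assumed time the torque must be sustained

class SteeringArbiter:
    def __init__(self):
        self.fight_time = 0.0  # how long the driver has fought the computer

    def control(self, driver_torque, computer_vetoes, dt):
        """Return 'driver' or 'computer' for this control cycle."""
        if not computer_vetoes:
            self.fight_time = 0.0
            return "driver"
        # Computer wants to veto; the driver can still win by pushing
        # hard enough for long enough.
        if abs(driver_torque) >= OVERRIDE_TORQUE_NM:
            self.fight_time += dt
            if self.fight_time >= OVERRIDE_HOLD_S:
                return "driver"  # sustained force: relinquish control
        else:
            self.fight_time = 0.0
        return "computer"

arb = SteeringArbiter()
# Driver swerves hard for 0.6 s while the computer objects (50 Hz loop).
for _ in range(30):
    who = arb.control(driver_torque=15.0, computer_vetoes=True, dt=0.02)
print(who)  # 'driver' once the hold time has elapsed
```

The hold time is what separates a deliberate override from a momentary twitch, mirroring how a pilot must push the yoke firmly against the force feedback rather than merely bump it.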

(Then some patent troll will come along and patent this idea which is already in use on planes, because it's totally brand new and innovative when used on cars. *rolls eyes*)


















Copyright 2014 DailyTech LLC.