MIT Creates Car That Avoids Crashes, But Doesn't Otherwise Interfere
July 16, 2012 4:14 PM
System is designed to be less intrusive than a full autopilot
Massachusetts Institute of Technology (MIT) has developed a system called "intelligent co-pilot". Rather than aiming for fully autonomous, artificial intelligence-driven driving, à la Google Inc.'s self-driving car project, the new MIT study focuses on a "semi-autonomous" system.
I. Hunting for Safety
According to Sterling Anderson, a PhD student in MIT's Department of Mechanical Engineering, most current commercially implemented algorithms hunt for static cues in their environment, such as the curb.
That isn't how a human driver functions.
Says Mr. Anderson, "The problem is, humans don't think that way. When you and I drive, [we don't] choose just one path and obsessively follow it. Typically you and I see a lane or a parking lot, and we say, 'Here is the field of safe travel, here's the entire region of the roadway I can use, and I'm not going to worry about remaining on a specific line, as long as I'm safely on the roadway and I avoid collisions.'"
To mirror that human mindset, the MIT team's algorithm uses so-called "homotopies" -- probable safe zones in the environment. The environment is triangulated as the driver drives, to determine whether the driver is crossing the border from safety into danger.
When such an event is detected, the car's AI takes over and steers the car around the obstacle, back into a homotopic (safe) zone.
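The article doesn't publish MIT's algorithm, but the core check it describes -- is the car's position still inside a triangulated safe region? -- can be sketched in a few lines. Everything below (the function names, the simple two-triangle safe zone) is an illustrative assumption, not the team's actual code:

```python
# Hypothetical sketch of a triangulated safe-zone check; the real
# system would also reason about the vehicle's dynamics, not just
# its position.

def sign(p, a, b):
    """Signed-area test: which side of edge a->b the point p falls on."""
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def point_in_triangle(p, a, b, c):
    """True if point p lies inside (or on the edge of) triangle abc."""
    d1, d2, d3 = sign(p, a, b), sign(p, b, c), sign(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def in_safe_zone(position, triangles):
    """The safe region is a set of triangles; the car is safe if its
    position falls inside any one of them. An obstacle is modeled by
    simply omitting the triangles it occupies."""
    return any(point_in_triangle(position, *t) for t in triangles)

# A 10x10 clear roadway split into two triangles.
safe = [((0, 0), (10, 0), (10, 10)), ((0, 0), (10, 10), (0, 10))]
print(in_safe_zone((5, 5), safe))   # inside the safe region -> True
print(in_safe_zone((12, 5), safe))  # outside the safe region -> False
```

When the check flips from True to False, the co-pilot described above would intervene and steer back into a safe triangle.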
The team has performed some 1,200 trials in which a driver drives normally, then abruptly heads on a collision course with a construction barrel. In most trials the car was able to avoid the collision; the few collisions that did occur appear to have stemmed from camera failures.
The tests were performed on a course in Saline, Mich., a city in the state's southeast Washtenaw County.
Eaton Corp. intelligent truck technology manager Benjamin Saltsman praises the system for its minimalist approach. He says that the system uses less computational power and fewer sensors than fully autonomous alternatives like Google's self-driving cars.
Comments Mr. Saltsman, "The implications of [Anderson's] system is it makes it lighter in terms of sensors and computational requirements than what a fully autonomous vehicle would require. This simplification makes it a lot less costly, and closer in terms of potential implementation."
II. Next -- Using a Smartphone
Mr. Anderson isn't completely satisfied with the system, however, as he fears it could lure beginning drivers into relying on the collision avoidance as a crutch and performing riskier maneuvers. On the flip side of the coin, experienced drivers may be frustrated with the system for overriding dangerous maneuvers.
Still, he's convinced the technology can eventually be fine-tuned to please most drivers and save lives.
Fresh off a presentation at the Intelligent Vehicles Symposium in Spain, hosted by the Polytechnic School of the University of Alcalá near Madrid, the team is working to scale down their invention to an even simpler system.
Mr. Anderson and Karl Iagnemma, a principal research scientist in MIT's Robotic Mobility Group and the other author of the work, will next look to use a dashboard-mounted smartphone (which offers a camera, accelerometer, and gyroscope) to perform the same collision detection, making the system capable of running on a cell phone using only its minimal sensors.
The ongoing research is funded by grants from the United States Army Research Office and the Defense Advanced Research Projects Agency.
I can already think of a potential problem
7/17/2012 1:11:07 AM
What if the computer incorrectly identifies a scenario as safe because the programmer hasn't thought of it (e.g. a piano being lifted on a crane falls from the sky), and overrides the driver's frantic efforts to steer out of the way, thus ensuring the piano hits the car?
Ostensibly the human is in control unless the computer detects a situation which is dangerous. But in reality the computer is always in control since it decides when to override the human, not the other way around. In the 1980s or 1990s, this led to a spate of runway overrun incidents by Airbus planes.
The planes were landing on rain-slicked runways and refusing to engage the thrust reversers. Obviously a thrust reverser deploying in flight is a catastrophic event, so Airbus programmed the plane to be absolutely certain it was on the ground before deploying them. Unfortunately, when landing on wet runways, sometimes the wheels hydroplaned and didn't start spinning. The computer interpreted this as an indication that the plane was still flying and thus refused to deploy reverse thrust. Wheel brakes don't work well when you're hydroplaning, so the planes went off the runway, leading to many fatalities. IIRC the landing gear weight sensor can now override the wheel's spin sensor - if the gear is compressed by the weight of the plane, the computer says it's on the ground even if the wheels aren't spinning.
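The sensor-logic change described above boils down to a one-line predicate. This is a toy illustration of the commenter's point, not real avionics code; the function names are made up:

```python
# Toy model of the "am I on the ground?" decision from the comment
# above. Not real avionics logic - purely illustrative.

def on_ground_old(wheels_spinning: bool, gear_compressed: bool) -> bool:
    """Original logic: only wheel spin-up counts, which fails when
    the wheels hydroplane on a wet runway."""
    return wheels_spinning

def on_ground_new(wheels_spinning: bool, gear_compressed: bool) -> bool:
    """Revised logic: the weight-on-wheels sensor can override the
    wheel-spin sensor, so a hydroplaning touchdown still counts."""
    return wheels_spinning or gear_compressed

# Hydroplaning touchdown: wheels not spinning, gear compressed.
print(on_ground_old(False, True))  # False -> reversers stay locked out
print(on_ground_new(False, True))  # True  -> reversers may deploy
```

The failure mode was not bad code but a missing disjunct: a real-world state (on the ground, wheels not spinning) that the original predicate couldn't represent.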
But you can see the problem here - a scenario not envisioned by the programmer leads the computer to misinterpret a situation and prevent the human from taking corrective action. I think I prefer Boeing's approach to this. On Boeing planes, the computer can override the human, but if the human
pushes the controls against the computer's wishes, it relinquishes control back to the human. Boeing can do this because their yoke and thrust levers provide force feedback (you have to exert more force to move the controls against the computer or aerodynamics). The Airbus controls provide no force feedback - they're basically just position sensors for the pilots to send input to the computer, not the other way around.
So the way I'd envision the piano scenario playing out is the driver sees the piano and swerves the car to avoid it. The computer interprets the swerve as a dangerous maneuver and causes the car to continue straight. The driver feels this as the steering wheel resisting his attempt to turn it. But if he continues and puts more muscle into it to force the steering wheel to turn, the computer interprets this as an override and gives control of the car back to the driver.
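The force-through override the commenter envisions is a simple arbitration rule: the computer holds its path until the driver's input exceeds some effort threshold. A minimal sketch, assuming a made-up torque threshold and function name:

```python
# Hypothetical sketch of force-through override arbitration as
# described above. The threshold value and names are illustrative
# assumptions, not any manufacturer's actual parameters.

OVERRIDE_TORQUE_NM = 15.0  # assumed torque needed to overpower the system

def steering_command(driver_torque: float, computer_blocks: bool) -> str:
    """Decide whose steering input wins this control cycle."""
    if not computer_blocks:
        return "driver"    # normal driving: human stays in control
    if abs(driver_torque) >= OVERRIDE_TORQUE_NM:
        return "driver"    # sustained force: relinquish to the human
    return "computer"      # computer holds its "safe" path

print(steering_command(5.0, computer_blocks=True))   # computer resists
print(steering_command(20.0, computer_blocks=True))  # driver forces through
```

The design choice here is that disagreement itself is the override signal: the human never needs a separate disengage button, just more muscle on the wheel.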
(Then some patent troll will come along and patent this idea which is already in use on planes, because it's totally brand new and innovative when used on cars. *rolls eyes*)