
It may be deployed in future Tesla vehicles

Tesla CEO Elon Musk is interested in self-driving technology for his future fleets -- and he's going to Google for advice.

Musk has been talking to Google about its work with self-driving cars and how to implement such a system for future Tesla vehicles. However, Musk doesn't want to call it "self-driving," but rather "autopilot" technology. 

“I like the word autopilot more than I like the word self-driving,” Musk said. “Self-driving sounds like it’s going to do something you don’t want it to do. Autopilot is a good thing to have in planes, and we should have it in cars.”

The idea behind such technology is to make driving not only more convenient but also safer: a car equipped with a self-driving system can, for example, react to a developing hazard and prevent a crash.

Google is a natural place to go for insight, as it has been running self-driving projects for the last couple of years. Its test fleet consists of Toyota Priuses, Audi TTs, and Lexus RX450hs equipped with the self-driving technology.


Google's self-driving cars use LIDAR, a rotating roof-mounted sensor that scans more than 200 feet in all directions to build a map of the car's surroundings; a position estimator that determines the car's location on that map; four radar sensors that track the positions of distant objects; and a video camera that detects traffic lights as well as moving objects like pedestrians.
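
To give a rough sense of how such a rotating sensor yields a map, the sketch below converts one 360-degree sweep of (bearing, range) LIDAR returns into Cartesian points around the car. It is a minimal illustration only; the data layout and treating 200 feet as a hard cutoff are assumptions, not details of Google's actual system.

```python
import math

MAX_RANGE_FT = 200.0  # article: the roof sensor scans more than 200 feet out

def sweep_to_points(sweep):
    """Convert one rotation of (bearing_deg, range_ft) LIDAR returns
    into (x, y) points in the car's frame. Hypothetical data layout."""
    points = []
    for bearing_deg, range_ft in sweep:
        if range_ft > MAX_RANGE_FT:
            continue  # discard returns beyond the sensor's useful range
        theta = math.radians(bearing_deg)
        points.append((range_ft * math.cos(theta),
                       range_ft * math.sin(theta)))
    return points

# e.g. an object ~50 ft away, 90 degrees to the left of the car:
print(sweep_to_points([(90.0, 50.0)]))  # [(~0.0, 50.0)]
```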

“The problem with Google’s current approach is that the sensor system is too expensive,” Musk said. “It’s better to have an optical system, basically cameras with software that is able to figure out what’s going on just by looking at things.” 
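
In rough terms, the camera-plus-software approach Musk describes starts with detecting objects in each video frame. Below is a minimal sketch using OpenCV's stock HOG pedestrian detector; it illustrates the idea only, is not a description of anything Tesla or Google has built, and the image file name is hypothetical.

```python
import cv2  # pip install opencv-python

# Off-the-shelf HOG + linear-SVM pedestrian detector that ships with OpenCV.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("dashcam_frame.jpg")  # hypothetical dashcam still

# Returns candidate bounding boxes plus a confidence weight for each.
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))

for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
print(f"detected {len(boxes)} pedestrian candidate(s)")
```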

At present, Google's self-driving cars are allowed on Nevada, California, and Florida roads for testing. Last month, it was reported that Michigan may soon approve autonomous vehicle licensing as well.

Musk sees autonomous driving -- or autopilot -- as the next step in the evolution of vehicles, but isn't working on development quite yet. 

“We’re not focused on autopilot right now; we will be in the future,” Musk said. “Autopilot is not as important as accelerating the transition to electric cars, or to sustainable transport.”

Source: Bloomberg



Comments



Easy for you to say.
By Ammohunt on 5/7/2013 5:23:41 PM , Rating: 1
quote:
“The problem with Google’s current approach is that the sensor system is too expensive,” Musk said. “It’s better to have an optical system, basically cameras with software that is able to figure out what’s going on just by looking at things.”


What kind of processing power are we talking about to make accurate optical comparisons for navigation purposes on a vehicle moving at, say, 75 miles an hour?
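
For scale, a quick back-of-the-envelope calculation of how far the car travels between consecutive frames at that speed (the 30 fps camera frame rate is an assumption, not a spec from the article):

```python
MPH_TO_FT_PER_S = 5280 / 3600   # feet per second in one mph

speed_ft_s = 75 * MPH_TO_FT_PER_S   # = 110 ft/s at 75 mph
fps = 30                            # assumed camera frame rate
per_frame_ft = speed_ft_s / fps

print(f"{speed_ft_s:.0f} ft/s -> {per_frame_ft:.1f} ft of travel per frame")
# ~3.7 ft per frame: each frame must be fully processed in ~33 ms
```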




RE: Easy for you to say.
By daboom06 on 5/7/2013 5:45:20 PM , Rating: 2
I don't think switching from radar/LIDAR to optical measurements will necessarily increase the computational load. I'm pretty sure the outputs of all the sensors in the current Google cars are 'images' in a loose sense of the word. You can make a picture from a radar scan, for example, and the same processing would be used on that scan as on an optical image.

If you're thinking that switching to optical imaging will increase the amount of data through higher pixel counts, I'd say there's no obvious reason to use super-high-resolution cameras. They're not interested in avoiding mosquitoes.
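
For a sense of how pixel count drives the data rate, a quick comparison of uncompressed video rates (the resolutions, frame rate, and 3 bytes per pixel are illustrative assumptions):

```python
def raw_rate_mb_s(width, height, fps=30, bytes_per_px=3):
    """Uncompressed RGB video data rate in MB/s."""
    return width * height * bytes_per_px * fps / 1e6

print(f"720p: {raw_rate_mb_s(1280, 720):.0f} MB/s")   # ~83 MB/s
print(f"4K:   {raw_rate_mb_s(3840, 2160):.0f} MB/s")  # ~746 MB/s, ~9x more
```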


RE: Easy for you to say.
By maugrimtr on 5/8/2013 8:44:45 AM , Rating: 2
Pictures tend to be 2D representations; you can scan them all day, but that has limits. The human brain is hardwired to analyze images, including for depth perception. Computers are not even remotely close to our genius at the task.

The point of a LIDAR system is to build up a partial 3D map of an area by firing lasers at objects (hence the rotation of the sensor) and analyzing the back-scatter pattern. It's a far simpler system computationally, well developed, and therefore extremely reliable. It's used all over the planet for topographic mapping (and in orbit too).

So a car can either a) get lots of 2D photos and try to figure out what they mean using a supercomputer (the one in the meatbag driver's skull), or b) get a set of LIDAR point data that can be converted, with well-understood statistical analyses, into a 3D map in real time.
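
The well-understood analyses mentioned above can be as simple as binning the returned points into an occupancy grid. A minimal sketch, assuming toy (x, y) point data and arbitrary grid dimensions:

```python
import numpy as np

def occupancy_grid(points_xy, cell_ft=1.0, half_extent_ft=200.0):
    """Bin (x, y) LIDAR returns into a 2D occupancy grid centered on
    the car. A toy stand-in for real-time mapping code."""
    n = int(2 * half_extent_ft / cell_ft)
    edges = np.linspace(-half_extent_ft, half_extent_ft, n + 1)
    xs, ys = zip(*points_xy)
    grid, _, _ = np.histogram2d(xs, ys, bins=[edges, edges])
    return grid > 0  # True where at least one return landed

grid = occupancy_grid([(50.0, 0.0), (50.5, 0.2), (-30.0, 10.0)])
print(grid.sum(), "occupied cells")
```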


RE: Easy for you to say.
By dnd728 on 5/8/2013 9:42:14 AM , Rating: 2
Mobileye has been doing this kind of processing for years, on a single chip, with one camera or several.
iOnRoad has a smartphone app that does roughly that.

I think it can be done, and with less than a supercomputer.


RE: Easy for you to say.
By lelias2k on 5/8/2013 9:55:38 AM , Rating: 2
Great points. I wasn't aware of iOnRoad and found it really cool.

Furthermore, the computational power available nowadays is much higher than what we draw on in everyday computing.

Think about high-end Xeon processors and graphics cards: we're talking about teraflops available in a single chip. If these systems can work on a cell phone chip, imagine what they could do with one of those!

And let's not forget that we're still years away from deployment of such technology, which means we'll have chips that are even more powerful.

I think it will be fine. :)


RE: Easy for you to say.
By invidious on 5/8/2013 11:15:24 AM , Rating: 2
Wrong. iOnRoad does not even come close to the level of image processing that would be required for a driving computer. The fact that it only uses one camera should make that abundantly obvious.

iOnRoad only monitors what is happening directly in front of the car. Mapping a 3D environment from 360-degree snapshots is orders of magnitude more complex than measuring linear distance along one axis.

iOnRoad also does not make decisions; it only does analysis. Taking action requires a vast increase in computing complexity. Accelerating, braking, and steering are not binary choices. Each has a magnitude associated with it, and many driving scenarios require manipulating two at once.


RE: Easy for you to say.
By dnd728 on 5/8/2013 1:04:16 PM , Rating: 2
:)
iOnRoad was only an example of how a 2.5-person startup could pull off the image processing with a generic CPU and a generic camera. Still…
Decision making seems rather irrelevant here, since it would have to be done for LIDAR and radar just as well.
Cameras are not one-dimensional; they can cover well over 90 degrees (just horizontally), so covering 360 degrees should not be "orders of magnitude more complex".
Mobileye, which I also mentioned, has, as I recall, been able to process a full 360 degrees even on its older systems. Its systems are integrated into many luxury cars.
And cameras would probably be installed anyway, for recognizing objects, reading road signs…
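
As a quick sanity check on the 360-degree point, assuming the 90-degree per-camera field of view from the comment and an arbitrary overlap margin:

```python
import math

fov_deg = 90       # per-camera horizontal field of view (from the comment)
overlap_deg = 10   # assumed overlap between adjacent cameras

cameras = math.ceil(360 / (fov_deg - overlap_deg))
print(f"{cameras} cameras cover 360 degrees with {overlap_deg} deg overlap")
# -> 5 cameras; with zero overlap, 4 would suffice
```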


RE: Easy for you to say.
By Azethoth on 5/8/2013 12:21:21 AM , Rating: 2
I think the hard issues have more to do with optical illusions. Lasers and radar do not care about mirages, weird shadows, and such. Pure optical systems have to.

This is why you see multiple technologies used. It is not a processing issue when a baffling shadow or an apparent lake shows up in the road ahead; it is an algorithm/sensor issue. For now we solve it with sensors. One day we will get to Musk's pure-optical goal using better algorithms than we have today.
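
One way to picture the multi-sensor approach: act on a camera detection only when an active sensor reports a return at roughly the same range. A toy sketch with hypothetical names and thresholds:

```python
def confirmed_obstacle(camera_range_ft, lidar_range_ft, radar_range_ft,
                       tolerance_ft=10.0):
    """Cameras can be fooled by shadows and mirages; LIDAR and radar
    measure physical returns. Trust a camera detection only when an
    active sensor reports a return at roughly the same range."""
    if camera_range_ft is None:
        return False
    for active in (lidar_range_ft, radar_range_ft):
        if active is not None and abs(active - camera_range_ft) <= tolerance_ft:
            return True
    return False  # no physical return nearby: likely an optical illusion

# Camera "sees" a lake 80 ft ahead, but LIDAR and radar see open road:
print(confirmed_obstacle(80.0, None, None))  # False -> do not brake
```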


"You can bet that Sony built a long-term business plan about being successful in Japan and that business plan is crumbling." -- Peter Moore, 24 hours before his Microsoft resignation














botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki