Think of a photograph and what comes to mind is a flat image with very little depth. The technology to capture true 3D images has been around for a while, but it isn't mainstream and is rarely seen.
Researchers at Stanford have created a new camera chip that can see in 3D, which could lead to better images, especially at higher ISO settings where grain is a big issue. Anyone who shoots with a digital camera offering adjustable ISO has seen the noticeable grain that shows up in high-ISO images. The quality of the camera determines how high the ISO setting can go before grain becomes a significant problem.
The new Stanford chip is rated at three megapixels, and rather than using a single large sensor, the prototype breaks the image up into many small, overlapping 16 x 16 pixel patches known as subarrays.
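To illustrate what "small, overlapping 16 x 16 patches" means, here is a rough software analogy. On the chip the subarrays are physical groups of pixels, each behind its own lens, not patches cut out in software; this sketch, with hypothetical sizes and strides, only shows the tiling geometry.

```python
def tile_overlapping(height, width, size=16, stride=8):
    """Return (top, left) corners of overlapping size x size patches.

    A stride smaller than the patch size makes neighboring patches
    overlap, loosely mirroring the prototype's overlapping subarrays.
    """
    corners = []
    for top in range(0, height - size + 1, stride):
        for left in range(0, width - size + 1, stride):
            corners.append((top, left))
    return corners

# A 32 x 32 pixel region yields a 3 x 3 grid of overlapping patches.
print(len(tile_overlapping(32, 32)))  # 9
```

Because the patches overlap, the same point in the scene lands in several subarrays at once, which is what makes the depth analysis described below possible.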
News.com reports that after a photo is taken with the prototype chip, processing software in the camera analyzes the slight differences in the location of common elements across each of the small arrays. Those positional differences are used to estimate the distance of one object in the frame from another, such as a wall.
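The underlying geometry is the classic triangulation used in stereo vision: a feature that shifts by a few pixels between two views separated by a known baseline must lie at a predictable depth. The sketch below uses that standard relation with made-up numbers; the article does not give the chip's actual focal length, subarray spacing, or algorithm.

```python
def depth_from_disparity(focal_length_px, baseline_mm, disparity_px):
    """Standard stereo triangulation: depth = f * B / d.

    focal_length_px: lens focal length expressed in pixels (hypothetical)
    baseline_mm:     separation between the two viewpoints (hypothetical)
    disparity_px:    pixel shift of the same feature between the views
    Returns the estimated depth in millimeters.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_mm / disparity_px

# A 2-pixel shift between views 0.5 mm apart with a 1000 px focal length:
print(depth_from_disparity(1000, 0.5, 2.0))  # 250.0 (mm)
```

Note the inverse relationship: nearer objects produce larger pixel shifts, which is why the analysis needs the same scene element to appear in multiple overlapping subarrays.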
Keith Fife, a researcher on the project, is quoted by CNET as saying, “In addition to the two-dimensional image, we can simultaneously capture depth info from the scene.”
There are still several caveats with the sensor technology. First, because the same subject is captured by many pixels, the overall resolution is lower than the raw pixel count of the sensor. The intensive in-camera processing required to render the image will also shorten battery life and reduce camera performance. Finally, the sensor can only record depth information for subjects that have texture and detail.