Google's "Unsupervised" Self-Learning Neural Network Searches For Cat Pics
June 26, 2012 2:30 PM
(Source: Pop Kitten)
Magic happens at Google Inc.'s X Laboratory, a secret research and development center located at an unknown location in the Bay Area of Northern California. Previously disclosed projects include the augmented reality "Google Goggles". Google X is even rumored to be working on a space elevator.
I. AI Neural Net Loves Cats
But one of the most fascinating -- and perhaps frightening -- Google X accomplishments has been its creation of one of the world's largest self-learning "unsupervised" neural networks. Consisting of 16,000 computer processors, the array is capable of complex tasks that are considered impossible using traditional algorithms. One such task is finding cute cats on the internet.
As a test of the nascent cognizant system, Electrical Engineering Professor Andrew Y. Ng and Google Fellow Jeff Dean fed the machine 10 million thumbnails of YouTube videos. Without being told exactly what to "look for", the network began to hierarchically arrange the data, merging duplicate features and grouping similar images together.
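Google's actual system was a far larger sparse deep network running across thousands of cores, but the core idea of learning features from unlabeled data can be sketched with a toy autoencoder. In this hypothetical example (none of the names or numbers come from Google's code), the network is never told what the recurring patterns in its inputs are; it discovers them simply by learning to reconstruct the data:

```python
import numpy as np

# Toy sketch of unsupervised feature learning (not Google's actual system):
# a single-layer autoencoder compresses inputs through a small hidden layer
# and reconstructs them. No labels such as "cat" are ever provided.

rng = np.random.default_rng(0)

def train_autoencoder(X, n_hidden=4, lr=0.1, epochs=500):
    n_features = X.shape[1]
    W = rng.normal(0, 0.1, (n_features, n_hidden))  # encoder weights
    V = rng.normal(0, 0.1, (n_hidden, n_features))  # decoder weights
    for _ in range(epochs):
        H = np.tanh(X @ W)          # hidden "feature" activations
        X_hat = H @ V               # reconstruction of the input
        err = X_hat - X
        # Gradient descent on mean squared reconstruction error
        dV = H.T @ err / len(X)
        dH = err @ V.T * (1 - H ** 2)
        dW = X.T @ dH / len(X)
        W -= lr * dW
        V -= lr * dV
    return W, V

# Unlabeled "images": two recurring patterns plus noise
patterns = np.array([[1, 1, 0, 0, 0, 0],
                     [0, 0, 0, 0, 1, 1]], dtype=float)
X = patterns[rng.integers(0, 2, 200)] + rng.normal(0, 0.05, (200, 6))

W, V = train_autoencoder(X)
X_hat = np.tanh(X @ W) @ V
print("mean reconstruction error:", np.mean((X_hat - X) ** 2))
```

After training, the hidden units respond selectively to the two recurring patterns, much as the Google network's "cat neuron" came to respond to cat-like images, only at a vastly smaller scale.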
One example was the cat. Thanks to the wealth of cat videos on YouTube, the cyber-brain eventually arrived at a single dream-like image representing the network's knowledge of what a cat looks like. The network was then able to recognize its favorite thing -- cat videos -- no matter what subtle variations merry YouTubers introduce in their felines' appearance.
The "cat neuron" holds the learned appearance of what a cat looks like.
[Image Source: Jim Wilson/The New York Times]
The significant part, say researchers, is that the network wasn't told what to look for. The researchers explained in an interview in The New York Times, "We never told it during the training, ‘This is a cat.' It basically invented the concept of a cat. We probably have other ones that are side views of cats."
II. Future Systems May Match or Beat Human Brain
Google researchers believe this capability is due to the fact that the network operates similarly to the visual cortex in the human brain. The visual cortex is thought to contain so-called "grandmother neurons", which store key images, such as your loved ones' faces. The system developed an idea of what a human face looks like, though it lacked the specificity of known faces stored in the human visual cortex.
The system taught itself what a human looks like. [Image Source: Google]
Dr. Ng explains, "A loose and frankly awful analogy is that our numerical parameters correspond to synapses [in the human brain]."
He says that although the network learned what a cat looks like and many basic human features, it still has far fewer connections ("synapses") than a human brain. In short, mankind is still winning versus his digital counterpart. Writes the team, "It is worth noting that our network is still tiny compared to the human visual cortex, which is a million times larger in terms of the number of neurons and synapses."
David A. Bader, executive director of high-performance computing at the Georgia Tech College of Computing, though, says that the team's findings indicate that mankind's era of superiority will be short-lived. He comments, "The scale of modeling the full human visual cortex may be within reach before the end of the decade."
In a difficult test of recognizing 20,000 object categories, the trained system performed better than any machine to date. Its final accuracy was 15.8 percent, a 70 percent relative improvement over the previous record-holder.
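The article does not state the previous record, but the 70 percent figure can be sanity-checked: a 15.8 percent accuracy is about 70 percent better than a prior score of roughly 9.3 percent (an inferred value, not one given in the article):

```python
# Sanity check of the reported relative improvement.
# The prior record of 9.3% is inferred from the two figures the
# article does give (15.8% accuracy, "70 percent better").
previous = 9.3
new = 15.8
improvement = (new - previous) / previous * 100
print(f"relative improvement: {improvement:.1f}%")  # close to 70%
```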
The work was presented at the 29th International Conference on Machine Learning in Edinburgh, Scotland.
The project is now headed out of the top-secret lab and into Google's server farms. Applications that it may be used in include improving results in Google's image search and adaptive speech recognition for Android mobile devices.
But Professor Ng has his sights set on a far more ambitious goal -- a machine that is capable of fully learning, developing into a
fully sentient digital system
. To get there he'll need to wait for the never-ending process of hardware improvements to reach a bit further and he'll also have to work on the fundamental algorithms.
The Google X system is close, but not quite there. He states, "It’d be fantastic if it turns out that all we need to do is take current algorithms and run them bigger, but my gut feeling is that we still don’t quite have the right algorithm yet."
[Source: The New York Times]
RE: The only way
6/26/2012 5:36:54 PM
It seems to me that it identified what was a cat without being told what a cat would look like.
RE: The only way
6/26/2012 8:47:14 PM
Yes, that is the point. It created its own parameters. That's the only way to make real intelligence, as that's the hallmark of what intelligence is.
I was thinking about the future of doing this on a single machine, or a robot. They won't remotely challenge humans at our current tech level, but it would be cool if a single device could do at least this basic bit of learning.
RE: The only way
6/27/2012 3:24:12 AM
A single device has been able to do basic learning for about 20 years now. The Google farm merely needs thousands of processors because it is fed with images, or worse, videos. This is a very inefficient method of handing data over to a neural network, as you send millions (or billions, in the case of video) of pixels just to transport a few hundred pieces of information, like what is in the image, do I know it, is it funny, etc.
What surprises me is that everybody seems to discuss intelligence in the comments, but there is no intelligent action of the system described. It merely managed to group videos by correlating their content. I would say Google is about 10 years behind Facebook on this one, as their system can differentiate between different human faces already, and not merely identify everybody as human.
RE: The only way
6/27/2012 2:18:19 PM
I think you are missing the point. Facebook has algorithms to identify different people in pictures; Google X is about an algorithm that gives the software the ability to identify whatever, without specific guidelines being entered, which is way more complex.