21 comment(s) - last by oTAL on Mar 6 at 6:10 AM

MIT researchers create world's first computer model able to adequately mimic human vision

Scientists in Tomaso Poggio's laboratory at the McGovern Institute for Brain Research at the Massachusetts Institute of Technology have developed a computational model of how the human brain processes visual information, one that specifically mimics how it recognizes street scenes.  The research could be used to help repair damaged brain functions while helping researchers unlock some of the remaining mysteries of the brain.

Poggio's original intent has been to develop a model that could accurately portray the visual system, one useful not only to neuroscientists and psychologists but also for computer science.  "That was Alan Turing's original motivation in the 1940s.  But in the last 50 years, computer science and AI have developed independently of neuroscience.  Our work is biologically inspired computer science," says Poggio.

In the enclosed image, Poggio's object-recognition model takes unlabeled digital images from a Street Scene Database as input and generates an annotation that marks the important parts of the street scene.  The system is able to detect cyclists, buildings, trees, roads, and the sky.

One of the biggest obstacles to developing better artificial intelligence is that the human brain is mysterious and extremely difficult to mimic.  While computers are obviously much faster, humans are smarter; building a bridge between the two has been difficult.

Comments


<no subject>
By Moishe on 2/27/2007 2:35:13 PM , Rating: 5
While computers are obviously much faster, humans are smarter

One of the problems that stands out to me about this comment from the article is that our perception of speed as king comes from the computer's ability to do math very quickly. There is no doubt that computers are faster than humans at raw computation. Computers are computation machines designed for that purpose alone.

If computation alone were all that was necessary to live a real life in a real culture, with real reasoning, computers would excel and we'd be obsolete. Obviously, it takes more than speedy computation to "recognize", to "reason", and to <insert abstract here>.

Computers will never be more than amazing calculators unless we can actually produce *life*. When we can create life, we'll be God. No matter how complex the machine or how convincing the program, computers are just doing what they were programmed to do and no more.

Not that we should stop trying... the entertainment value, the tool value, the pure science of AI alone is worth the effort.

RE: <no subject>
By oTAL on 2/27/2007 2:52:36 PM , Rating: 1
Truth be told, every living being is *kinda* doing what it was *programmed* to do....

Developments in A.I. will eventually lead to creatures with limited intelligence, but intelligent nevertheless...
Right now you could program a bug if you had the time, money, biological information, and motivation.
You can program by objectives and instincts, following layers of behaviors, modeling nature. Still, there'd be little use in making a machine as smart as a bug. When we get to mouse-like intelligence... or better yet dog-like... that's when we begin to see some uses.

The point here is this:
Computers don't always do what you program them to do. They follow the program, yes. But sometimes they surprise you by finding new solutions which you did not know existed. When the computer becomes "smarter" than the programmer, can you really tell it is only doing what the programmer told it to do? You have simple examples for that - a good programmer can make a tetris program that plays a lot better than himself.... or checkers... or chess... you teach him the goal and the rules.
If you want to create something close to biological intelligence, you have to teach the program what the rules of life are, and THAT is quite a challenge.
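The "teach it the goal and the rules" idea is easy to demonstrate. Below is a minimal sketch (a toy of my own, not any real game engine): the program is given only the rules of Nim and the win condition, and plain search finds moves nobody hard-coded.

```python
# Toy game: Nim. Players alternately take 1-3 stones; whoever takes
# the last stone wins. The program knows only these rules and the goal.

def best_move(stones):
    """Return (stones_to_take, can_current_player_force_a_win)."""
    fallback = (1, False)                  # some legal move, if all lose
    for take in (1, 2, 3):
        if take > stones:
            break
        if take == stones:                 # taking the last stone wins
            return take, True
        _, opponent_wins = best_move(stones - take)
        if not opponent_wins:              # leave the opponent a lost game
            return take, True
        fallback = (take, False)
    return fallback
```

From 5 stones it takes 1 (leaving a multiple of 4, a lost position); from 4 stones it reports that no winning move exists. The programmer never encoded the multiples-of-4 strategy; the search discovers it.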

RE: <no subject>
By Cogman on 2/27/2007 3:58:04 PM , Rating: 4
"Computers don't always do what you program them to do." Umm, I think you need to retake Programming 101. The basic principle of programming is giving the computer a set of instructions to follow; no matter how complex the instructions, it is always possible to predict what the computer will do next given a set of inputs. The only time a computer does not do what it is programmed to do is when there is a fault in the system, and that does not result in intelligence; it results in failure at worst or no noticeable change at best, but not life. A computer NEVER finds a "new solution" to a problem; it only follows a logical progression based on data input. Even if a program were able to program itself, you would still be able to predict what it will write or do based on the input given to it. Now, if somebody could prove me wrong, I would be impressed. But I'm pretty sure a machine that works on binary truths will never come up with the kind of gray-area reasoning that can't be predicted.

RE: <no subject>
By oTAL on 2/28/2007 5:50:12 AM , Rating: 2
Or maybe you need to actually learn something about AI.
Let's say you give a computer a set of possible operations and an objective. Then you tell it to learn (yes, this can be done... if you want to know more, investigate on your own) the best sequence of operations to achieve that goal.
The fun part is when something unexpected happens. After you investigate and understand the issue, you find that it does follow the program... but it wasn't something you programmed the machine to do.

A simple example in AI: you program a vacuum cleaner, and instead of giving it random movements or something, you give it a goal to maximize. You try simple goals... the obvious one would be the cleanest house, but that's hard for a machine to measure. What may seem like a solution is to make the "happiest" machine the one that gets the most dirt into its belly in the shortest amount of time / energy. Pretty simple, straightforward stuff. Now what you wouldn't expect is for the vacuum cleaner to start removing dirt from vases... or to bring dirt along from outside, spread it on the floor, and clean it up... it respects what you told it it could do... every boundary... still, it can achieve unexpected results.
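The dirt-maximizing goal in that example can be sketched in a few lines. Everything below is illustrative (my own toy grid and names, not any real vacuum's firmware): the agent is told only "collect as much dirt as possible" and greedily heads for the nearest dirt cell.

```python
# Toy dirt-maximizing agent on a grid. The programmer specifies only
# the goal (collect dirt); the path it takes is never written down.

def nearest_dirt(pos, dirt):
    """Pick the closest dirt cell by Manhattan distance."""
    return min(dirt, key=lambda d: abs(d[0] - pos[0]) + abs(d[1] - pos[1]))

def step_toward(pos, target):
    """Move one cell toward the target, one axis at a time."""
    x, y = pos
    tx, ty = target
    if x != tx:
        x += 1 if tx > x else -1
    elif y != ty:
        y += 1 if ty > y else -1
    return (x, y)

def run(pos, dirt, max_steps=50):
    """Greedy dirt-collecting loop; returns how much dirt was collected."""
    collected = 0
    for _ in range(max_steps):
        if not dirt:
            break
        if pos in dirt:
            dirt.remove(pos)          # swallow the dirt on this cell
            collected += 1
            continue
        pos = step_toward(pos, nearest_dirt(pos, dirt))
    return collected
```

Nothing here says where to go; give it a different grid and it finds a different route, which is exactly the "unexpected but within the boundaries" behavior described above.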
If you don't know anything about AI, then please search a little before posting or downrating posts. It kind of gets old to see funny one-liners rated up to 5 while constructive posts that took some time get downrated out of ignorance.

RE: <no subject>
By Cogman on 2/28/2007 8:55:57 AM , Rating: 2
I did not downrate your post (if you look, mine has been uprated), so don't jump to conclusions just because someone disagrees with you. We aren't here to see who can get the best rating for their post. Well, most of us anyway.

Well, I did a quick look at Wikipedia to see what the hell you are talking about and, guess what! Regardless of the naming of the method or the strategy used in the method, a learning computer still takes known data, puts it through a programmed algorithm, and then makes a choice based on the data received. EXACTLY what the programmer tells it to do. Even the most complex systems do this.

You talk about "you give it a goal and then it achieves it" like this is some mystical set of programming instructions that can easily be written without any problem. You want to know why AI programming is so hard? It is because these "goals" have to be specifically programmed. And for the computer to learn, the machine has to be specifically programmed with something like "if the user does this x amount of times, then change strategy to compensate." I know about AI, and I also know about programming, so I suggest you learn what you are talking about before you start saying what a computer can do.

You make it sound like every Tom, Dick, and Harry can make a program that can go out and learn to conquer the world just by giving it the instruction "Computer, your goal is world domination." But that is not the case in the slightest. The computer has to have every single situation planned for and programmed into it before it can pose more than a threat of blinking lights on a screen. You can't just GIVE a computer a goal. Everything has to be explicitly stated at one time or another. Again I challenge you to prove me wrong. I went to Wikipedia, looked up AI, and found the exact same information. So go ahead, where am I lacking in logic here?

One last thing: your "simple example in AI." If it was so simple, why did it take until somewhere around 2002 before any viable commercial product was available? If this is simple AI, it should be fairly easy to make such a thing. Just because a robot learns not to take path X every time does not mean it is operating in a way the programmer did not expect. They know that it will do that because they specifically programmed it to change its sweeping pattern if it bumps into something. If you think I'm wrong, try pulling out the memory of the robot and observing what path it takes (assuming it still works); every time it will follow the same path almost perfectly. The only time it would not is if the programmer had put some random variable in to make it look like it is making choices.

I believe someone has been watching too much of the Sci-Fi Channel, because you are speaking of some fantasy world where robots can do much more than they really can.

RE: <no subject>
By Cogman on 2/28/2007 9:11:18 AM , Rating: 2
Well, I stand corrected

It looks like the Swiss have JUST found a way to evolve robots, which is basically what humans have done. This could be interesting.

However, I still stand by what I posted earlier: the actions of these robots could be predicted with enough info. But it will get increasingly hard with each batch.

RE: <no subject>
By oTAL on 2/28/2007 12:48:53 PM , Rating: 2
Dude, you are repeating what I said... plus, this is nothing new... the news post is about applying it to swarm-like behavior, and that has SOME novelty to it (I really liked the experiment and I believe it is very intelligent, but it is only slightly evolutionary and nothing like "the Swiss have JUST found a way to evolve robots").

I happen to know something about the subject; I am currently a member of the reigning champion team in a European AI competition (a minor league in RoboLudens).
I am by no means a big expert, but I do know what I am talking about better than you do.
As for the downrating, I know it wasn't you (DailyTech doesn't allow rating on articles where you post, and removes previous ratings). I apologize for not having been clearer, but some of my 'you's were generic and not pointed at you specifically. One sometimes makes that kind of honest mistake; English is not my mother language, and I sometimes find it a lot less precise than my mother language (in which such confusions are much less likely to occur).

Still on the ratings: I am not posting for high ratings, but if you value your time and still opt to give it away by posting knowledge, then you want people to read it, and preferably find it useful. If I post something intelligent and it gets downrated, which means fewer people will read it, then maybe I won't be as eager to post next time. Time is money, after all ;)

RE: <no subject>
By crimson117 on 2/27/2007 5:03:18 PM , Rating: 2
a good programmer can make a tetris program that plays a lot better than himself.... or checkers... or chess... you teach him the goal and the rules.

It's not like the computer asks you for the goal and the movement rules, then goes and plays by itself. Otherwise a chess manual text document could play, right?

You still have to program it to analyze millions of possible moves and the resulting outcomes, then use statistics to determine the path to take that will most likely lead to a favorable outcome. That's still just statistical calculation, though. Any human could do it, just a lot slower than a computer.

Even if you teach it to "learn" an opponent's playing style, such as watching what pieces are most often involved in that opponent's wins, and then making it a priority to capture those pieces early on, you've still programmed in that set of rules - the computer just has to fill in the blanks with its experiences. It will never write its own rules and be able to in turn teach them back to you.
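That "fill in the blanks" rule can be written out explicitly. The helper below is hypothetical (my own names, not any real engine), but it shows the point being made: the rule itself - count the pieces involved in the opponent's wins and target the top one - is authored by the programmer, and experience only supplies the counts.

```python
from collections import Counter

def priority_target(game_logs):
    """game_logs: iterable of (pieces_involved, opponent_won) pairs."""
    counts = Counter()
    for pieces, opponent_won in game_logs:
        if opponent_won:
            counts.update(pieces)
    # Target the piece most often involved in the opponent's wins.
    return counts.most_common(1)[0][0] if counts else None

logs = [(["queen", "knight"], True),
        (["queen", "bishop"], True),
        (["rook", "pawn"], False)]
```

On these logs it targets the queen; change the logs and the target changes, but the counting rule never does.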

The student may be millions of times faster, but the student will never actually surpass the teacher :)

RE: <no subject>
By oTAL on 3/6/2007 6:10:51 AM , Rating: 2
You are wrong... A simple example is checkers... In this simple case it's not about speed... it's about the amount of experience that can be accumulated (the use of such experience is, in a way, a form of 'intelligence'). Teaching a computer how to play and making it play against itself will generate, with a well-built program and given enough time, an unbeatable opponent. While the programmer may know nothing beyond the main rules, the computer has learned, like humans do, from its own experience. The difference is that for the computer it is easier to recall that experience and not repeat the same mistake twice. The human advantage is in understanding and extrapolation: in highly complex environments humans can extrapolate 'rules of thumb'. If a human player makes a mistake, he learns not only to avoid that mistake but to avoid that whole 'family of mistakes', encompassing many similar situations with similar outcomes. Computers have a harder time understanding their mistakes and extrapolating them to unexplored situations.
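Checkers itself is far too big to show here, but the self-play idea works on any game. A toy sketch (a tiny take-1-to-3 subtraction game, my own illustration, not how any real checkers program was built): the program plays random games against itself, remembers which moves led to wins, and ends up preferring the objectively correct moves.

```python
import random

def train(games=5000, stones=10, seed=0):
    """Learn move win-rates for a take-1-to-3 game purely by self-play."""
    rng = random.Random(seed)
    stats = {}   # (stones_left, take) -> [wins, plays]
    for _ in range(games):
        s, history = stones, []
        while s > 0:
            take = rng.choice([t for t in (1, 2, 3) if t <= s])
            history.append((s, take))
            s -= take
        # Whoever made the last move took the last stone and won.
        for back, (state, take) in enumerate(reversed(history)):
            record = stats.setdefault((state, take), [0, 0])
            record[0] += (back % 2 == 0)   # win for the mover at this ply
            record[1] += 1
    return stats

def best_take(stats, s):
    """Pick the move with the best learned win-rate from s stones."""
    def rate(t):
        wins, plays = stats.get((s, t), (0, 1))
        return wins / plays
    return max((t for t in (1, 2, 3) if t <= s), key=rate)
```

After training, it prefers leaving the opponent a multiple of 4 stones - the known winning strategy - even though that strategy appears nowhere in the code.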

Self Driving Cars
By bobobeastie on 2/27/2007 3:28:56 PM , Rating: 2
I wonder why it wasn't mentioned that this system would be useful for self driving cars. Combine this with GPS and other obstacle avoidance systems and you can play Grand Theft Auto 4 on your way to work.

RE: Self Driving Cars
By TomZ on 2/27/2007 5:19:49 PM , Rating: 2
I think the point of the research is to understand how the mind works, not to recognize specific objects. After all, machine vision is pretty advanced, and I would think that picking out items in a scene as demonstrated above using conventional machine vision approaches would be pretty practical. Anybody who knows more about the subject care to comment?

RE: Self Driving Cars
By msva124 on 2/27/2007 8:42:48 PM , Rating: 2
This so-called breakthrough is nothing new. It's the same pattern recognition NNs that have been in use for years.

“The fact that this system seems to work on realistic street scene images is a concept proof that the activity of neurons as measured in the lab is sufficient to explain how brains can perform complex recognition tasks.”

Complete nonsense, but it'll be sufficient to win more funding. That's how these things work. Run out of money, come out with a faux-breakthrough that's a rehash of something from thirty years ago, get more.

RE: Self Driving Cars
By Belegost on 2/28/2007 1:18:44 AM , Rating: 2
Seriously, how the hell does this make it into Dailytech?

I read the paper; it's a four-layer feed-forward network using two different types of neurons: an averaging neuron and a max neuron. This is hardly anything groundbreaking. The only place they try anything new is in claiming that this matches some neural model; of course, the fact that there are dozens of different neural models of varying acceptance floating around doesn't seem to affect them.
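For readers curious what "averaging neuron" and "max neuron" mean in practice, here is a toy of my own (a drastic simplification, not the paper's code): one layer pools linearly over local windows, the next keeps only the strongest response.

```python
# Two pooling unit types over a 2-D array: linear averaging and max.

def pool(image, size, op):
    """Slide a non-overlapping size x size window over a 2-D list and apply op."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(0, h - size + 1, size):
        row = []
        for j in range(0, w - size + 1, size):
            window = [image[i + di][j + dj]
                      for di in range(size) for dj in range(size)]
            row.append(op(window))
        out.append(row)
    return out

def average(xs):
    return sum(xs) / len(xs)

# Layer 1: local averaging (linear pooling); layer 2: max pooling.
img = [[0, 0, 2, 2],
       [0, 0, 2, 2],
       [4, 4, 0, 0],
       [4, 4, 0, 0]]
s1 = pool(img, 2, average)   # [[0.0, 2.0], [4.0, 0.0]]
c1 = pool(s1, 2, max)        # [[4.0]]
```

Stacking a few such layers gives exactly the kind of feed-forward hierarchy described, which is why it reads as standard fare rather than a breakthrough.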

This was a paper written to get a publication to ensure grant money. We all write these papers when needed, so no blame on them, but it isn't something that needs press coverage.

RE: Self Driving Cars
By therealnickdanger on 2/28/2007 8:55:35 AM , Rating: 2
I think this technology will see itself on the enforcement end of traffic before it ever reaches the automotive world:

A system that, when combined with laser/radar, will independently and accurately catalogue individual vehicles, their speeds, and other driving actions.

Combine that with facial recognition software and you've got an elaborate criminal detection system anywhere there is a camera.

Hmm...
By soupmoose on 2/27/2007 1:43:02 PM , Rating: 2
Anyone else notice that it labelled the construction sign as a pedestrian? :)

RE: Hmm...
By vdig on 2/27/2007 1:56:43 PM , Rating: 2
Yep. That is supposed to be a pedestrian? What a cut out!

Still, really cool to see such things being developed. Sure beats the alternative - having cars make noise for the blind. I mean, seriously....

RE: Hmm...
By joust on 2/27/2007 3:25:00 PM , Rating: 2
haha well, I suppose you could call it a "pedestrian," in an extremely literal sense. After all, it has feet ("ped").

RE: Hmm...
By Justin Case on 2/27/2007 8:50:47 PM , Rating: 2
There's a pedestrian behind it. The software is using a wallhack.

iRobot getting closer...
By vortmax on 2/27/2007 1:38:41 PM , Rating: 2
These developments are the type that bring smart AI closer to reality. Once they develop the software/hardware to mimic the human senses accurately, then things will begin to accelerate.

RE: iRobot getting closer...
By cplusplus on 2/27/2007 2:03:27 PM , Rating: 2

I'd rather get one of the Haley Joel Osment robots from AI: Artificial Intelligence!

What a huge breakthrough!
By msva124 on 2/27/2007 5:52:25 PM , Rating: 2
It was so good, they had to do a press release about it.


Copyright 2014 DailyTech LLC.