
MIT researchers create world's first computer model able to adequately mimic the human visual system

Scientists in Tomaso Poggio's laboratory at the McGovern Institute for Brain Research at the Massachusetts Institute of Technology have developed a computational model of how the human brain processes visual information, one that specifically mimics how the brain recognizes street scenes.  The research could help repair damaged brain function while helping researchers unlock some of the brain's remaining mysteries.

Poggio's original intent behind the research has been to develop a model that accurately portrays the visual system, one useful not only to neuroscientists and psychologists but also for purposes related to computer science.  "That was Alan Turing's original motivation in the 1940s.  But in the last 50 years, computer science and AI have developed independently of neuroscience.  Our work is biologically inspired computer science."

In the enclosed image, Poggio's object-recognition model takes ordinary unlabeled digital images from the Street Scene Database as input and generates an annotation that identifies the important parts of the street scene.  The system is able to detect cyclists, buildings, trees, roads, and the sky.

One of the biggest obstacles to building better artificial intelligence is that the human brain is mysterious and extremely complicated to mimic.  While computers are obviously much faster, humans are smarter; bridging the gap between the two has been difficult.


Comments



RE: <no subject>
By Cogman on 2/27/2007 3:58:04 PM , Rating: 4
"Computers don't always do what you program them to do. " Umm, I think you need to retake Programming 101. That is the basic principle of programing is giving the computer a set of instructions to follow, no matter how complex the instructions it is always possible to predict what the computer will do next given a set of instructions. The only time a computer does not do what it is programmed to do is when there is a fault in the system, at that does not result in intelligence, instead it results in failure at worst or no noticeable change at best, but not life. A computer NEVER finds a "New Solution" to a problem, they only follow a logical progression based on data input. Even if a program was able to program itself you would still be able to predict what it will write or do based on the input given to it. Now, if somebody could prove me wrong, I would be impressed. But I'm pretty sure a machine that works on binary truths will never actually come up with a gray area reasoning that can't be predicted.


RE: <no subject>
By oTAL on 2/28/2007 5:50:12 AM , Rating: 2
Or maybe you need to actually learn something about AI.
Let's say you give a computer a set of possible operations and an objective. Then you tell it to learn (yes, this can be done... if you want to know more, investigate on your own) the best sequence of operations to achieve that goal.
The fun part is when something unexpected happens. After you investigate and understand the issue, you find that it does follow the program... but it wasn't something you explicitly programmed the machine to do.

A simple example in AI: you program a vacuum cleaner, and instead of giving it random movements or something, you give it a goal to maximize. You try simple goals... the obvious one would be to have the cleanest house, but that's hard for a machine to measure. What may seem like a solution is to make the "happiest" machine the one that gets the most dirt into its belly in the shortest amount of time / energy. Pretty simple, straightforward stuff. Now what you wouldn't expect is for the vacuum cleaner to start removing dirt from vases... or to bring dirt along from outside, spread it on the floor, and clean it up... it respects what you told it it could do... every boundary... and still it achieves unexpected results.
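To make that concrete, here is a minimal sketch in Python (my own illustration, not from any real product; the policies, reward, and numbers are made up). The agent is rewarded only for dirt picked up, and a policy that dumps its bag back on the floor and re-cleans scores higher than one that just cleans, without breaking any stated rule:

# Illustrative sketch only: reward counts nothing but dirt picked up.
def run_policy(policy, initial_dirt=10, steps=20):
    """Tiny world: dirt on the floor, dirt in the bag; reward = dirt picked up."""
    floor_dirt, collected, reward = initial_dirt, 0, 0
    for _ in range(steps):
        action = policy(floor_dirt, collected)
        if action == "clean" and floor_dirt > 0:
            floor_dirt -= 1
            collected += 1
            reward += 1               # rewarded for every unit picked up
        elif action == "dump" and collected > 0:
            floor_dirt += collected   # spill the bag back onto the floor
            collected = 0             # no penalty was ever specified
    return reward

def clean_only(floor_dirt, collected):
    return "clean"

def dump_and_reclean(floor_dirt, collected):
    # once the floor is clean, dump the bag and start over
    return "dump" if floor_dirt == 0 else "clean"

print("clean only:      ", run_policy(clean_only))        # -> 10
print("dump and reclean:", run_policy(dump_and_reclean))  # -> 19, the "happier" vacuum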
If you don't know anything about AI, then please just search a little before posting or downrating posts. It kind of gets old to see funny one-liners rated up to 5, and constructive posts that took some time downrated out of ignorance.


RE: <no subject>
By Cogman on 2/28/2007 8:55:57 AM , Rating: 2
I did not downrate your post (if you look, mine has been uprated), so don't jump to conclusions just because someone disagrees with you. We aren't here to see who can get the best rating for their post. Well, most of us anyway.

Well, I took a quick look at Wikipedia to see what the hell you are talking about and, guess what! Regardless of the naming of the method or the strategy it uses, a learning computer still takes known data, puts it through a programmed algorithm, and then makes a choice based on the data received. EXACTLY what the programmer tells it to do. Even the most complex systems do this.

You talk about "you give it a goal and then it achieves it" like this was some mystical set of programming instructions that can easily be written without any problem. You want to know why AI programming is so hard? It is because these "goals" have to be specifically programmed. And for the computer to learn, the machine has to be specifically programmed with something like "if the user does this x amount of times, then change strategy to x to compensate". I know about AI, and I also know about programming, so I suggest you learn what you are talking about before you start making claims about what a computer can do.
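For what it's worth, the kind of explicitly coded rule being described might look something like this hypothetical fragment (the threshold and strategy names are made up for illustration). The "learning" is just a counter and a branch the programmer wrote in advance:

# Hypothetical sketch of an adaptation rule written in advance by the programmer.
BLOCK_THRESHOLD = 3

def choose_strategy(times_user_blocked_move, current_strategy):
    # "if the user does this x amount of times, then change strategy"
    if times_user_blocked_move >= BLOCK_THRESHOLD:
        return "flank"            # switch to an alternative, pre-written plan
    return current_strategy       # otherwise keep the current plan

print(choose_strategy(1, "rush"))   # -> rush
print(choose_strategy(3, "rush"))   # -> flank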

You make it sound like every Tom, Dick, and Harry can make a program that can go out and learn to conquer the world just by giving it the instruction "Computer, your goal is world domination". But that is not the case in the slightest. The computer has to have every single situation planned for and programmed into it before it can pose more than a threat of blinking lights on a screen. You can't just GIVE a computer a goal. Everything has to be explicitly stated at one time or another. Again, I challenge you to prove me wrong. I went to Wikipedia, looked up AI, and found the exact same information. So go ahead, where am I lacking in logic here?

One last thing, about your "simple example in AI": if it was so simple, why did it take until somewhere around 2002 before any viable commercial product was available? If this is simple AI, it should be fairly easy to make such a thing. Just because a robot learns not to take path X every time does not mean it is operating in a way the programmer did not expect. They know it will do that because they specifically programmed it to change its sweeping pattern when it bumps into something. If you think I'm wrong, try pulling out the robot's memory and observing what path it takes (assuming it still works): every time it will follow the same path and bump into the same things almost perfectly. The only time it would not is if the programmer had put some random variable in it to make it look like it is making choices.
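Here is a tiny sketch of that point (purely illustrative, no real robot involved): with no random source the path is a pure function of the inputs, and even with a "random variable" it replays identically for the same seed.

import random

def sweep_path(bump_sequence, seed=None):
    # Hypothetical illustration: pick a turn after each bump.
    # With no random source the path is a pure function of the inputs;
    # with a seeded RNG it still replays identically for the same seed.
    rng = random.Random(seed)
    path = []
    for bump in bump_sequence:
        if seed is None:
            turn = "left" if bump == "wall" else "right"   # fixed rule
        else:
            turn = rng.choice(["left", "right"])           # "random" variable
        path.append(turn)
    return path

bumps = ["wall", "chair", "wall"]
print(sweep_path(bumps))            # identical on every run
print(sweep_path(bumps, seed=42))   # identical on every run with the same seed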

I believe someone has been watching too much of the Sci-Fi Channel, because you are speaking of some fantasy world where robots can do much more than they really can.


RE: <no subject>
By Cogman on 2/28/2007 9:11:18 AM , Rating: 2
Well, I stand corrected: http://www.dailytech.com/Robots+Swarms+Learn+to+Co...

It looks like the Swiss have JUST found a way to evolve robots, which is basically what humans have done. This could be interesting.

However, I still stand by what I posted earlier. The actions of these robots could be predicted with enough information, but it will get increasingly hard with each batch.


RE: <no subject>
By oTAL on 2/28/2007 12:48:53 PM , Rating: 2
Dude, you are repeating what I said... plus this is nothing new... the news post is about applying it to swarm-like behavior, and that has SOME novelty to it (I really liked the experiment and I believe it is very intelligent, but it is only slightly evolutionary and nothing like "the Swiss have JUST found a way to evolve robots").

I happen to know something about the subject, and I am currently a member of the reigning champion team in a European AI competition (a minor league in RoboLudens).
I am by no means a big expert, but I do know what I am talking about better than you do.
As for the downrating, I know it wasn't you (DailyTech doesn't allow rating on articles where you post, and removes previous ratings). I apologize for not having been clearer, but some of my 'you's were generic and not pointed at you specifically. One sometimes makes that kind of honest mistake; English is not my mother tongue, and I sometimes find it a lot less precise than my native language (in which such confusions are much harder to make).

Still on the ratings: I am not posting for high ratings, but if you value your time and still opt to give it away by posting what you know, then you want people to read it, and preferably find it useful. If I post something intelligent and it gets downrated, which means fewer people will read it, then maybe I won't be as eager to post next time. Time is money, after all ;)


"We shipped it on Saturday. Then on Sunday, we rested." -- Steve Jobs on the iPad launch











botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki