Eddie, pictured left, is the RPI researchers' creation. He behaves realistically and implements human-like psychology: he can understand complex concepts like beliefs, and he has his own wants.  (Source: RPI/Second Life)
Researchers simulate a child's thought processes, further blurring the line between artificial and biological intelligence

While some skeptics, such as Apple co-founder Steve Wozniak, dismiss artificial intelligence, insisting that robots will never reach a human level of thought and behavior, the reality is that artificial intelligence is fast approaching human-level thought processes.  Battlefield robots are making life-and-death decisions, and an international panel recently met to discuss whether robots could be tried for war crimes.

In vehicles, DailyTech witnessed firsthand as the GM-sponsored entrant in DARPA's robotic driving challenge navigated a complicated course with efficiency matching or surpassing that of a human driver.  Meanwhile, SRI International is working to create DARPA-funded robotic assistants that learn and organize thoughts in a human-like fashion.

As robots become more and more human-like, we face a duality.  On the one hand, in creating something human-like we learn more about what makes us human; on the other hand, in creating a replica of ourselves we blur the line between human and machine.  The virtual world and the real world are also merging: scientists have already demonstrated the first "mixed reality" systems -- systems in which a virtual device and a real-world device were indistinguishable.

Continuing along the path of convergence between biology and the digital world, researchers at Rensselaer Polytechnic Institute (RPI) are developing complex artificial intelligence to control characters in the popular online world Second Life.  These characters will be able to hold beliefs, distinguish between the beliefs of human and AI characters, and manipulate the behavior of both human and AI characters based on those beliefs.

The team unveiled its first creation, a 4-year-old child avatar dubbed "Eddie," at an AI conference.  The avatar not only follows the aforementioned intelligence goals, developing beliefs, but also behaves psychologically like a human child.  Researcher Selmer Bringsjord explains the creation process, stating, "Current avatars in massively multiplayer online worlds — such as Second Life — are directly tethered to a user’s keystrokes and only give the illusion of mentality.  Truly convincing autonomous synthetic characters must possess memories; believe things, want things, remember things."

You won't be seeing a character like Eddie walking around on the street for a while yet, explains Bringsjord -- Eddie's complex behavior requires the processing power of a supercomputer.  That processing power is leveraged to combine traditional logic-based artificial intelligence with computational cognitive modeling techniques.

Understanding, predicting, and being able to manipulate the behavior of humans is one benchmark of intelligence, and the set of principles describing how this works in the human mind is known, appropriately, as "theory of mind."  The RPI team's research marks one of the largest efforts to date to engineer software based on the principles of theory of mind.  The researchers, implementing the part-logic, part-mathematics theory, impart to the AI-controlled avatars an understanding of such "human" concepts as betrayal, revenge, and evil.

Similarly, they employ human-like stages of cognitive development.  For example, Eddie behaves correctly in a false-belief test.  In a typical false-belief test, a person observes an object, in this case a virtual teddy bear.  While that person is out of the room, another person moves the object to a different location.  When the first person returns, an adult observer expects them to look in the object's old location, knowing that they are unaware of the move.  A child of about four or younger, however, will predict that they will look in the new location, not understanding that the returning person could not have seen the move.  In an example of a case where it's right to be wrong, Eddie predicted that the person would look in the new location -- the wrong answer, but the proper "human" behavior for a child.
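The belief-tracking behind that scenario can be sketched in a few lines of code.  The snippet below is not the RPI system (which combines a theorem prover with cognitive modeling on a supercomputer); it is a minimal, hypothetical Python model in which an observer's belief about an object's location only updates while the observer is present, and a child_mode flag stands in for Eddie's "age setting."

```python
# Minimal sketch (hypothetical, not RPI's code) of the false-belief scenario.

class Observer:
    """Tracks what one person believes about object locations."""
    def __init__(self, name):
        self.name = name
        self.present = True
        self.beliefs = {}                     # object -> believed location

    def observe(self, obj, location):
        # Beliefs only update while the observer can actually see the move.
        if self.present:
            self.beliefs[obj] = location

    def where_will_look(self, obj):
        return self.beliefs.get(obj, "unknown")


def false_belief_prediction(child_mode):
    """Predict where the returning person will look for the teddy bear."""
    person = Observer("returning person")
    person.observe("teddy bear", "cabinet")      # sees the bear placed in the cabinet

    person.present = False                       # leaves the room
    true_location = "toy chest"
    person.observe("teddy bear", true_location)  # no effect: they cannot see it

    if child_mode:
        # A young child (and Eddie in child mode) answers with the bear's actual
        # location, failing to model the other person's false belief.
        return true_location
    # Adult-like reasoning: predict the last location the person saw.
    return person.where_will_look("teddy bear")


print(false_belief_prediction(child_mode=True))    # toy chest (child-like answer)
print(false_belief_prediction(child_mode=False))   # cabinet   (adult-like answer)
```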

Eddie can also be digitally switched to adult-like reasoning, in which case he makes the correct prediction.  The reasoning is accomplished by an automated theorem prover: an interface takes conversational English typed in Second Life and turns it into formal logic, which the prover then processes.  A video clip of Eddie in action can be viewed here.
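To give a rough feel for that English-to-logic pipeline, here is another hypothetical sketch.  It uses naive pattern matching and simple forward chaining over hand-written rules rather than a real theorem prover, and all of the predicates and rules are invented for illustration.

```python
# Hypothetical sketch of turning chat utterances into logical facts and
# deriving a prediction by forward chaining (not the RPI system's prover).
import re

facts = set()

# Hand-written rules: (set of premises, conclusion).
rules = [
    ({"saw(person, bear, cabinet)", "absent(person)"},
     "believes(person, bear, cabinet)"),
    ({"believes(person, bear, cabinet)"},
     "will_look(person, cabinet)"),
]

def parse(utterance):
    """Toy English-to-logic step covering two sentence patterns."""
    m = re.match(r"(\w+) saw the (\w+) in the (\w+)", utterance)
    if m:
        return f"saw(person, {m.group(2)}, {m.group(3)})"
    if re.match(r"(\w+) left the room", utterance):
        return "absent(person)"
    return None

def forward_chain():
    """Apply rules repeatedly until no new facts can be derived."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

for line in ["Alice saw the bear in the cabinet", "Alice left the room"]:
    fact = parse(line)
    if fact:
        facts.add(fact)

forward_chain()
print("will_look(person, cabinet)" in facts)   # True
```

A real prover works over a far richer logic, but the flow -- utterance in, formal representation, inference, prediction out -- has the same general shape.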

The RPI research is sponsored by IBM.  The team's ultimate goal is to place humans in a Star Trek-like holodeck populated with projected virtual characters that behave like humans.  The researchers say such a simulation could, in theory, be accomplished by leveraging the processing power of RPI's Computational Center for Nanotechnology Innovations (CCNI) and the Experimental Media and Performing Arts Center (EMPAC).

With over 100 teraflops of computing power, the CCNI is the most powerful university supercomputer in the world.  It is composed of massively parallel Blue Gene supercomputers, POWER-based Linux clusters, and AMD Opteron-based clusters.  And if the RPI team's success continues, it may soon be thinking much like a human.



Comments



Inadvertent Leading...
By Xodus Maximus on 3/17/2008 12:54:21 AM , Rating: 5
I looked at the videos and skimmed the docs, and I have to cry foul at the conclusions being drawn.

Does the AI have its own concept of what a briefcase is, or of what the bounds of an object are, or what it means to move an object? I very much doubt it.

Instead, the "AI" was programmed with the concepts to be used in their tests, through the eyes and minds of the programmers. That sadly leads to the stated results because of the way the scenarios were presented, not because of any reasoning capability of its own.

Such research with overreaching conclusions angers me to no end, because although it may serve as a way to get a predictable semi-autonomous response, IT IS NOT AI. It is the same "canned response" garbage we always seem to get from typical media coverage of the subject of AI...




RE: Inadvertent Leading...
By xsilver on 3/17/2008 1:01:58 AM , Rating: 5
Exactly, the real test comes when it is asked an open-ended question it is not programmed to respond to.
does it say:
1) does not compute?
2) self destruct?
3) kill all humans?


RE: Inadvertent Leading...
By BladeVenom on 3/17/2008 5:42:35 AM , Rating: 5
I agree. They are still nothing more than calculators. The programming has become a lot more complex, and they calculate a lot faster, but they still don't have any human-like intelligence.


RE: Inadvertent Leading...
By daftrok on 3/17/2008 4:17:58 PM , Rating: 4
Or as the media would spin those responses:
1) Does it feel confused?
2) Suicidal?
3) Homicidal?


RE: Inadvertent Leading...
By captchaos2 on 3/17/2008 1:14:36 AM , Rating: 3
Agreed. Until someone writes a program that can learn from mistakes and adjust its own behaviors, form new ideas/concepts, and develop its own personality based on its own experiences, I wouldn't call it an AI.


RE: Inadvertent Leading...
By smitty3268 on 3/17/2008 3:47:38 AM , Rating: 2
While I agree this isn't AI like the article makes it out to be, there have been gaming AIs that have learned from their mistakes for a long, long time now. Really, the AI you are talking about is just the next level up, one that is able to respond to many different situations rather than the specific ones they're able to handle now. But that could be quite a ways in the future; progress in AI has always come much more slowly than the researchers working on it expected, and they've started to appreciate just how complex the human mind really is.


RE: Inadvertent Leading...
By omnicronx on 3/17/08, Rating: 0
RE: Inadvertent Leading...
By omnicronx on 3/17/2008 8:46:28 AM , Rating: 2
Furthermore, they have been able to change the thought process to act more like a child or more like an adult. The robot will end up with a different, independent result that depends on the current 'age setting' and cannot merely be an a, b, or c answer. The robot adapts; that's a big step towards A.I. in my mind.


RE: Inadvertent Leading...
By Xodus Maximus on 3/17/2008 10:24:14 AM , Rating: 5
quote:
It takes different variables into account, before it makes a decision


So does every other computer program out there (they take variables and 'reason' the way the programmer wants them to, to achieve the result initially desired). My point is that the way it comes to the decision is fixed, or more appropriately scripted, if you will.

They did nothing more than reinvent a method of scripting with a custom interpreter. The age factor is nothing more than one of those fixed variables in a fixed algorithm, sorry.

Let me explain what I mean better: in their example, you could "ask the question" a million times and get the same response; tweaking the age factor changes the response because the researchers designed it so.
What should happen is that after a few times it should give the opposite answer; a child would do so automatically, and an adult would ask "why do you ask again, was the first answer not correct?"
My point is that with their system you can program it to behave this way, but given the first program, the second part should emerge naturally if it were a true A.I. modeled on human thought.


Misconception on AI
By mkalinski on 3/17/2008 5:52:54 AM , Rating: 4
Some of you have too many Hollywood concepts of AI.

Put simply, AI is an agent that is able to react to stimuli without preset programs (i.e. not hard-coded 1+1 = 2, but rather something that has to learn, much like we do in 1st grade!), being given the ability to learn from inputs instead.

AI agents have been around for a long time; using Kohonen maps and neural networks of 3+ layers, it's amazing what you can "teach" a program to learn.

Anyway, I guarantee that within 15-20 years our world will be filled with AI. Most, I presume, will take on mundane roles: driving, bin pickup, street cleaning, etc.

But yeah, stop thinking T2 or other Hollywood crap; think instead of much-needed automation that you don't have to continually "update" because it adapts and learns around you. A "vacuum bot," for example, may learn that when you are not home it will go and clean, and when you get home it will go recharge; it will learn where you put furniture, adapt when furniture is moved, learn where the usual dirty spots are and visit those more often, etc.

I CAN'T wait... I always said I will know the future has arrived when I see either flying cars or robots taking over some mundane roles.




RE: Misconception on AI
By pxavierperez on 3/17/2008 7:11:47 AM , Rating: 2
I agree with you. Honda's humanoid robot was quite impressive.


RE: Misconception on AI
By v1001 on 3/17/2008 7:15:40 AM , Rating: 2
Yeah, a cleaning robot would be awesome. I don't know, we could call it something cool like 'Roomba'.


RE: Misconception on AI
By darkpaw on 3/17/2008 9:30:14 AM , Rating: 2
As soon as they were smart enough to move all my kids' crap off the floor and then clean it, I'd buy one in a heartbeat.


Not as smart as a human!
By SeeManRun on 3/17/2008 12:14:51 AM , Rating: 1
I robot will never be as smart as a human until it can disobey orders. And then they will be useless to us.




RE: Not as smart as a human!
By SeeManRun on 3/17/2008 12:15:33 AM , Rating: 2
A robot will never be as smart as a human until it can disobey orders. And then it will be useless to us.


RE: Not as smart as a human!
By DJMiggy on 3/17/2008 12:29:49 AM , Rating: 3
Boy you could say that again! Wait a minute?! hehe


RE: Not as smart as a human!
By waltzendless on 3/17/2008 2:24:00 AM , Rating: 2
think: The Matrix


RE: Not as smart as a human!
By Samus on 3/17/2008 3:04:51 AM , Rating: 1
Do you guys think anyone is gonna see SkyNet coming, or am I just going to be incinerated in my sleep any night now when the bombs drop? ;)


elementary, dear data
By Gul Westfale on 3/17/2008 12:15:34 AM , Rating: 2
it is only normal that software will take advantage of increased hardware power. how long it will take for that hardware to become as powerful as the brain (and maybe it's not the raw power, but how it all interconnects?) is something best left for scientists to discuss, but i do believe that we will see a "real" AI that can adapt and learn and evolve in my lifetime. at least i hope so.

and cut it out with the lame skynet jokes. terminator got bad after T2.




RE: elementary, dear data
By bryanW1995 on 3/17/2008 12:42:06 AM , Rating: 3
yeah, but it got really good with the sarah connor chronicles...that chick is smokin' hot!


RE: elementary, dear data
By tubalcain on 3/17/2008 12:45:41 AM , Rating: 2
Wasn't she the Queen of Sparta?


RE: elementary, dear data
By pxavierperez on 3/17/2008 7:04:21 AM , Rating: 2
oooh, no wonder she looked familiar.


RE: elementary, dear data
By omnicronx on 3/17/08, Rating: -1
Get a life, er... Second Life...
By CyborgTMT on 3/17/2008 1:50:09 AM , Rating: 5
So soon people with no lives can interact with things that have no life.

I can't wait for the news headline about one of these people applying for a marriage license with an AI.




By Clauzii on 3/17/2008 7:43:48 AM , Rating: 2
Amen to that!


Come on....
By clovell on 3/17/2008 2:16:09 PM , Rating: 5
Jason,

I've noticed a pattern over the last 2-3 months of your writing sliding ever further into sensationalism. I'm not trying to bash you here, but I feel like it needs to be pointed out. Here are just a couple of examples from this article:

> ...the reality is that artificial intelligence is fast approaching human level thought process.

No, it's not. If you read Mr. Asher's article, and accept the premise that roaches and people are very different, this is easy to see.

> Battlefield robots are making life and death decisions

Again, no. Mr. Hill's article plainly states that such robots are being controlled by a soldier, who is making the decisions.

The negative spin you put on the perils of emerging AI technology just seems a bit over the top. It's getting to where I can pick out an article as being written by you within the first 2-3 sentences, if not from the title alone.

You seem to have stopped taking the time to respond to any critiques of your articles, as well. I know you're writing a lot more for DT than you used to, but at what point does quality become more important than quantity?

Respectfully.




RE: Come on....
By SilthDraeth on 3/18/2008 12:41:23 AM , Rating: 2
I have to agree. This is a tech news site. The people who frequent it probably read as many articles as they can on subjects they wish to learn about. I know I do.

I often open AnandTech and middle-click about 10 articles, then browse to DailyTech and click a ton more, not to mention all of the related news articles.

I care more about accuracy of information than sensationalism, though. If I want to read sensationalist titles, I can always browse to CNN or Fox or just about any other news website.


The AI Operating System?
By jkowen on 3/17/2008 5:32:09 AM , Rating: 4
Anyways... I hope those super AI computers will run on Vista! "The program KILL HUMANS needs your permission to continue."




By SlyNine on 3/17/2008 1:11:30 AM , Rating: 2
Umm, SkyN... ahh nevermind.




intelligence
By DeepBlue1975 on 3/17/2008 1:30:17 PM , Rating: 2
Intelligence comes more from the ability to perceive the environment and react to it, in such a way that even when massive environmental changes occur, the individual can still respond to the new conditions in an adaptive manner (as opposed to a disruptive one).

Hmm... My definition has a huge problem: from that point of view, most humans are not intelligent at all, as they complain about changes and try to turn things back to the old scheme instead of trying to adapt to the new one :D (J/K)




Second Life Players
By Yawgm0th on 3/17/2008 10:03:49 AM , Rating: 1
Please change this to your desktop background if you play Second Life:
http://icanhascheezburger.files.wordpress.com/2008...

I apologize for the use of a lolcat, but it is very prudent in this case.




"And boy have we patented it!" -- Steve Jobs, Macworld 2007

Related Articles













botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki