Eddie, pictured left, is the RPI researchers' creation. He behaves realistically and models human-like psychology. He can understand complex concepts such as beliefs, and has wants of his own.  (Source: RPI/Second Life)
Researchers simulate childhood thought process, further blurring the line between artificial intelligence and biological intelligence

While some skeptics, such as Apple co-founder Steve Wozniak, dismiss artificial intelligence, insisting that robots will never reach a human level of thought and behavior, the reality is that artificial intelligence is fast approaching human-level thinking.  Battlefield robots are making life-and-death decisions, and an international panel recently met to discuss whether robots could be tried for war crimes.

In vehicles, DailyTech witnessed firsthand the GM-sponsored DARPA robotic driver navigate a complicated course with efficiency matching or surpassing that of a human.  Meanwhile, SRI International works to create DARPA-funded robotic assistants which learn and organize thoughts in a human-like fashion.

As robots become more and more human-like, we face a duality.  On the one hand, in creating something human-like we learn more about what makes us human; on the other hand, by creating a replica of man, the line between human and machine becomes blurrier.  As we enter the future, the virtual world and the real world are merging into one.  Scientists have already demonstrated the first "mixed reality" systems -- systems in which a virtual device and a real-world device were indistinguishable.

Continuing along the path of convergence between biology and the digital world, researchers at the Rensselaer Polytechnic Institute (RPI) are developing complex artificial intelligence to control characters in the popular online game Second Life.  These characters will be able to hold beliefs, distinguish between the beliefs of human and AI characters, and manipulate the behavior of both based on those beliefs.

The team unveiled their first creation, a 4-year-old child avatar dubbed "Eddie", at an AI conference.  The avatar not only follows the aforementioned intelligence goals, developing beliefs, but also behaves psychologically like a human child.  Researcher Selmer Bringsjord explains the creation process, stating, "Current avatars in massively multiplayer online worlds — such as Second Life — are directly tethered to a user’s keystrokes and only give the illusion of mentality.  Truly convincing autonomous synthetic characters must possess memories; believe things, want things, remember things."

You won't be seeing a character like Eddie walking around on the street for a little while, explains Bringsjord -- Eddie's complex behavior requires the processing power of a supercomputer.  That processing power is leveraged to combine traditional logic-based artificial intelligence with computational cognitive modeling techniques.

Understanding, predicting, and being capable of manipulating the behavior of humans is one benchmark of intelligence, and the principles behind how this works in the human mind are known, appropriately, as the "theory of mind".  The RPI team's research marks one of the largest efforts to date to engineer AI based on the principles of the theory of mind.  The researchers, implementing the part-logic, part-math theory, impart on the AI-controlled avatars an understanding of such "human" concepts as betrayal, revenge, and evil.

Similarly, they employ human-like stages of cognitive development.  For example, Eddie behaves correctly in a false-belief test.  In a typical false-belief test, a person observes an object -- in this case a virtual teddy bear -- being placed in a location.  When that person leaves the room, another person moves the object somewhere else.  Upon the first person's return, an adult observer expects them to look in the object's old location, understanding that they have no knowledge of the move.  A child four years old or younger, however, will predict that the person looks in the new location, not grasping that the move went unseen.  In an example of a case where it's right to be wrong, Eddie gave the child's "false" answer, the proper "human" behavior for a four-year-old.
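
The mechanics of such a test are easy to sketch in code.  The following Python example is purely illustrative -- it is not the RPI team's implementation, and every name in it (Agent, move_object, predict_search) is invented -- but it shows how a simulation can track the true world state separately from each agent's beliefs and switch between child-like and adult-like predictions:

```python
# Hypothetical sketch, not the RPI team's code: a false-belief test modeled by
# keeping the true world state separate from each agent's belief state.

class Agent:
    def __init__(self, name):
        self.name = name
        self.beliefs = {}  # object -> location the agent believes it is in

    def observe(self, obj, location):
        self.beliefs[obj] = location


def move_object(world, obj, new_location, observers):
    """Move obj in the real world; only agents present observe the move."""
    world[obj] = new_location
    for agent in observers:
        agent.observe(obj, new_location)


def predict_search(reasoner_age, subject, world, obj):
    """Where will `subject` look for obj, according to a reasoner of this age?"""
    if reasoner_age < 4:
        # Child-like reasoning: cannot separate its own knowledge of the world
        # from the subject's beliefs, so it predicts the true (new) location.
        return world[obj]
    # Adult-like reasoning: answers from the subject's possibly false belief.
    return subject.beliefs[obj]


world = {"teddy_bear": "toy_chest"}
alice, bob = Agent("Alice"), Agent("Bob")
alice.observe("teddy_bear", "toy_chest")
bob.observe("teddy_bear", "toy_chest")

# Alice leaves the room; Bob moves the bear. Alice does not see the move.
move_object(world, "teddy_bear", "cabinet", observers=[bob])

print(predict_search(3, alice, world, "teddy_bear"))   # cabinet   (child answer)
print(predict_search(30, alice, world, "teddy_bear"))  # toy_chest (adult answer)
```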

Eddie can also be digitally switched to adult-like reasoning and make the correct prediction.  The reasoning is accomplished by an automated theorem prover.  An interface takes conversational English in Second Life and turns it into formal logic, which is then processed by the prover.  A video clip of Eddie in action can be viewed here.
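
The article doesn't detail the prover or the English-to-logic interface, but the general shape of such a pipeline can be sketched.  The toy Python example below is a hypothetical stand-in, not the RPI system: it parses a narrow, hard-coded subset of English into logical facts and runs a naive forward-chaining pass over them to derive each agent's beliefs:

```python
# Toy sketch of an English-to-logic pipeline (hypothetical, not the RPI code).
# Real systems use a proper parser and an automated theorem prover; this only
# illustrates the translate-then-infer structure the article describes.
import re

def english_to_logic(sentence):
    """Translate a narrow subset of English into (predicate, args) facts."""
    m = re.match(r"(\w+) put the (\w+) in the (\w+)", sentence)
    if m:
        return ("placed", m.group(1), m.group(2), m.group(3))
    m = re.match(r"(\w+) saw the (\w+) move to the (\w+)", sentence)
    if m:
        return ("observed_move", m.group(1), m.group(2), m.group(3))
    raise ValueError(f"cannot parse: {sentence}")

def forward_chain(facts):
    """Derive believes(agent, object) -> place from placement/observation facts."""
    beliefs = {}
    for pred, *args in facts:
        if pred == "placed":           # placing an object implies believing it is there
            agent, obj, place = args
            beliefs[(agent, obj)] = place
        elif pred == "observed_move":  # seeing a move updates the belief
            agent, obj, place = args
            beliefs[(agent, obj)] = place
    return beliefs

facts = [
    english_to_logic("Alice put the bear in the chest"),
    english_to_logic("Bob saw the bear move to the cabinet"),
]
print(forward_chain(facts))
# {('Alice', 'bear'): 'chest', ('Bob', 'bear'): 'cabinet'}
```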

The RPI research is sponsored by IBM.  The RPI team's ultimate goal is to place humans in a Star Trek-like holodeck filled with projected virtual characters exhibiting human-like behavior.  The researchers say that, in theory, they could accomplish such a simulation by leveraging the processing power of RPI's Computational Center for Nanotechnology Innovations (CCNI) and the Experimental Media and Performing Arts Center (EMPAC).

With over 100 teraflops of computing power, the CCNI is the most powerful university supercomputer in the world.  It is composed of massively parallel Blue Gene supercomputers, POWER-based Linux clusters, and AMD Opteron processor-based clusters.  And soon, if the RPI team's success continues, it may be thinking just like a human.



Comments

Inadvertent Leading...
By Xodus Maximus on 3/17/2008 12:54:21 AM , Rating: 5
I looked at the videos and skimmed the docs, and I have to cry foul at the conclusions being drawn.

Does the AI have its own concept of what a briefcase is -- heck, of what bounds are given to an object, or what it means to move an object? I very much doubt that.

Instead the "AI" was programed with the concepts to be used in their tests, through the eyes and minds of the programmers. Sadly leading to the results stated because of the methods in which the views were presented, not because of its own accord and reasoning capability.

Such research with overdrawn conclusions angers me to no end, because although it may serve as a way to get a predictable semi-autonomous response, IT IS NOT AI -- it is the same "canned response" garbage we always seem to get from typical media coverage of the subject of AI...




RE: Inadvertent Leading...
By xsilver on 3/17/2008 1:01:58 AM , Rating: 5
exactly, the real test comes when it is asked an open-ended question it is not programmed to respond to.
does it say:
1) does not compute?
2) self destruct?
3) kill all humans?


RE: Inadvertent Leading...
By BladeVenom on 3/17/2008 5:42:35 AM , Rating: 5
I agree. They are still nothing more than calculators. The programming has become a lot more complex, and they calculate a lot faster, but they still don't have any human-like intelligence.


RE: Inadvertent Leading...
By daftrok on 3/17/2008 4:17:58 PM , Rating: 4
Or as the media would spin those responses:
1) Does it feel confused?
2) Suicidal?
3) Homicidal?


RE: Inadvertent Leading...
By captchaos2 on 3/17/2008 1:14:36 AM , Rating: 3
Agreed. Until someone writes a program that can learn from mistakes and adjust its own behaviors, form new ideas/concepts, and develop its own personality based on its own experiences, I wouldn't call it an AI.


RE: Inadvertent Leading...
By smitty3268 on 3/17/2008 3:47:38 AM , Rating: 2
While I agree this isn't AI like the article is making it out to be, there have been gaming AIs that have learned from their mistakes for a long, long time now. Really, the AI you are talking about is just the next level up, one that is able to respond to many different situations rather than the specific ones they're able to handle now. But that could be quite a ways in the future - progress in AI has always come much more slowly than the researchers working on it expected, and they've started to appreciate exactly how complex the human mind really is.


RE: Inadvertent Leading...
By omnicronx on 3/17/08, Rating: 0
RE: Inadvertent Leading...
By omnicronx on 3/17/2008 8:46:28 AM , Rating: 2
Furthermore, they have been able to change the thought process to act more like a child, or more like an adult. The robot will end up with a different, independent result that depends on the current 'age setting' and cannot merely be an a, b, or c answer. The robot adapts; that's a big step towards A.I. in my mind.


RE: Inadvertent Leading...
By Xodus Maximus on 3/17/2008 10:24:14 AM , Rating: 5
quote:
It takes different variables into account, before it makes a decision


So does every other computer program out there (they take variables and 'reason' the way the programmer wants them to, to achieve the result initially desired); my point is that the way it comes to the decision is fixed -- more appropriately, scripted, if you will.

They did nothing more than reinvent a method of scripting with a custom interpreter. The age factor is nothing more than one of those fixed variables in a fixed algorithm, sorry.

Let me explain what I mean better: in their example, you could "ask the question" a million times and get the same response; tweaking the age factor changes the response, because the researchers designed it so.
What should happen is that after a few times it gives the opposite answer -- a child would do so automatically, and an adult would ask "why do you ask again, was the first answer not correct?"
My point is that with their system you can program it to behave this way, but given the first program, the second part should come naturally if it were a true A.I. modeled on human thought.
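
To make the objection concrete, here is a toy sketch (hypothetical, not anything from the RPI system) of what a purely scripted responder looks like -- a fixed table from (question, age setting) to answer, which by construction returns the same response no matter how many times it is asked:

```python
# Toy illustration of the commenter's point (hypothetical, not the RPI code):
# a scripted system maps (question, age_setting) to a fixed answer, so the
# response never varies and nothing in the system notices the repetition.
def scripted_answer(question, age_setting):
    table = {
        ("where_will_alice_look", "child"): "cabinet",
        ("where_will_alice_look", "adult"): "toy_chest",
    }
    return table[(question, age_setting)]

# Identical output every time, no memory of having been asked before.
for _ in range(3):
    print(scripted_answer("where_will_alice_look", "child"))  # always "cabinet"
```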


