

Imprinting robots bond with study coordinator Dr. Lola Canamero.
Robots have feelings, too -- or at least they will -- pending the completion of a pan-European research project led by a group of British scientists.

The Feelix Growing project aims to design and build a series of robots that can interact with humans on an emotional level, and actually adapt their behavior in response to emotional cues from their human counterparts.

The official goal of the project is to conduct "interdisciplinary investigation of socially situated development ... as a key paradigm towards achieving robots that interact with humans in their everyday environments in a rich, flexible, autonomous, and user-centered way." To achieve this, the 2.3 million-Euro effort has assembled more than two dozen roboticists, developmental psychologists and neuroscientists from six nations.

The robots used in the project are simple designs, including some "off-the-shelf" models. The complexity lies in the software, which will construct artificial neural networks to pick up on human emotions exhibited via facial expressions, voice intonation, gestures and other behaviors.
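As a rough illustration of the idea (not the project's actual software), the sketch below shows how a small neural network could map a vector of multimodal cues -- facial expression, voice intonation, and gesture features -- onto a set of emotional states. The feature names, emotion labels, and network sizes are all assumptions made for the example.

import numpy as np

EMOTIONS = ["anger", "happiness", "loneliness", "neutral"]   # illustrative labels only

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

class TinyEmotionNet:
    """One hidden layer mapping a multimodal feature vector to emotion probabilities."""
    def __init__(self, n_features=6, n_hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.5, (n_features, n_hidden))
        self.w2 = rng.normal(0.0, 0.5, (n_hidden, len(EMOTIONS)))

    def predict(self, features):
        hidden = np.tanh(features @ self.w1)
        return softmax(hidden @ self.w2)

# Hypothetical cue vector: [smile, brow_furrow, voice_pitch, voice_energy, gesture_speed, proximity]
cues = np.array([0.1, 0.8, 0.7, 0.9, 0.6, 0.3])
print(dict(zip(EMOTIONS, TinyEmotionNet().predict(cues).round(3))))

With untrained, random weights the output is of course arbitrary; in practice such a network would be trained on labeled examples of each cue pattern.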

The leader of the European Commission-funded project, Dr. Lola Canamero, is a senior lecturer in the School of Computer Science at the University of Hertfordshire, England. She also serves as the principal investigator and coordinator of the university's Adaptive Systems Research Group, which focuses on socially intelligent agents, emotion modeling, developmental robotics, human-robot and human-computer interaction, embodied artificial intelligence and robotics, sensor evolution, and artificial life.

In a recent interview with the BBC, Canamero said the robots will be designed to detect basic human emotional states such as anger, happiness, and loneliness. Once a state is detected, the robots will be programmed to perform a support role, seeking to soothe an angry human or cheer up a lonely or depressed one.

One of the first fruits of the Feelix Growing effort has been the production of a robot capable of "imprinting" behavior. The behavior is similar to the way many baby animals develop an instinctual attachment to the first adult they see at the time of birth, usually the mother. The imprinting robot prototype learns to recognize a particular human and follow that individual around, gradually adapting to the human's actions and emotional state, then interacting accordingly, Canamero said.
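In spirit, the imprinting behavior can be thought of as the loop sketched below: remember a representation of the first person seen, then keep following whichever detection best matches that memory. This is only a toy illustration with made-up stand-ins (random "face embedding" vectors), not the prototype's actual code.

import numpy as np

rng = np.random.default_rng(1)
CAREGIVER = rng.normal(0.0, 1.0, 8)        # toy "true" identity of the caregiver

def detect_people():
    # Stand-in for a perception system: a noisy view of the caregiver plus two strangers.
    return [CAREGIVER + rng.normal(0.0, 0.05, 8),
            rng.normal(0.0, 1.0, 8),
            rng.normal(0.0, 1.0, 8)]

def similarity(a, b):
    # Cosine similarity between two "face embedding" vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

imprinted = None
for step in range(5):
    people = detect_people()
    if imprinted is None:
        imprinted = people[0]              # imprint on the first person seen
        continue
    # Afterwards, follow whichever detection best matches the imprinted person.
    target = max(people, key=lambda p: similarity(p, imprinted))
    print("step", step, "following detection with similarity", round(similarity(target, imprinted), 3))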

At the conclusion of the project, the scientists intend to build two robots that will combine aspects of the research being conducted at each of the eight partner sites scattered around Europe.



Comments



Little scenario
By Axbattler on 2/27/2007 12:12:27 PM , Rating: 3
The following scenario just came into my mind: In a decade or two, we'll have our first (legal) case regarding an abused e-bot (emotion-bot). It might not be a case where the robot itself presses charges, but an eyewitness of 'robot bullying' pressing charges over what appears to that person as inrobotic (i.e. inhuman) treatment. They'll get a bunch of computer scientists, psychologists etc. to analyse the robot's 'emotional damage'. At first, most people will laugh it off, but gradually, a group of robot sympathisers will make themselves known. And by the time we are old, we will be telling our grandchildren how 'back in the day, the concept of robot rights was completely absurd - robots were only created to serve us, nothing else' xD;

... but seriously, I think that bots capable of detecting human emotions can have their place for some people, in the same way that some people like to have pets around (companionship). But I wouldn't like parents to start using them as a substitute for spending time with their kids - something I can see happening.

I also don't think it is necessary for robots to have emotions themselves. It would spare us the scenario I described earlier (written humorously, but one I don't find implausible).




RE: Little scenario
By supaflydaddyc on 2/27/2007 12:41:54 PM , Rating: 2
Axbattler,

I don't know when robots will reach the level of functionality that you speak of, but when they do, you're dead-on that there will be activist groups clamoring for robots' rights.

It gives me an uneasy feeling.

I'm only in my early 20's, but who knows what the world will be like when I'm in my 60's. Florida could begin to be submerged under the ocean due to global warming (maybe), robots could roam the Earth, maybe the United States and other Western nations will successfully switch over to some form of fuel cell for which we can produce hydrogen cleanly (crosses fingers), while the rest of the developing world will still be reliant on oil and fossil fuels and the countries of OPEC will be at war with one another.

Let's face it, 40 years is a long time; anything can happen. Heck, five years is a long time. Can you say that you are exactly where you thought you'd be at this point in your life five years ago?


RE: Little scenario
By UserDoesNotExist on 2/27/2007 5:38:31 PM , Rating: 2
I must point out one thing, namely that I believe your comparison of e-bots and pets is mistaken. The difference is that I don't doubt whether a dog or cat has a similar emotional capacity to mine, and I don't think I'm alone on this matter. In case you're wondering why I phrased that so awkwardly, it's because I know there's some philosophy major lurking out there just waiting to prove how smart he is to a bunch of strangers online by posting some smart-ass comment about the nature of existence.

As an aside, the more time I spend training my puppy, the more convinced I am of his sentience as well as mine. No matter what the nature shows say about how the dog is "honing his hunting instinct as a pup" or "developing a submissive bond with his pack master", I know he's just playing with me and likes to have me around. To those who say that he's just a biological machine that has been programmed to the point where he appears to be self-aware, well, I'll be the first to admit that I have no really good, surefire rebuttal. That doesn't automatically make me wrong, especially when you consider that one who propagates such an argument has no good, surefire rebuttal against the "ghost in the machine" argument.

With a robot, I just don't have that same personal assurance as to its emotional capacity, and once again I don't think I'm alone in this matter. This doesn't require that I be truly self-aware, just that I think myself to be self-aware and that I hold views on the sentience of other objects in the universe.

You know what? Nevermind the last three paragraphs. Dogs sentient. Me sentient. Robot not sentient. Me like dogs. Me not like robot. HULK SMASH!


RE: Little scenario
By oTAL on 3/1/2007 9:28:12 AM , Rating: 2
I disagree. As someone who has worked with Aibos, I can tell you that, as idiotic as it may sound, one can easily find oneself attached to a stupid machine like that.
The major impediment to further attachment is that it is just too stupid. It's like the normal attachment people have to fish (I know some people get very attached to their fish, but you know what I mean... the normal "hey... I care about them...").
People with knowledge of biologically inspired AI have built models around emotions. Emotions and feelings evolved in nature as a tool to give positive and negative feedback on the learning process. The same tool is useful in AI models, allowing a large variety of behaviors to emerge. Feelings can be modeled as a large set of variables that affect the subject's current "emotional state", and that can be copied and adapted to artificial models... Right now, one of the largest differences between a machine's software and an animal's brain is that the machine's software is highly serialized and the animal's brain is highly parallelized. This higher parallelism allows for a much larger number of possible states and behaviors, and a greater unpredictability. Still, it can be simulated and is, step by step, being adapted.
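To make that concrete, here is a minimal sketch of the scheme, using a toy model I made up for illustration: a handful of internal "emotion" variables that get pushed around by good and bad events, decay toward a neutral baseline, and together produce the reward signal a learning rule could use.

import random

class EmotionalState:
    def __init__(self):
        # A few internal variables standing in for the many an animal would have.
        self.vars = {"comfort": 0.5, "frustration": 0.0, "curiosity": 0.5}

    def update(self, event_valence):
        # Positive events raise comfort and lower frustration; negative events do the opposite.
        self.vars["comfort"] = min(1.0, max(0.0, self.vars["comfort"] + 0.1 * event_valence))
        self.vars["frustration"] = min(1.0, max(0.0, self.vars["frustration"] - 0.1 * event_valence))
        # Every variable also decays slowly toward a neutral baseline.
        for name in self.vars:
            self.vars[name] += 0.05 * (0.5 - self.vars[name])

    def reward(self):
        # The learning signal is derived from the agent's own emotional state.
        return self.vars["comfort"] - self.vars["frustration"]

state = EmotionalState()
for step in range(5):
    event = random.choice([-1.0, 1.0])     # stand-in for a bad or good interaction
    state.update(event)
    print(step, round(state.reward(), 3), {k: round(v, 2) for k, v in state.vars.items()})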

