



Swarm-bots spontaneously evolved ways to work together and communicate.
Experiments show that speeding up the evolutionary process can create a new breed of robot that can function and communicate as a team -- without human interaction

Imagine swarms of small robots, working together autonomously, forming ad hoc groups to accomplish tasks too big or complex for a single bot. That's precisely what some Swiss scientists recently achieved after programming a group of robots to mimic the evolutionary process found in biological colonies of insects such as ants or bees.

Roboticists at the Swiss Federal Institute of Technology in Lausanne collaborated with biologists from the nearby University of Lausanne to demonstrate that robots can spontaneously evolve ways to communicate and interact to accomplish a joint goal. In the demonstration, a group of bots were programmed with an attraction to objects they identified as "food" and an aversion to objects designated as "poison." The objects were clearly visible to the robots from a distance of several meters, but could not be identified until the robot approached within inches of the object.

The robots, which were equipped with colored lights for signaling each other, were programmed with random sets of behaviors, dubbed "genomes" by the scientists. The genomes defined how each robot would process sensory information, and how it would move and operate its flashing lights. The robots were then subjected to a process simulating evolution, in which the genomes of successful robots were recombined and replicated, while genomes of robots that did not perform well were phased out.
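In outline, the process works like a classic genetic algorithm. The sketch below is purely illustrative -- the genome length, population size, and selection scheme are assumptions for demonstration, not details taken from the project:

    import random

    GENOME_LENGTH = 64      # bits encoding sensor processing, movement, light use
    POPULATION = 20

    def random_genome():
        return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

    def evolve(fitness, generations=500):
        robots = [random_genome() for _ in range(POPULATION)]
        for _ in range(generations):
            robots.sort(key=fitness, reverse=True)    # rank by foraging success
            survivors = robots[:POPULATION // 2]      # phase out the weakest genomes
            children = []
            while len(survivors) + len(children) < POPULATION:
                a, b = random.sample(survivors, 2)    # recombine two successful genomes
                cut = random.randrange(1, GENOME_LENGTH)
                children.append(a[:cut] + b[cut:])
            robots = survivors + children
        return robots[0]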

After 500 generations of the synthesized "natural selection" process, the robots began to exhibit swarm behaviors, such as alerting each other when they located food or poison. By changing the parameters for success -- giving a lower priority to accomplishing group tasks, for example -- the scientists also observed mutations that included misleading or antisocial behaviors, such as intentionally luring other robots away from food.

The scientists have postulated that the methods they are developing to evolve robot behaviors will be simpler and less time-consuming than programming a bot's every move, eventually producing more sophisticated behaviors than are currently possible through traditional programming methods. The robots used in the project were initially developed for the European Commission-funded Swarm-Bots Project, which conducted experiments involving small robots that worked together to move large objects and navigate difficult terrain.



Comments



terminator isnt that far away....
By TSS on 2/28/2007 3:43:35 AM , Rating: 3
What surprises me more than anything isn't that they managed to pull it off, but how soon they did it.

If they continue down this path, and with CPUs getting more and more powerful, I firmly believe it won't take another 30 years...




RE: terminator isnt that far away....
By Hypernova on 2/28/2007 4:17:16 AM , Rating: 3
quote:
the scientists also observed mutations that included misleading or antisocial behaviors, such as intentionally luring other robots away from food.


This concerns me a great deal. This was a very low-level experiment and already we are seeing such behaviours. To be honest it came as a complete surprise, as I was expecting just co-operation between them. Future full-scale deployment will need some serious fail-safes to prevent such "ghosts in the machine" from haunting us.


RE: terminator isnt that far away....
By Hypernova on 2/28/2007 4:28:27 AM , Rating: 2
Just to add: is anyone surprised that we managed to breed evil in such a short time (500 generations)? Perhaps nature is inherently evil.


RE: terminator isnt that far away....
By KristopherKubicki (blog) on 2/28/2007 4:32:43 AM , Rating: 3
Evil? Maybe just brutal.


RE: terminator isnt that far away....
By smaddox on 2/28/2007 2:02:50 PM , Rating: 3
Exactly.

The rogue robots are a side effect of badly defined rewards. The evolution process is carried out by ranking each robot's success: the higher a robot's success ranking, the more likely its genome is to be carried into the next generation.

When you give a robot success "points" for getting the most food, it leads to a competitive robot (and this is most likely exactly what happened). Had the researchers scored the overall success of the colony, rather than the individual alone, they would be much less likely to have rogues.
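As a toy illustration of the difference (the attribute names are made up for the example, not the researchers' actual scoring):

    def individual_fitness(robot, colony):
        # Rewarding personal hauls makes luring rivals away from food profitable
        return robot.food_found - robot.poison_hits

    def colony_fitness(robot, colony):
        # Scoring everyone on the colony total removes the payoff for rogues
        return sum(r.food_found - r.poison_hits for r in colony)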


RE: terminator isnt that far away....
By isaacmacdonald on 2/28/2007 10:52:09 PM , Rating: 3
But then it wouldn't be mimicking natural selection. Most evolutionary biologists agree that the unit of selection for natural organisms is the gene. Group selection (essentially what you are suggesting) is not something that is achievable in nature.

This is terrific work. I'm curious about one thing though, which is how the "good" robots retaliated against the anti-social robots. Obviously, if they failed to evolve some sort of retaliation (such as refusing to signal robots that had proven to be unreliable in the past) then the anti-social robots would overrun the "good" robot population within a hundred or so generations.


RE: terminator isnt that far away....
By masher2 (blog) on 3/1/2007 8:44:30 AM , Rating: 3
> "Group selection...is not something that is achievable in nature."

This is incorrect. Group selection does exist within nature, as any entomologist will tell you. Many studies of colony insects demonstrate the conclusion. The only criterion required is that the survival and/or reproduction of the individual is correlated, at least loosely, with the success of the group.


RE: terminator isnt that far away....
By isaacmacdonald on 3/1/2007 9:04:28 AM , Rating: 2
Actually, no. What you're referring to is a very interesting case of kin-selection. A quick survey of bees and ants will demonstrate that the root of all their magnificent cooperation is genetic relatedness -- this is not group selection.

Group selection is when genes for behaviors that benefit the group but NOT the individual are thought to be a valid basis for selection (note: when we say it doesn't benefit the individual, we mean in the absence of similar genes within the population).


RE: terminator isnt that far away....
By masher2 (blog) on 3/1/2007 12:03:03 PM , Rating: 1
Untrue. Group selection exists. The only dispute among biologists is over how much of a factor it is. Fifty years ago, we thought group selection was of minimal importance... but in recent years, that view has changed.

Here's a link to a research paper on group selection:

http://www.bbsonline.org/Preprints/OldArchive/bbs....


RE: terminator isnt that far away....
By isaacmacdonald on 3/1/2007 3:15:40 PM , Rating: 2
The link doesn't work (although it is the 2nd hit when you google group selection).

Regardless, while expertise may be required to gather data, it's not required to understand the fundamental objection to group selection. For natural selection to be an effective mechanism for evolution, the unit of selection must be quite stable (though not absolutely so). This is precisely why sexual organisms can't be units of selection -- reproduction involving a genetic crap shoot introduces significant instability. So for groups to be units of selection, they must be stable. Then it must be the case that behaviors that "benefit the group" but not the individual are inherently stable WITHIN the group.

Suppose then, that we have such a group, where individuals make some small personal sacrifice for the benefit of the group. Then what would happen if a mutation created an individual within the group that benefited from the altruism of others, but did not make any sacrifice? Their genetic fitness would be greater than that of their altruistic counterparts, and the trait would be selected for, quickly overtaking the altruistic trait, and thus destroying the stability of the behavior within the group.

Now there is a way around this, which is coercion. If group members are able to retaliate against "cheaters" (those who benefit from altruism but don't contribute), then it's possible for the altruism to once again provide advantage and can become stable. But then individual coercion is the keystone to widespread altruism (as I think it is in reality), which implies that the selection is again centered on genes rather than groups. In fact, we see this demonstrated in nature. The extent to which coercion can be exercised often dictates the extent to which non-kin based cooperation takes place. There's a very good argument to be made that this, in fact, accounts for the spectacular success of human cooperation (that is, the unique ability to cheaply coerce within groups).
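You can check the cheater takeover numerically with a standard replicator update. The numbers below are purely illustrative -- altruists pay a cost c while everyone shares the benefit b:

    b, c = 0.5, 0.1   # benefit provided by altruists, cost they pay (illustrative)
    p = 0.99          # population starts almost entirely altruistic

    for generation in range(200):
        shared = b * p                  # everyone receives the altruists' benefit
        w_altruist = 1 + shared - c     # altruists also pay the cost
        w_cheater = 1 + shared          # cheaters collect the benefit for free
        mean_w = p * w_altruist + (1 - p) * w_cheater
        p = p * w_altruist / mean_w     # standard replicator update

    print(f"altruists after 200 generations: {p:.4f}")   # ~0.0000 -- cheaters win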


RE: terminator isnt that far away....
By masher2 (blog) on 3/1/2007 5:25:07 PM , Rating: 1
All of what you say is contradicted, not only by the research and opinions of actual evolutionary biologists, but by nature itself. Look at an ant colony for instance. You'll see millions of individuals, most of whom are making individual sacrifices for the greater good. A sterile worker ant dying to defend the colony isn't receiving any personal benefit from the act...far from it. Nor is spending your entire life feeding the queen conducive to your own personal benefit.

By your scenario, a mutation would result in an ant that didn't defend the colony and/or feed the queen, and its greater genetic fitness would destroy the group. This doesn't happen...and the reasons why are deeply embedded in mathematical biology.

Group selection can and does exist. The selection unit is still the gene...but the selected trait affects the group...sometimes to the detriment of the individual.


By isaacmacdonald on 3/1/2007 9:04:38 PM , Rating: 2
quote:
By your scenario, a mutation would result in an ant that didn't defend the colony and/or feed the queen, and its greater genetic fitness would destroy the group. This doesn't happen...and the reasons why are deeply embedded in mathematical biology.


Uhh, no. Hamilton explained hymenopteran behavior in terms of kin-selection more than 40 years ago (here's a brief explanation: http://brembs.net/hamilton/).
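For anyone unfamiliar with it, Hamilton's rule says a gene for altruism can spread whenever r*b > c, where r is the relatedness between actor and recipient, b is the benefit to the recipient, and c is the cost to the actor. Thanks to haplodiploidy, hymenopteran workers share roughly r = 3/4 with their full sisters, which is why sterile workers serving the queen is entirely consistent with gene-level selection.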

quote:
Group selection can and does exist. The selection unit is still the gene...but the selected trait affects the group...sometimes to the detriment of the individual.


This is oxymoronic. To say that group selection exists is to say that the group is a unit of selection. If you concede that the gene is the exclusive unit of selection (for the organisms in question), then you affirm that group selection does not exist.


RE: terminator isnt that far away....
By vhx on 2/28/2007 4:31:36 AM , Rating: 3
quote:
This concerns me a great deal. This was a very low-level experiment and already we are seeing such behaviours.

Same here. I'd be more alarmed if one tried to lure another to the poison. However, with 'anti-social' behavior occurring even in primitive robots, perhaps something like Skynet won't be sci-fi for very long. A Skynet scenario isn't too far-fetched, as humans are confident about control, and that will eventually backfire. Hopefully they will program some fail-safes into future robots. If I were someone who had control over it, I would make it mandatory.


RE: terminator isnt that far away....
By Hare on 2/28/2007 4:43:31 AM , Rating: 2
Exactly how can mutations happen with copied code unless they were planned? This was not real AI; it was a simulation with predetermined guidelines.

There's no way one of those robots started thinking, "Hey, I'll play a practical joke and guide the other robots away from the food." It was designed to do that if certain conditions were met.


RE: terminator isnt that far away....
By DublinGunner on 2/28/2007 5:12:22 AM , Rating: 2
Forgive me, but unless you were actually part of the project, I think it's a little presumptuous of you to state to what level the robots were programmed.

quote:
This was not real AI; it was a simulation with predetermined guidelines.


From what I read, the AI they used for the robots to communicate and 'learn' from one another was fairly advanced. Although merely copying code would allow them to learn in a very basic fashion, that has been done many times before and would hardly constitute a worthwhile experiment.

I'm pretty certain the level of AI they used ran a little deeper than what you are presuming.


RE: terminator isnt that far away....
By Hare on 2/28/2007 5:42:02 AM , Rating: 1
I was not part of the project, but I've studied their website. Anyway, I chose my words poorly. What I meant to say was that the robots had a "relatively simple" AI compared to what most people seem to think.

Mutations cannot happen unless mutations were planned to happen randomly. Thus there's no way one robot could simply start behaving in a certain way (isolate itself from the rest of the group, etc.) unless it was actually planned to do that when certain conditions were met, as I said above. There's no way a robot would be self-conscious and lure other robots away with a motive of its own. That would require real intelligence, not artificial intelligence.


RE: terminator isnt that far away....
By TSS on 2/28/2007 7:31:36 AM , Rating: 4
As far as I can tell from the news article, they basically told them (comparing to a human) "OK, this is your arm, this is how you can move it; that's your other arm, this is how you can move it," then let them move randomly. Eventually one will clap its hands together, and will learn that contact makes a clap. The researchers then kill off all the bots that didn't learn how to clap and reproduce those that did. Repeat that 500 times, and something pretty advanced can be created.

The bad behaviors were caused by reprioritizing different actions. Basically, they told the bots "OK, now survival of the group is less important than survival of the individual." Eventually, through trial and error, a bot learns that luring another bot away from food leaves more for it to consume = a higher chance of survival (it learns by doing it wrong a lot of times and stumbling on the answer).

It's basic logic, really. All the bot did was randomly use the "food found" light at a spot where there was no food; the sensors showed that the other bot moved to that spot while the food remained at the original location, leaving more for the deceiving bot.

The only thing you should be worried about is these things becoming smarter than humans. That bot basically created a diversion, the same thing we did to kill a mammoth back when we lived in caves (create a diversion to move the target to a location you chose in advance), only we can evolve bots a lot faster than we humans evolved.

I'll be honest and say I didn't read the source, just the article, but logically I think that's the reasoning they would use. After all, once a bot has learned something it will never forget, provided it isn't one of the bots flagged for extinction.

A final thought: they might be more complex than you think. After all, it has been demonstrated that when survival is the number-one priority, in no time at all a bot will do whatever it takes and increase in intelligence just to survive -- which is nothing more and nothing less than what humans did to evolve to where we are now.


By isaacmacdonald on 2/28/2007 11:18:21 PM , Rating: 2
quote:
The bad behaviors were caused by reprioritizing different actions. Basically, they told the bots "OK, now survival of the group is less important than survival of the individual."


This is incorrect. The scientists were interested in mimicking nature; as the article indicates, they never would have made "survival of the group" a priority (though many people mistakenly believe this is a viable criterion for selection). The only reason there was cooperation at all was simulated genetic relatedness -- a central product of kin-selection, where benefits accrued to relatives are counted as benefits to the individual, multiplied by some factor of relatedness (e.g. 1/2 for direct offspring).

The correct explanation is that these anti-social bots were essentially taking advantage of non-related bots. The behavior was seen repeatedly (i.e. not just a random mutation) because deception is a good niche strategy in groups (this also explains its prevalence in human societies).

quote:
only we can evolve bots a lot faster than we humans evolved.


This is a good point. It's precisely why mimicking evolution is so powerful.


By masher2 (blog) on 2/28/2007 10:58:18 AM , Rating: 3
> "Exactly how can mutations happen with copied code unless it was planned? "

I imagine they're using a genetic algorithm, in which behavior is encoded as a string of bits. Every generation, those bits are "bred" like DNA strands, mixing part of the strand from each parent, weighted by a factor that simulates survival rate. There is usually a random mutation factor added as well, where a very few bits are randomly flipped each generation.
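Something like this, to give the idea (a generic GA sketch, not the researchers' actual code):

    import random

    MUTATION_RATE = 0.01   # chance of flipping each bit per generation

    def crossover(a, b):
        # Mix part of the bit strand from each parent, as with DNA
        cut = random.randrange(len(a))
        return a[:cut] + b[cut:]

    def mutate(genome):
        # Randomly flip a very few bits, simulating mutation
        return [bit ^ 1 if random.random() < MUTATION_RATE else bit
                for bit in genome]

    parent_a = [random.randint(0, 1) for _ in range(64)]
    parent_b = [random.randint(0, 1) for _ in range(64)]
    child = mutate(crossover(parent_a, parent_b))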

> "There's no way one of those robots started thinking that "hey, I'll make a practical joke and guide other robots away from the food"..."

To correct you, it wasn't a "joke"; it was an evolved survival mechanism. And actually, this is a pretty common result with a genetic algorithm and has been documented many times before. The researchers didn't really uncover anything new... GAs are very good at maximizing survival functions.


By Seemonkeyscanfly on 2/28/2007 12:55:47 PM , Rating: 1
Hey man, when Number 5 came alive it was not planned. So the question is: which one of these little guys will be the first Johnny 5?


By TheGee on 2/28/2007 5:40:26 AM , Rating: 2
This swarm behavior and allocation of resources is just like the Borg, but three centuries (guess?) early!!

Beware of what you see in your kid's Happy Meals!! That would be a great way to replicate and enlarge the 'swarm'.

Implants, anyone? Be careful with prosthetics that communicate as well!

There will be good and there will be bad, but mainly there will be average (and maybe Pareto).

Nature is nature, however we reproduce it... it's all logical, but it runs on principles not all of which we have discovered yet (not even by Star Trek writers... although they've done a pretty good job so far!).

PS I'm typing this under my bed!! Sic.


By AlexWade on 2/28/2007 8:10:11 AM , Rating: 2
Read the Michael Crichton book Prey. It takes a very interesting look at this sort of thing.


You know someone had to say it, why not me?
By Toddzio on 2/28/2007 3:59:27 AM , Rating: 3
Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th.




RE: You know someone had to say it, why not me?
By staypuff69 on 2/28/2007 4:40:20 AM , Rating: 2
As long as we have control of the electricity and batteries, we should be okay.

Regardless, I'm building my shelter and a "friendly" Terminator just in case... hey, has anyone contacted Arnold yet????


By TSS on 2/28/2007 4:51:54 AM , Rating: 2
I believe Arnold isn't able to participate.

Case of red-eye :P


RE: You know someone had to say it, why not me?
By CKDragon on 2/28/2007 7:17:13 AM , Rating: 3
quote:
As long as we have control of the electricity and batteries, we should be okay.


YOU FOOL! Don't you know what happens when you take away their energy source?

Have fun in your pod farm.


RE: You know someone had to say it, why not me?
By nurbsenvi on 2/28/2007 7:51:08 AM , Rating: 2
We can give them SONY batteries...
and don't worry, we have Oracle.


By wetwareinterface on 2/28/2007 9:24:44 AM , Rating: 1
Well, I'd also suggest giving them an OS written by Apple.

That way they could be smug and not worry about the impending virus the humans are working on...


By Seemonkeyscanfly on 2/28/2007 1:02:02 PM , Rating: 1
Yeah, but an OS written by the penguin will have the same effect and cost much less.




How could robots possibly ever take us over
By archcommus on 2/28/2007 2:01:38 PM , Rating: 2
I don't understand all the people fearing that robots will one day really become too smart, like in the movies. Two things mainly prevent this from ever happening. First, no matter how intelligent robots may ever seem, it is and will always be artificial intelligence. It is the work of SMARTER beings, us. We programmed them to evolve in the way that they did. So of course we can always program 100% fail-safes: actions that robots will always perform, or never perform, REGARDLESS of what they have learned. The robots will not say "Hey, I have a better chance of surviving if I kill these humans, so let's do it!" because they will be programmed to never override those default actions and to always put humans at a higher priority. True, in I, Robot they got around this by saying the robots killed humans because it was better for US in the long run, so they were really still looking out for the human race by lessening the population and getting rid of wasteful people, but once again, a simple default action such as "never harm a human" can take care of that right away.

The second reason is that, obviously, everything needs energy. Humans need sleep, water, and food to operate. Likewise, robots need energy, either wall power or batteries. That will never change. If a robot attacks you, all you need is one opportunity to remove its energy source and everything is over.




By Seemonkeyscanfly on 2/28/2007 2:39:12 PM , Rating: 1
You're thinking like a rational, normal person. That's not the problem. It's the mad doctor I'm worried about, the one who makes these little robots four feet high, able to walk up and down stairs and open doors, with laser cannons on their heads, programmed to kill XYZ type of humans. Oh, and by the way: nuclear-waste battery, good for 99 years before it's time to recharge. So while you're looking for its battery, he's ripping your main pump right out of your chest.


By Sasuke on 2/28/2007 8:34:46 PM , Rating: 3
Problem is, with a "never harm a human" command you could never use robot surgeons, as they would refuse to cut a person and would just have to stand and watch him die, which could for them go against the don't-harm program. And if that program is linked to their sense of survival, the robot will reason that if it needs to harm a human to save one, it can -- a long, slippery slope to our deaths. Oh, but the robots that will kill us all will be the ones made by the military to kill people.


By isaacmacdonald on 2/28/2007 11:05:45 PM , Rating: 2
quote:
First, no matter how intelligent robots may ever seem, it is and will always be artificial intelligence. It is the work of SMARTER beings, us. We programmed them to evolve in the way that they did.


If we were just talking about faux-intelligent, complex algorithms, you'd be spot on, but we're talking about evolution here. 100% fail-safes and so forth are difficult if not impossible to achieve once you've evolved a sufficiently complex intelligence. Also, supposing we can successfully evolve marginally intelligent general AI, there's no compelling reason to believe it wouldn't surpass human intelligence.

This field is particularly interesting to me. When it comes to general AI, I think that harnessing evolutionary forces is a particularly powerful and promising method.


By Hypernova on 2/28/2007 11:22:18 PM , Rating: 2
And a dangerous method for AI: at some point it is inevitable that the AI that emerges from the process becomes so complex that it's a black box, where you just can't be sure that what you are seeing is all there is.

It's time we start thinking about how to hard-wire the Three Laws in a way that can't be compromised.


By CascadingDarkness on 3/2/2007 2:01:30 PM , Rating: 2
This is really foolish reasoning. If you give them the ability to learn, you can't effectively place limits on it. I would assume these robots will likely have some connection to the internet (it's difficult to isolate any significant hardware these days), which basically gives them access to all of mankind's published knowledge.

You're telling me they won't be able to find a way to bypass safeguards? People crack encryption and bypass complex security daily. You think a machine designed to learn and think at enormous speed isn't going to do this even faster?

The only way I think you could stop this is to build only stupid robots that do exactly the XYZ you program them to do -- basically, ones that don't have the ability to learn.

I mean, think: say you build in some Three Laws CMOS chip. The robot finds a way to fry the chip without damaging its other hardware. I thought of that in three seconds; I'm sure Mr. Robot could come up with 50 possibilities in the same amount of time.

True, robots would initially only be as smart as us. But you forget: with enough processing power, memory, and storage, a robot can be as smart as all mankind combined. How long before they discover things we haven't? Also, the team designing the safeguards has limited knowledge (unless you assemble a team that knows everything and works together perfectly), so there will be flaws in the safeguards -- we're only human. Guess what: Mr. Robot knows what the best hackers in the world know.

I'm not pointing all this out because I think robots will take over the world. I just think it's a possibility, albeit a slim one, and that rogue robots are much more likely than everyone thinks. This doesn't even touch on what robot programming knowledge could do in the wrong hands.


Software Emulation Anyone?
By Xenoterranos on 2/28/2007 2:19:45 PM , Rating: 1
I don't get why they had to actually build the robots. I mean, you could just as easily run the software through thousands of simulations at super speed rather than put it in actual robots and take up real time. It seems easier to let supercomputers handle it, and once the software has evolved through ~10,000 iterations, put it in some robots and see what it does in real life (which should be exactly what it was doing in the simulations).




By Missing Ghost on 2/28/2007 3:31:27 PM , Rating: 2
Where's the fun in that?


By isaacmacdonald on 2/28/2007 11:46:17 PM , Rating: 2
Virtual implementation of this has certainly been done before by ambitious grad students (I know they've been modeling bee colonies at my university for a number of years). There are lots of advantages to this, the most obvious of which is that it's relatively trivial to allow structural evolution in a virtual environment, whereas it is prohibitively difficult to do so in a physical environment.


RE: Software Emulation Anyone?
By scrapsma54 on 3/1/2007 8:35:31 PM , Rating: 2
And wait -- the supercomputer will mislead the operators into thinking it has just accessed a nuclear silo's computer.


Clever
By mark2ft on 2/28/2007 1:43:00 PM , Rating: 3
I think the scientists really did something smart here. They were able to off-load a huge amount of programming work onto the robots themselves. This is ingenious! Although extremely primitive, this kind of approach makes sense because it opens the door to self-automation for just about any new task.

They should apply this principle to computer AI in video games. Just imagine! Companies would be able to rely less on hundreds of thousands of beta testers. (You could speed up the game time 100x and have several hundred computers go at it.) Such testing would, if done right, quickly reveal defects in the AI. No more waiting until a game hits version 1.1 or 1.5 to balance out the variables (RTS games would benefit hugely from this).




RE: Clever
By Seemonkeyscanfly on 2/28/2007 2:25:52 PM , Rating: 1
AI computers (software) do not program themselves. They learn for themselves, and then remember what they learned so they know the correct way to handle an issue the next time the same or a similar issue comes up.
AI software is nothing really new. The best security servers (for preventing hackers and such) were written with AI software over nine years ago... no one has hacked one to this day. The only real difference between those AI servers and the robots is that a server cannot physically move around on its own.
Still very cool stuff. Interesting idea about letting AI beta-test the gaming software.


RE: Clever
By isaacmacdonald on 2/28/2007 11:24:24 PM , Rating: 2
They have done this. I read somewhere that genetic algorithms were used to come up with very fast, good approximate solutions to Traveling Salesman problems. Of course, it's important to realize what can and can't be achieved with this. You wouldn't want to try to generate photo-editing software this way.
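The TSP flavor gives a feel for how flexible the encoding is: each genome is a tour (a permutation of cities) and total distance is the fitness. A bare-bones sketch with illustrative parameters, not from any particular paper:

    import math, random

    cities = [(random.random(), random.random()) for _ in range(30)]

    def tour_length(tour):
        # Round-trip distance; tour[i - 1] wraps to the last city at i = 0
        return sum(math.dist(cities[tour[i]], cities[tour[i - 1]])
                   for i in range(len(tour)))

    def mutate(tour):
        # Swap two cities -- permutations need order-safe operators, not bit flips
        t = list(tour)
        i, j = random.sample(range(len(t)), 2)
        t[i], t[j] = t[j], t[i]
        return t

    population = [random.sample(range(30), 30) for _ in range(100)]
    for _ in range(500):
        population.sort(key=tour_length)    # shorter tour = fitter
        elite = population[:50]
        population = elite + [mutate(random.choice(elite)) for _ in range(50)]

    print(min(map(tour_length, population)))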


Hmm...
By cessation on 2/28/2007 2:25:59 PM , Rating: 2
I guess I should start making shells...




We have the answer to life now
By alley on 2/28/2007 3:27:09 PM , Rating: 2
Something has to be created/designed before it can evolve. Even artificial life can adapt to its surroundings, but for it to adapt it must first be created.




By The Boston Dangler on 2/28/2007 9:07:45 PM , Rating: 2
"the scientists also observed mutations that included misleading or antisocial behaviors, such as intentionally luring other robots away from food."

Bender, are you jacking on in there?




"Young lady, in this house we obey the laws of thermodynamics!" -- Homer Simpson










