


System could be useful for mission-critical applications, such as combat robotics

Professor Peter Bentley of University College London and his colleague Christos Sakellariou aren't impressed with everyday computers, which aren't very fault tolerant and can only multitask by rapidly switching their cores between the sequential instruction streams of different programs.

He explains in an interview with New Scientist, "Even when it feels like your computer is running all your software at the same time, it is just pretending to do that, flicking its attention very quickly between each program.  Nature isn't like that.  Its processes are distributed, decentralised and probabilistic. And they are fault tolerant, able to heal themselves. A computer should be able to do that."

So the pair set out to build new hardware and a new operating system capable of handling tasks differently from most current machines, which, even when "parallel," still deal with instructions sequentially.

The new machine pairs instructions with the data they apply to, specifying what to do when a certain set of data is encountered.  These instruction-data pairs are then sent to multiple "systems," which are chosen at random to produce results.  Each system carries its own redundant copy of the instructions, so if one gets corrupted, others can finish the work.  And each system has its own memory and storage, so crashes due to shared memory or storage errors are avoided.

Comments Prof. Bentley, "The pool of systems interact in parallel, and randomly, and the result of a computation simply emerges from those interactions."
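To make the idea more concrete, here is a minimal Python sketch of that kind of pool. It is a toy model built on our own assumptions (the class and function names are invented), not the researchers' actual hardware or operating system: each "system" holds a private copy of the instruction-data rules plus its own memory, inputs go to a randomly chosen system, and redundant copies step in when one is corrupted.

```python
import random

# Toy sketch (not the actual design): a pool of redundant "systems",
# each carrying the same instruction-data pairs plus its own private
# memory. Work is handed to systems at random; any surviving copy can
# finish the job if another is corrupted.

class System:
    def __init__(self, rules):
        # rules: list of (predicate, action) pairs -- "what to do when
        # a certain set of data is encountered"
        self.rules = list(rules)   # private copy of the instructions
        self.memory = {}           # private memory/storage
        self.corrupted = False

    def step(self, data):
        if self.corrupted:
            return None
        for predicate, action in self.rules:
            if predicate(data):
                return action(data, self.memory)
        return None


def run(pool, inputs):
    """Hand each input to a randomly chosen system; the overall result
    simply emerges from whichever systems happen to respond."""
    results = []
    for data in inputs:
        out = random.choice(pool).step(data)
        if out is None:            # corrupted or no matching rule:
            for backup in pool:    # fall back to a redundant copy
                out = backup.step(data)
                if out is not None:
                    break
        results.append(out)
    return results


# Example: a single rule that doubles any even number it encounters.
rules = [(lambda d: d % 2 == 0, lambda d, mem: d * 2)]
pool = [System(rules) for _ in range(4)]
pool[0].corrupted = True           # simulate one corrupted system
print(run(pool, [2, 4, 6]))        # the other copies still answer: [4, 8, 12]
```

In the machine Bentley and Sakellariou describe, the systems interact in parallel; the sequential loop above is only a stand-in for illustration.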

The results will be presented at an April conference in Singapore. 

The team is currently working on coding the machine so that it can reprogram its own instructions in response to changes in its environment.  That self-learning, combined with the redundant, pseudorandom nature of the system, would make it quite a bit more similar to a human brain than a traditional computer.

Potential applications for such a system include military robotics, swarm robotics, and mission-critical servers.  For example, if an unmanned aerial vehicle sustained damage or was hacked, it might be able to reprogram itself and work around errors thanks to the redundancy, allowing it to fly home.

The computer is somewhat similar to so-called "probabilistic" chip designs, which are being researched at other universities.

Source: New Scientist



Comments



Parallelism limited by number of 'systems'?
By UpSpin on 2/15/2013 5:51:47 PM , Rating: 2
So they say that the normal computer runs in a loop, polls the inputs, and then processes, if necessary, the required task as fast as possible.
They now use something similar to the well-known interrupts that have been used in computers for ages. But instead of halting the current calculation to process the interrupt (the current method), they send it to several redundant systems which do the processing at a random time (mimicking nature). There's no 'main task', only several systems which interact with each other and together form a working machine.

But the following remains unclear: what are those systems? Individual processors? If so, the number of tasks the computer can handle is limited by the number of processors available; if the computer needs to do more, it won't be able to handle them. So the approach looks great in theory but is impracticable. Alternatively, they could take a more traditional route, store those systems in memory (as the article suggests), and use a pseudo-random generator to select which system gets processed. But then the systems aren't processed in parallel either, just one after another in a random order. Even worse, they won't be able to handle time-critical inputs, because each system gets processed at a random time, which in the worst case means far too late.

So in short: I don't get it :-) It's redundant and independent of a main task, but how do they solve the above-mentioned physical limitations? And if the random number generator crashes, if the identical redundant systems crash at the same time, or if there's a software error, the computer will crash too ^^
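As a rough illustration of the latency worry raised above, the following Python snippet (a hypothetical model, not the researchers' scheduler) measures how many other "systems" a single time-critical one may have to wait behind when the service order is shuffled pseudo-randomly each round:

```python
import random

# Hypothetical model of the concern above: a single processor services
# in-memory "systems" in a pseudo-random order, so a time-critical
# system can end up near the back of the line.

def worst_case_wait(n_systems, critical_id, trials=10_000, seed=1):
    rng = random.Random(seed)
    worst = 0
    for _ in range(trials):
        order = list(range(n_systems))
        rng.shuffle(order)                 # pseudo-random service order
        worst = max(worst, order.index(critical_id))
    return worst                           # systems serviced before the critical one

print(worst_case_wait(100, critical_id=7))  # typically close to 99
```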




By Fritzr on 2/17/2013 10:05:53 AM , Rating: 2
Instead of "totally" random job selection, each processor uses a queue.

When a job is completed, notice is sent to every processor that received that job, and the job is aborted or deleted from their queues as appropriate.

Time-sensitive jobs get a priority code attached and go to the front of the line.

Jobs of equal priority get processed in the order received. "Do or die" priority can be a separate code that is processed on receipt. Multiple such jobs can be handled by timeslicing, unless they are "realtime," in which case additional do-or-die processes wait for the processor to be freed up.

Just some quick thoughts after about 3 minutes of consideration. I am sure the designers of this system have put at least 5 minutes into resolving the issues you mention. OS/2 could certainly handle your problems.
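The scheme sketched in this comment maps naturally onto a per-processor priority queue. Below is a minimal Python sketch of it; the names, priority levels, and job ids are invented for illustration, and this is not code from the system being discussed. Each processor keeps a priority queue, time-sensitive jobs jump ahead of normal ones, and a completion notice from another processor cancels the local redundant copy.

```python
import heapq
import itertools

# Sketch of a per-processor job queue (hypothetical names/priorities):
# higher-priority jobs are served first, equal priorities in arrival
# order, and jobs completed elsewhere are dropped from the queue.

PRIORITY = {"do_or_die": 0, "time_sensitive": 1, "normal": 2}

class ProcessorQueue:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # keeps arrival order within a priority
        self._done = set()              # job ids already completed elsewhere

    def submit(self, job_id, priority="normal"):
        heapq.heappush(self._heap, (PRIORITY[priority], next(self._seq), job_id))

    def notify_done(self, job_id):
        # "abort or delete from queue" when another processor finishes the job
        self._done.add(job_id)

    def next_job(self):
        while self._heap:
            _, _, job_id = heapq.heappop(self._heap)
            if job_id not in self._done:
                return job_id
        return None


q = ProcessorQueue()
q.submit("telemetry")
q.submit("collision_avoidance", priority="do_or_die")
q.notify_done("telemetry")      # a redundant copy finished it first
print(q.next_job())             # collision_avoidance
print(q.next_job())             # None -- telemetry was cancelled
```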


"So if you want to save the planet, feel free to drive your Hummer. Just avoid the drive thru line at McDonalds." -- Michael Asher














botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki