
System could be useful for mission-critical applications, such as combat robotics

Professor Peter Bentley of University College London and his colleague Christos Sakellariou aren't impressed with everyday computers, which aren't very fault tolerant and can only multitask by rapidly switching their cores between the sequential instruction streams of different programs.

As he describes in an interview with New Scientist, "Even when it feels like your computer is running all your software at the same time, it is just pretending to do that, flicking its attention very quickly between each program. Nature isn't like that. Its processes are distributed, decentralised and probabilistic. And they are fault tolerant, able to heal themselves. A computer should be able to do that."

So the pair set out to build new hardware and a new operating system capable of handling tasks differently from current machines, which, even when nominally "parallel", still work through instructions sequentially.

The new machine pairs each instruction with the data it acts on.  These instruction-data pairs are then distributed to multiple "systems", which are chosen at random to produce results.  Each system has its own redundant copy of the instructions, so if one copy gets corrupted, others can finish the work.  And each system has its own memory and storage, so a memory or storage fault in one system can't crash the rest.
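Loosely speaking, the design resembles a pool of redundant workers that each carry their own copy of the instructions, their own data, and their own private memory, with a scheduler picking workers at random. The Python sketch below is only an illustration of that idea; the class names, fault model, and scheduler are invented for the example and are not taken from the researchers' actual hardware or operating system.

    import random

    # Illustrative sketch only -- not the researchers' design. Each "system"
    # holds its own instruction, data, and private memory; a scheduler picks
    # systems at random, and a corrupted copy is simply skipped.
    class System:
        def __init__(self, name, instruction, data):
            self.name = name
            self.instruction = instruction   # the operation to perform
            self.data = data                 # the data it applies to
            self.memory = {}                 # private memory: a fault here stays local
            self.corrupted = False

        def run(self):
            if self.corrupted:
                raise RuntimeError(self.name + " is corrupted")
            result = self.instruction(self.data)
            self.memory["last_result"] = result
            return result

    def compute(pool):
        """Try randomly chosen systems until one redundant copy succeeds."""
        candidates = list(pool)
        random.shuffle(candidates)
        for system in candidates:
            try:
                return system.run()
            except RuntimeError:
                continue   # another copy of the same instruction-data pair finishes the job
        raise RuntimeError("all redundant systems failed")

    # Three redundant copies of the same instruction-data pair.
    pool = [System("sys%d" % i, sum, [1, 2, 3]) for i in range(3)]
    pool[0].corrupted = True    # simulate a corrupted copy
    print(compute(pool))        # prints 6 -- the surviving copies still deliver the result

The only point of the sketch is the structure: no shared memory, several interchangeable copies of the same work, and a result that survives the loss of any one copy.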

Comments Prof. Bentley, "The pool of systems interact in parallel, and randomly, and the result of a computation simply emerges from those interactions."

The results will be presented at an April conference in Singapore. 

The team is currently working on coding the machine so that it can reprogram its own instructions in response to changes in its environment.  That self-learning ability, combined with the redundant, pseudorandom nature of the system, would make it considerably more similar to a human brain than a traditional computer.

Potential applications for such a system include military robotics, swarm robotics, and mission-critical servers.  For example, if an unmanned aerial vehicle sustained damage or was hacked, it might be able to reprogram itself and work around the resulting errors thanks to the built-in redundancy, allowing it to fly home.

The computer is somewhat similar to so-called "probabilistic" chip designs, which are being researched at other universities.

Source: New Scientist



Comments

By Fritzr on 2/17/2013 10:05:53 AM, Rating: 2
Instead of "totally" random job selection, each processor uses a queue.

When a job is completed, notice is sent to all processors that received that job, and the job is aborted or deleted from their queues as appropriate.

Time sensitive jobs get a priority code attached and go to the front of the line.

Multiple jobs at the same priority get processed in the order received. A Do or Die priority can be a separate code and be processed on receipt. Multiple Do or Die jobs can be handled by timeslicing unless they are "realtime", in which case additional Do or Die processes wait for the processor to be freed up.

Just some quick thoughts after about 3 minutes of thought. I am sure the designers of this system have put at least 5 minutes into resolving the issues you mention. OS/2 could certainly handle your problems.
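For what it's worth, the queue-and-priority scheme described in the comment above could look roughly like the following Python sketch. It is only an illustration of the commenter's idea; the names, priority values, and cancellation mechanism are made up for the example.

    import heapq

    # Rough illustration of the commenter's queue-based alternative, not an
    # actual design: each processor keeps a priority queue of jobs, and when a
    # job finishes anywhere, the other processors drop it from their queues.
    class Processor:
        def __init__(self, name):
            self.name = name
            self.queue = []   # entries are (priority, sequence, job_id); lower numbers run first
            self.seq = 0

        def submit(self, job_id, priority=10):
            # Time-sensitive jobs get a smaller priority number, so they jump the line.
            heapq.heappush(self.queue, (priority, self.seq, job_id))
            self.seq += 1

        def cancel(self, job_id):
            # "Notice is sent to all that received that job": drop the finished job.
            self.queue = [entry for entry in self.queue if entry[2] != job_id]
            heapq.heapify(self.queue)

        def run_next(self):
            if not self.queue:
                return None
            _, _, job_id = heapq.heappop(self.queue)
            return job_id

    processors = [Processor("p0"), Processor("p1")]
    for p in processors:
        p.submit("job-A")
        p.submit("job-B", priority=1)    # time sensitive: goes to the front of the line

    finished = processors[0].run_next()  # p0 runs job-B first
    for p in processors[1:]:
        p.cancel(finished)               # the other processors delete the completed job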


"I'd be pissed too, but you didn't have to go all Minority Report on his ass!" -- Jon Stewart on police raiding Gizmodo editor Jason Chen's home














botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki