Jaguar Supercomputer is capable of just over 1 petaflop/s (Source: Wired)
Prototypes expected by 2018

According to Moore's Law, the number of transistors that can fit on a piece of silicon doubles every 18 to 24 months. That pace isn't fast enough for DARPA, which is handing out grants to several major technology firms to pursue breakthroughs in supercomputing performance. The goal DARPA has in mind for the project is called exascale computing.

To put that exascale number in perspective, the second fastest supercomputer in the world as of June 2009 is the Jaguar supercomputer used by the DOE Oak Ridge National Laboratory. Jaguar is capable of 1.059 petaflop/s performance; a petaflop is 1,000 trillion calculations per second. DARPA expects the grants and research associated with its exascale project to result in supercomputers able to perform one million trillion calculations per second - an exaflop - and it expects the first prototype exascale computers to be working by 2018.
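A back-of-the-envelope comparison makes the scale jump concrete. Using only the figures quoted above (Jaguar at 1.059 petaflop/s, the exascale target at 1 exaflop/s), a rough sketch:

```python
# Scale comparison using the article's figures.
PETA = 1e15  # 1,000 trillion calculations per second (1 petaflop/s)
EXA = 1e18   # one million trillion calculations per second (1 exaflop/s)

jaguar_flops = 1.059 * PETA   # Jaguar's peak, per the June 2009 TOP500 ranking
exascale_target = 1 * EXA     # DARPA's exascale goal

# How many Jaguar-class machines would one exascale system equal?
ratio = exascale_target / jaguar_flops
print(f"1 exaflop/s is roughly {ratio:.0f}x Jaguar's peak")  # ≈ 944x
```

In other words, the machines DARPA wants by 2018 would each match nearly a thousand copies of the fastest-class supercomputers of 2009.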

DARPA is paying for the project - officially called the Ubiquitous High Performance Computing (UHPC) program - in the hopes that researchers can make significant breakthroughs in power management and processing capability compared to the computers we use today. According to DARPA, the military needs the massive processing capability the project aims to provide.

The sheer volume of data expected to be captured by military sensors and intelligence-gathering systems, both today and in the future, is staggering. The chips designed by the participating companies are expected to use "dramatically" less power per calculation than today's designs.

A statement released by DARPA said, "[Grant recipients will] develop radically new computer architectures and programming models that are 100 to 1,000 times more energy efficient, with higher performance, and that are easier to program than current systems."
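That 100-to-1,000x efficiency target follows directly from the scale jump: moving from petascale to exascale is roughly a 1,000x performance increase, so without a comparable drop in energy per calculation, total power draw would grow by about the same factor. A minimal sketch of that reasoning (the constant-power-budget framing is an assumption, not DARPA's stated requirement):

```python
# Why DARPA's 100-1000x efficiency target matters: performance scales
# ~1,000x from petascale (1e15 flop/s) to exascale (1e18 flop/s), so
# energy per calculation must fall by a similar factor to keep total
# power draw in the same ballpark.
perf_scale = 1e18 / 1e15             # exaflop vs. petaflop: 1,000x
for efficiency_gain in (100, 1000):  # DARPA's stated target range
    power_scale = perf_scale / efficiency_gain
    print(f"{efficiency_gain}x efficiency -> power grows {power_scale:g}x")
```

At the low end of the target (100x), an exascale machine would still draw ten times the power of its petascale predecessor; only at the high end (1,000x) does power stay flat.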

Recipients of DARPA grants for the project include NVIDIA, Intel, MIT, and Sandia National Laboratories. NVIDIA has been positioning its Fermi-based GPUs for next-generation supercomputing for a while, and ORNL is already committed to using Fermi GPUs in its next-generation supercomputer.

Copyright 2017 DailyTech LLC.