
Los Alamos National Labs has begun taking bids to build the world's fastest supercomputer

Los Alamos National Laboratory (LANL) announced yesterday that it has begun taking bids to upgrade its existing computing power and to begin work on building the world’s fastest supercomputer.

The new supercomputer would be tasked with ensuring that the United States nuclear deterrent program remains operational without the need to detonate live nukes underground to ensure they still work. “LANL currently has some of the most limited computational capabilities of all the DOE laboratories. That will change with this new petaflop computer, which will fill an immediate need to increase the lab’s computing capabilities,” New Mexico Senator Pete Domenici said.

The new supercomputer, dubbed "Roadrunner," will initially operate at 1 petaflop, with the ability to scale to 2 petaflops as the project is completed, and will cost an estimated $900M USD when all is said and done.



By spluurfg on 5/11/2006 10:19:11 AM, Rating: 2
Since when can a P4 do 3 to 4 teraflops? More like gigaflops. 4 teraflops would get you onto the supercomputer Top 100 list.

So to reach a petaflop, you'd need roughly 300,000 P4 workstations. I'm going to wager that 300,000 individual P4 workstations will consume a lot more power than a purpose-built supercomputer.
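That back-of-the-envelope arithmetic is easy to check. A quick sketch (the ~3.3 GFLOPS per chip and ~100 W per workstation figures are illustrative assumptions, not measured numbers):

```python
# Back-of-the-envelope check of the workstation-count claim.
P4_GFLOPS = 3.3        # assumed rough peak for one Pentium 4
TARGET_PFLOPS = 1.0    # Roadrunner's initial target

workstations = (TARGET_PFLOPS * 1e6) / P4_GFLOPS
print(round(workstations))        # ~300,000 machines

# Power draw at an assumed ~100 W per workstation:
megawatts = round(workstations) * 100 / 1e6
print(round(megawatts, 1))        # ~30 MW for the boxes alone
```

Even with these rough inputs, the power bill alone makes the commodity-workstation approach look unattractive next to a purpose-built machine.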

Also, don't forget that the initial cost of a computer system is usually a fraction of its lifetime cost - the rest goes to maintenance and to training and paying the people who run it. If it breaks, who will fix it? You?

Also, now that you have 300,000 P4 workstations sitting around, how will you turn them into a supercomputer? To use those processors at 100%, you'd need to feed each one several gigabits per second of bandwidth. How would you accomplish that? A standard network? I'm sure the fiber-optic interconnects in supercomputers add something to the cost.
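The aggregate bandwidth that implies is worth spelling out (the 5 Gbit/s per-node figure here is just an illustrative assumption for "several gigabits per second"):

```python
# Rough aggregate interconnect bandwidth for a commodity cluster.
NODES = 300_000
GBIT_PER_NODE = 5             # assumed "several gigabits per second"

total_tbit = NODES * GBIT_PER_NODE / 1000
print(total_tbit)             # 1500.0 Tbit/s aggregate
```

That is orders of magnitude beyond what a standard office network could switch, which is exactly why supercomputers ship with custom interconnects.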

You'd also need an operating system that can manage them. Open-source Linux can scale as well as Cray's operating system when dozens of processors are involved, but how about hundreds of thousands?

Finally, an AGEIA PhysX card might be able to do 100x as many physics calculations per second as a CPU, but those are dedicated physics processors, not general-purpose processors. They also may not have the floating-point precision these simulations require.

There's usually a good reason why people do these things. I personally would assume that the people at Los Alamos know more than I do.

For your digestion: #3 can hit 60 teraflops with 10,000 processors. With 300,000 processors, at the same scaling, they should hit 1.8 petaflops.
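The scaling claim above is simple linear extrapolation, which can be checked in two lines (real machines rarely scale perfectly linearly, so treat this as an upper bound):

```python
# Sanity check on the linear-scaling extrapolation in the comment.
BASE_TFLOPS = 60          # the "#3" system, per the comment
BASE_PROCS = 10_000
TARGET_PROCS = 300_000

scaled_tflops = BASE_TFLOPS * TARGET_PROCS / BASE_PROCS
print(scaled_tflops / 1000)   # 1.8 petaflops
```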

"I modded down, down, down, and the flames went higher." -- Sven Olsen


Copyright 2016 DailyTech LLC.