
Los Alamos National Labs has begun taking bids to build the world's fastest supercomputer

Los Alamos National Laboratory (LANL) announced yesterday that it has begun taking bids to upgrade its existing computing power and to begin work on building the world’s fastest supercomputer.

The new supercomputer would be tasked with ensuring that the United States nuclear deterrent program remains operational without the need to detonate live nukes underground to ensure they still work. “LANL currently has some of the most limited computational capabilities of all the DOE laboratories. That will change with this new petaflop computer, which will fill an immediate need to increase the lab’s computing capabilities,” New Mexico Senator Pete Domenici said.

The new supercomputer, dubbed "Roadrunner," will operate at 1 petaflop initially with the ability to scale to 2 petaflops as the project is completed and will cost an estimated $900M USD when all is said and done.


By lemonadesoda on 5/11/2006 9:54:39 AM , Rating: 2
and a single pentium 4 3.0GHz is approx = 3 to 4 teraflops.

Therefore a nest of 300 100%-efficient Pentium 4 3.0GHz chips would meet this calculation target. In practise, there is a lot of inefficiency in scheduling and this probably wouldn't match the 1 petaflop target. But the cost would be, say, 300 x $2000 = $600,000. Much less than $1M and nowhere near $90M.

BUT wait a minute, isn't the recently announced AGEIA PhysX supposed to be 100x as powerful as a Pentium 4 3.0GHz at physics calculations? i.e. one PhysX has been marketed to be (although on different calculation terms) circa 200-400 teraflops.

That means I could build one Pentium 4 workstation board, with each PCI slot (i.e. 5 of them) filled with an AGEIA PhysX, and a nice GPU to demonstrate the "boom" in real time graphics.

Now that would cost me (and being very generous with my budget) $2000 for workstation board, CPUs, RAID and HDD subsystem, $500 for GPU and 5 x $300 for PPU. All in, $4000.

The net rip-off to the US taxpayer = $90,000,000 - $4000 = $89,996,000.

Someone please tell me where I got my math wrong. Thanks!
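[Editor's note: the slip is easy to check with a quick sketch. The figures below assume a 3.0 GHz Pentium 4 peaks at roughly 3.3 GFLOPS (gigaflops, not teraflops); the $2000-per-workstation price is the one from the comment above.]

```python
# Redo the arithmetic above, assuming a 3.0 GHz Pentium 4 peaks at
# roughly 3.3 GFLOPS (gigaflops, not teraflops -- the likely source
# of the error). $2000 per workstation is the comment's own figure.
P4_FLOPS = 3.3e9        # assumed peak, ~1 FLOP per cycle at 3.0 GHz
TARGET_FLOPS = 1e15     # 1 petaflop

units = TARGET_FLOPS / P4_FLOPS
cost = units * 2000

print(f"Pentium 4s needed: {units:,.0f}")   # ~303,030, not 300
print(f"Cost: ${cost:,.0f}")                # ~$606 million, not $600,000
```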

By lemonadesoda on 5/11/2006 10:01:11 AM , Rating: 2
Oh man, I wish I had an edit button. I just found out the "wiki" I was on was a horrible European site where they use a "," for a decimal point. Therefore my power calculations are wrong by a factor of 1000. :?( how embarrassing

Ok, multiply my budget by 1000x and you get $4M for the consumer solution. A military one-off would be at least 10x a build-your-own price, so $40M. Therefore $90M allows a little room for the contractor's profit margin :-)))

By Zoomer on 5/11/2006 10:31:35 AM , Rating: 2
Excuse me, that sort of military hardware is supposed to be fail-proof. Read: more money to make sure it doesn't fail easily.

Besides, PhysX processors are not general-purpose processors; they may not be able to handle all that is necessary.

By spluurfg on 5/11/2006 10:19:11 AM , Rating: 2
Since when can a P4 do 3 to 4 teraflops? More like gigaflops. 4 Teraflops gets you onto the supercomputer top100 list.

So to reach a petaflop, you need 300,000 P4 workstations. I'm going to wager that 300,000 individual P4 workstations will consume a lot more power than a purpose-built supercomputer.
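[Editor's note: that wager can be roughed out. The sketch below assumes ~150 W per workstation under load, a plausible figure for a P4-era machine but an assumption, not a number from the article.]

```python
# Rough power budget for 300,000 Pentium 4 workstations.
# 150 W per box under load is an assumed figure for a P4-era machine.
UNITS = 300_000
WATTS_PER_BOX = 150

total_megawatts = UNITS * WATTS_PER_BOX / 1e6
print(f"Total draw: {total_megawatts:.0f} MW")  # 45 MW, before cooling overhead
```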

Also don't forget that the initial cost of computer systems is usually a fraction of the system's lifetime cost - the rest is spent on maintenance and training/paying people to run it. If it breaks, who will fix it? You?

Also, now that you have 300,000 P4 workstations sitting around, how will you turn them into a supercomputer? To take advantage of those processors at 100%, you're going to need to saturate their bandwidth in the range of several gigabits per second EACH. How would you accomplish this? A standard network? I'm sure the fibre optic interconnects in supercomputers add something to the cost.

You'd also need an operating system that can manage them. Open-source Linux can scale as well as Cray's operating system when dozens of processors are involved, but how about hundreds of thousands?

Finally, AGEIA PhysX might be able to do 100x as many physics calculations per second as a CPU, but those are dedicated physics processors, not general-purpose processors. They also may not offer the floating-point precision the lab needs.

There's usually a good reason why people do these things. I personally would assume that the people at Los Alamos know more than I do.

For your digestion, check out #3 on the list: it can hit 60 teraflops with 10,000 processors. With 300,000, at the same scaling, they should hit 1.8 petaflops.
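[Editor's note: that linear extrapolation checks out arithmetically; the sketch below uses only the figures quoted in the comment.]

```python
# Linear scaling from the quoted #3 system: 60 TFLOPS on 10,000 processors.
per_processor = 60e12 / 10_000        # 6 GFLOPS per processor
projected = per_processor * 300_000   # scale linearly to 300,000 processors

print(f"Projected: {projected / 1e15:.1f} petaflops")  # 1.8
```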

"Vista runs on Atom ... It's just no one uses it". -- Intel CEO Paul Otellini

Copyright 2014 DailyTech LLC.