



BlueGene is currently the world's fastest supercomputer
IBM is developing a next-generation supercomputer that will become the world's most powerful computer

IBM plans to build Roadrunner, a next-generation supercomputer that will be housed at the Los Alamos National Laboratory in New Mexico.  The computer will have a performance level of 1 petaflop, the equivalent of 1,000 trillion calculations per second.  Roadrunner will use a conventional cluster of 16,000 AMD Opteron processor cores alongside 16,000 Cell B.E. chips, with the two chip types sharing the calculating work.
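For a sense of scale, here is a rough back-of-envelope sketch in Python of how those figures relate; the per-Cell peak number is an illustrative assumption, not a published Roadrunner specification.

```python
# Back-of-envelope arithmetic for the figures quoted in the article.
# The ~100 GFLOPS-per-Cell value is an assumption for illustration only.

PETAFLOP = 10**15          # 1,000 trillion calculations per second
TERAFLOP = 10**12

cell_chips = 16_000
assumed_flops_per_cell = 100e9          # assumed ~100 GFLOPS peak per chip

aggregate = cell_chips * assumed_flops_per_cell
print(f"Aggregate Cell peak: {aggregate / PETAFLOP:.1f} petaflops (assumed)")

# BlueGene/L's sustained 280 teraflops versus the 1 petaflop goal:
print(f"BlueGene/L fraction of a petaflop: {280 * TERAFLOP / PETAFLOP:.2f}")
```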

The Department of Energy contacted IBM in September about its need for a next-generation supercomputer able to sustain a speed of at least one petaflop.  The computer will cost the Department of Energy $110 million over three years of development, and the Opteron and Cell combination should also help keep the overall cost of building the supercomputer down.  According to IBM, the Cell B.E. processors will act as the workhorses, handling the major floating-point calculations, while the AMD Opterons will serve as the system interface processors and as the transactional backbone between the nodes.
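The division of labor IBM describes, with Opterons handling system and communication duties and Cells doing the heavy floating-point math, follows the familiar host/accelerator offload pattern. Below is a minimal sketch of that pattern; the function names are hypothetical placeholders, not anything from Roadrunner's actual software stack.

```python
# Minimal host/accelerator offload sketch. The function names are
# hypothetical placeholders, not Roadrunner's real programming interface.

def host_prepare_work(dataset):
    """Opteron-side work: I/O, bookkeeping, splitting the problem."""
    chunk = len(dataset) // 4
    return [dataset[i:i + chunk] for i in range(0, len(dataset), chunk)]

def accelerator_compute(chunk):
    """Cell-side work: the repetitive floating-point inner loop."""
    return sum(x * x for x in chunk)

def host_combine(partials):
    """Opteron-side work again: gather results and pass them along."""
    return sum(partials)

dataset = [float(i) for i in range(1_000)]
partials = [accelerator_compute(c) for c in host_prepare_work(dataset)]
print(host_combine(partials))
```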

Once completed, Roadrunner will be the world's most powerful computer, easily outpacing the IBM BlueGene/L system located at the DOE's Lawrence Livermore National Laboratory.  BlueGene/L sustains 280 teraflops, only a little more than one-fourth of Roadrunner's petaflop goal.

The U.S. DOE initially plans to use the system for "a broad spectrum of scientific and commercial applications."  The computer could eventually help the DOE ensure that the U.S. nuclear weapons stockpile remains safe and reliable: instead of conducting underground nuclear tests, the DOE could use Roadrunner to simulate how nuclear weapons age.

The designers are also taking space and power consumption into account.  IBM has repeatedly said that the system will use advanced cooling and power management technologies to ensure that Roadrunner runs as efficiently as possible.

Roadrunner will cover 12,000 square feet of floor space when it is completed in 2008.  IBM plans to ship the supercomputer to the DOE facility in Q3 2007, and it should be fully operational in early 2008.

Hybrid supercomputers such as the Tokyo Institute of Technology's Tsubame system, built by Sun Microsystems, reflect a growing trend of pairing general-purpose processors with special-purpose accelerator chips.  These hybrid systems can combine Opteron blade servers and Cell-based accelerator systems in a single chassis.

Supercomputers are traditionally used for calculations that demand enormous computing power, such as quantum mechanical physics, weather forecasting, DNA mapping and space exploration.  Researchers use these beefed-up machines for advanced simulations that continue to grow in complexity and sophistication.  There has also been a recent push to make supercomputers not only more powerful but also much more energy efficient.

Other supercomputers are special-purpose machines, with hardware architectures designed specifically to tackle a particular problem.  For example, the Deep Blue supercomputer was designed to challenge the world's best human chess players; the Deep Crack supercomputer was designed to crack the Data Encryption Standard (DES); and the Gravity Pipe (GRAPE) supercomputer is used for molecular dynamics and astrophysics.


Comments



Serious error in the article
By barnie on 2/28/2007 5:23:53 AM , Rating: 5
quote:
Ideally the Opterons would focus on raw performance, with the Cell chips aimed at intercommunication between the processors and additional floating point calculations.
You've got this the wrong way around. The Cells are the workhorses. The Opterons are the network controllers in this machine.




By KristopherKubicki (blog) on 2/28/2007 5:31:03 AM , Rating: 2
You're correct, and we have edited the text. Sorry about the mix-up.


RE: Serious error in the article
By barnie on 2/28/2007 5:34:07 AM , Rating: 3
quote:
Typical compute processes, file IO, and communication activity will be handled by AMD Opteron processors while more complex and repetitive elements -- ones that traditionally consume the majority of supercomputer resources -- will be directed to the more than 16,000 Cell B.E. processors.


Source:
http://www-03.ibm.com/press/us/en/pressrelease/202...


RE: Serious error in the article
By edge929 on 2/28/2007 9:36:37 AM , Rating: 2
In my rig, the Opterons are the workhorses!

Glad to see AMD's Opterons still have some life left in them. Why didn't they go with a Xeon Core 2 derivative? I realize they're not out NOW, but they will be in a year. And yes, I know about the partnership between AMD and IBM; it would just make sense to use the Core 2 architecture if they want the best performance-per-watt. Perhaps price was a factor. I know it's the only factor in why I don't have a Core 2 system right now.


RE: Serious error in the article
By Zurtex on 2/28/2007 10:22:22 AM , Rating: 3
1 word: Bandwidth

AMD CPUs have so, so much more of it than Core 2 CPUs, and if you want lots of CPUs negotiating where most of the workload goes, they need lots of bandwidth.

In fact, because of this, Opterons make much better supercomputer CPUs than Core 2s when they are the ones doing all the horse work.


RE: Serious error in the article
By bnme on 2/28/2007 10:22:59 AM , Rating: 3
Besides the fact that IBM works very closely with AMD for chip development, Opterons scale better in multi-processor environments, apparently, and I'm guessing HT might have something to do with that.


RE: Serious error in the article
By Tsuwamono on 2/28/2007 11:31:09 AM , Rating: 2
Basically yes, it's the fact that AMD's memory controller is onboard while Intel's is on the chipset, which eats up a lot of bandwidth that AMD will have free.

That's the way it was explained to me, correct me if I'm wrong


RE: Serious error in the article
By vignyan on 3/1/2007 11:11:53 AM , Rating: 2
No dude.. an on-board MC does not mean it can have greater BW.. Ya, HyperThreading sure is... BUT.. HT is more of a point-to-point high-speed connection... Not sure how well it will do its job acting as a communication processor! There are many NETWORK processors on the market! If you were to compare against the network processors, it's the Xeon that would be the better choice to compare. I don't know which one would win coz both are too good.

IBM using Intel where it could substitute AMD?? No way! It's AMD's manufacturing partner.


RE: Serious error in the article
By seafoodbrontosaurus on 3/2/2007 2:14:05 PM , Rating: 2
You have no idea what you're talking about. It's not the onboard memory controller that makes it better, it's the HyperTransport. Intel's FSB architecture would be awful here. I'm not sure why you brought HyperThreading in here as this article has nothing to do with that.


By seafoodbrontosaurus on 3/2/2007 2:23:22 PM , Rating: 2
Both of these posts are directed at you, vignyan; please read up on both processor architectures before you try to correct someone with false information. Also, better grammar would make your post easier to read. The Core 2 setup has only 8.5 GB/s to work with all around, while the Opteron setup can have a total of 19.2 GB/s coming in and out of each processor, plus the memory controller's separate bandwidth.
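Those figures line up with a simple calculation. A quick sketch, assuming a 1066 MT/s front-side bus on the Xeon side and three 16-bit HyperTransport links at 1600 MT/s per Opteron (typical 2007-era parts, assumed here for illustration):

```python
# Reconstructing the bandwidth figures quoted above. The link speeds are
# assumptions (typical 2007-era parts), used only to show where the
# 8.5 GB/s and 19.2 GB/s numbers come from.

# Core 2 / Xeon: one shared front-side bus, 1066 MT/s, 8 bytes wide.
fsb_gbs = 1066e6 * 8 / 1e9
print(f"Shared FSB: {fsb_gbs:.1f} GB/s")                        # ~8.5 GB/s

# Opteron: three 16-bit HyperTransport links at 1600 MT/s,
# 3.2 GB/s each way, so 6.4 GB/s bidirectional per link.
ht_link_gbs = 1600e6 * 2 * 2 / 1e9
print(f"Per Opteron, 3 HT links: {3 * ht_link_gbs:.1f} GB/s")   # ~19.2 GB/s
```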


RE: Serious error in the article
By Griswold on 2/28/2007 1:34:48 PM , Rating: 2
quote:
Why didn't they go with a Xeon Core 2 derivative? I realize they're not out NOW, but they will be in a year.


Flawed logic. Do you think IBM would equip that thing with today's Opterons in a year, instead of Barcelona-type quads?

quote:
it would just make sense to use the Core 2 architecture if they want the best performance-per-watt


That may change in the not so distant future, but we'll have to see. However, for supercomputer clusters, and especially for the task described in the article, there is no reason to assume that a C2D-esque architecture is as dominant compared to the K8 as it is on the desktop. Other factors weigh much heavier than integer performance.


RE: Serious error in the article
By vignyan on 3/1/2007 11:16:47 AM , Rating: 2
Ahem!!! Well, I don't see why an intercommunication processor needs so many FLOP operations, or needs the latest QUAD CORE processors!!! Guess the flawed logic now! ;)


Petaflop
By JarredWalton on 2/28/2007 10:21:14 AM , Rating: 3
Rather than calling a petaflop "1,000 trillion calculations per second", wouldn't it be easier to simply call it "a quadrillion calculations per second"? If not, then we should probably stop using billion and million and just stick to numbers. A teraflop is 1000 billion, 1,000,000 million, or simply 1,000,000,000,000 calcs per second. ;)

/nitpick




RE: Petaflop
By bnme on 2/28/2007 10:27:01 AM , Rating: 2
I'm guessing it is because "quadrillion" isn't used that often in everyday situations, as opposed to "trillion". Easier for the readers to visualize how large that number is.

Besides, quit nitpicking at that grammar.


RE: Petaflop
By Griswold on 2/28/2007 1:43:30 PM , Rating: 2
I don't think Joe Average has an easier time visualizing "1000 trillion" as opposed to "quadrillion" - both are terms "normal" human beings can't visualize anyway.

If anything, it's an issue of not knowing what these terms represent and in what order they are used (yes, it's shocking, but that's a problem for many Joe and Jane Averages out there).


RE: Petaflop
By Zurtex on 2/28/2007 1:20:27 PM , Rating: 1
Calling it a quadrillion calculations per second would be slightly ambiguous, as it can mean two different things. To me a quadrillion means a million million million millions, or 1'000'000'000'000'000'000'000'000.

The only real use for such terms is in computing anyway; people need to know how much power something has relative to something else, yet a lot of people don't deal with large numbers normally, so they need a noun to shorten it down. In science and mathematics, when dealing with numbers of this magnitude you simply write something like 10^12 or 3.2 * 10^30, to give a couple of examples.
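A tiny sketch of the ambiguity under discussion, contrasting the short-scale and long-scale readings of "quadrillion" with the unambiguous SI prefix a petaflop uses:

```python
# Short scale (US English) vs. long scale readings of "quadrillion",
# alongside the unambiguous SI prefix "peta".
short_scale_quadrillion = 10**15          # thousand^5
long_scale_quadrillion = 10**24           # million^4, the reading above
peta = 10**15                             # SI prefix: peta = 10^15

print(short_scale_quadrillion == peta)    # True: "1,000 trillion"
print(f"{long_scale_quadrillion:e}")      # 1e+24, the long-scale reading
```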


RE: Petaflop
By Arthur on 2/28/2007 7:03:52 PM , Rating: 2
Well, it would obviously be far better to translate that into a power-of-ten representation. After searching for two minutes, I finally got it: a petaflop is 1e15 flop(s)! Billion, quadrillion, ... have no unambiguous meaning in English (according to Wikipedia).


IBM and Sony, Cell and Opteron
By JarredWalton on 2/28/2007 10:32:32 AM , Rating: 1
I think it's rather telling that even IBM, the chief architects of Cell, are not using pure Cell chips as the basis of Roadrunner. No matter what Sony might like to claim, getting a design such as Cell to perform well in all cases is basically impossible. Hybrid chips will likely be the next step for CPUs, but until then top supercomputers can take a hybrid approach to the clusters. Hopefully we won't get any "PS3 is a supercomputer on a chip" claims from Sony this time around... if only they could manage to ship enough systems. *cough!* [see http://www.dailytech.com/article.aspx?newsid=6264]




RE: IBM and Sony, Cell and Opteron
By Carl B on 2/28/2007 1:30:50 PM , Rating: 3
Uh... basically the Opterons are subbing in here for the PPE core on the Cell processor; that element has always been known to be the weak link in the chip. The SPEs are very much 'supercomputer' worthy, as demonstrated here via this project.

Cell is *already* a hybrid chip - you know that, right? It's simply that for cost/power reasons at 90nm, the Power core could not be as full-featured as it otherwise might have been to keep the SPEs better fed/coordinated. In Roadrunner, that task will fall to the Opterons. All the super 'computing' though will very much be done by Cell.


RE: IBM and Sony, Cell and Opteron
By JarredWalton on 2/28/2007 2:04:45 PM , Rating: 2
My point being that the PPE in the Cell is basically junk. In-order architectures are fine for FP-intensive work. For general computing algorithms (which are used to feed the SPEs), I'm not at all surprised that the PPEs are being mostly ditched in favor of doing that routing work on Opterons. The PPE will basically be relegated to simplified routing on a chip level, because it can't easily handle much else.

Yes, Cell is a hybrid chip, but barely. It's like slapping a fast 486 into a processor and hoping that's sufficient. If you don't think the Opterons need to do a lot of work, why are there 16,000 of them (1:1 with Cell chips)? Basically, the Cell chips do the brute-force work, but all of the coordination still takes a lot of processing time. I'm guessing those are dual-core Opteron chips as well, which means they still pack a lot of computational power on their own.


RE: IBM and Sony, Cell and Opteron
By Some1ne on 2/28/07, Rating: 0
By Frumious1 on 3/1/2007 10:06:51 AM , Rating: 2
If the SPE is junk, the PPE is even worse. The SPEs at least have raw computational power. The PPE is garbage. Not sure why those posts above got modded so harsh; guess the PS3 fanboys can mod this down as well?


Is it really necessary?
By iollmann on 2/28/2007 9:45:39 PM , Rating: 2
How much money would they save if they invested a couple million into optimizing the software to run on an existing installation, versus the cost of this machine?

I am often amazed by the propensity of organizations to spend lots of money on hardware to solve what are often software problems. Cell BE is not a particularly good platform to solve software problems with either. It is more likely to require custom software to be written to take advantage of the SPEs. I predict the opteron cores get used for more than just routine system tasks.




RE: Is it really necessary?
By SmokeRngs on 3/1/2007 3:00:58 PM , Rating: 2
quote:
I am often amazed by the propensity of organizations to spend lots of money on hardware to solve what are often software problems. Cell BE is not a particularly good platform to solve software problems with either. It is more likely to require custom software to be written to take advantage of the SPEs. I predict the opteron cores get used for more than just routine system tasks.


Most supercomputers are built to certain specifications depending on their use. One supercomputer may be good at one thing but practically useless on many other tasks that other supercomputers excel in.

The need for more powerful machines should be obvious. The current supercomputers already have more than enough work to do. Starting a new project on a current one would require the cessation of the current project. The current supercomputer may also lack the processing power to complete a task within a certain amount of time, necessitating a new machine. There are many reasons why a new supercomputer is built.

Of course custom software will have to be written. It's not like you slap Vista on one of these things and go to town. To even remotely make use of the hardware in a supercomputer you have to have custom software written just to interface with the hardware. It's not a pile of 4 or 8 way motherboards. Custom hardware is designed to allow for all the components to talk to each other.

Writing for the Cell may not be easy, but I'm guessing the types of calculations that will be run make heavy use of the architecture of the Cell and the cost of developing the software is worth it.


RE: Is it really necessary?
By JeffDM on 3/2/2007 6:52:32 PM , Rating: 2
The problem is that existing installations get obsolete very quickly, and newer systems are so much more powerful that it's worth the time reoptimizing the code base. A basic supercomputer that's 10 years old might only be as powerful as a high-end desktop computer, and the high end keeps scaling up with more and faster of everything. If it's anything matrix-based, I think the Cell will shine. It has very fast memory, very fast IO and it's designed to crunch through matrix operations like nothing else. Matrix operations aren't that complex computationally, and they are very easy to scale too.
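To illustrate the point about scaling, here is a minimal sketch of splitting a matrix multiply into independent row blocks, the kind of partitioning that maps naturally onto many compute chips. NumPy is used only for brevity; the partitioning scheme is the point, not Roadrunner's actual software.

```python
import numpy as np

# A matrix multiply C = A @ B splits cleanly into independent row blocks,
# which is why it distributes so naturally across many compute elements.
rng = np.random.default_rng(0)
A = rng.random((8, 8))
B = rng.random((8, 8))

blocks = np.array_split(A, 4)                 # 4 independent chunks of rows
C_blocks = [block @ B for block in blocks]    # each could run on its own chip
C = np.vstack(C_blocks)

print(np.allclose(C, A @ B))                  # True: same answer either way
```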


deep crack?
By dare2savefreedom on 2/28/2007 3:50:36 AM , Rating: 2
come on?

is that for real?

How could they name it like that with a straight face?
are they ESL?




RE: deep crack?
By Griswold on 2/28/2007 8:30:11 AM , Rating: 2
I guess they can, if the sole purpose of it was to crack an encryption standard.


In other news..
By hellokeith on 2/28/2007 11:24:53 AM , Rating: 2
Blizzard has announced another expansion pack to World of Warcraft called "WoW: Teh Universe".

Insiders say Blizzard will be upgrading their server environment to support the new virtual universe. Sources have confirmed Blizzard is in discussion with IBM for six Roadrunners, at a cost of 3 months of Blizzard's revenue.




RE: In other news..
By KernD on 2/28/2007 12:05:16 PM , Rating: 2
Lol, nice joke
660 million of revenue in 3 months, they are insanely rich...

But I'm surprised it cost so little, I expected it would cost more for one of these...


intel is way better
By AntDX316 on 3/8/2007 6:54:04 AM , Rating: 2
If they used Intel's 80-core chip that does over 1 teraflop, they would only need 1,000 of them to break the petaflop limit instead of 16,000 Opterons.




RE: intel is way better
By fxyefx on 3/18/2007 8:03:34 AM , Rating: 2
Perhaps that is something that will happen in the future, but at this time the platform for AMD Opterons and IBM Cells is developed, and the platform for chips with that many cores is not. Keep in mind that the time frame for this computer is to be up and running within a year.


"news"
By freeagle on 2/28/2007 4:49:40 AM , Rating: 2
This is kinda old "news"; I read about it a few months ago already. Can't remember where, though




RE: "news"
By DallasTexas on 3/1/2007 3:37:53 PM , Rating: 1
It is old.. and we'll see it again about six more times. It's one of those news leftovers that gets warmed up and served as news.


wow
By nurbsenvi on 2/28/2007 5:40:56 AM , Rating: 2
If I can use it as my rendering rig....
I wonder if it can render Hollywood quality CGs in real time?




Supercomputing history question
By jaybuffet on 3/1/2007 10:55:14 AM , Rating: 2
When in history were supercomputers as fast as a normal desktop is nowadays? Would it be reasonable to project that desktops will be this powerful in that same amount of time?




sweet
By yacoub on 3/2/2007 1:27:42 PM , Rating: 2
This will go a long way toward helping the DOE conduct simulated nuclear testing, since they stopped testing real ones out in Nevada/Utah a few decades ago. =)




Wrong national lab
By lamestlamer on 2/28/2007 4:29:17 AM , Rating: 1
That is LLNL, not Los Alamos. LLNL is tiny, only a square kilometer. Los Alamos is a huge campus.




"People Don't Respect Confidentiality in This Industry" -- Sony Computer Entertainment of America President and CEO Jack Tretton











botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki