
IBM is currently developing a supercomputer it hopes will be able to deliver 20 petaflops

IBM announced ambitious plans to create a new supercomputer that will be 20 times faster than its current Roadrunner supercomputer.  The new system, dubbed "Sequoia," will operate at a whopping 20 petaflops, far outpacing IBM's previous supercomputers.

The new supercomputer will be installed at the Lawrence Livermore National Laboratory in Livermore, California, where researchers will use it for simulations of U.S. nuclear weapons.  Lawrence Livermore is using the IBM BlueGene/L system until Sequoia is ready.

The system will be housed in a 3,422 sq. ft. facility in Livermore -- it will be energy efficient, with IBM expecting it to draw about 6 megawatts of power, roughly the consumption of 500 American homes. 

Sequoia may provide a 40- to 50-fold improvement in the country's ability to deliver data such as severe storm forecasts, earthquake predictions and evacuation routes during national emergencies, IBM said in a statement.

The system will use 45nm processors that have up to 16 cores per chip, and will have 1.6 petabytes of memory shared by 1.6 million cores.  It will be 15 times faster than BlueGene/P and have the same footprint with only a "modest" increase in power consumption.
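For a rough sense of scale, here is a back-of-the-envelope sketch in C -- my arithmetic from the figures quoted above, not IBM's specifications -- working out the implied chip count and per-core numbers:

#include <stdio.h>

int main(void)
{
    const double peak_flops     = 20e15;  /* 20 petaflops peak, as announced */
    const double total_cores    = 1.6e6;  /* 1.6 million cores               */
    const double cores_per_chip = 16.0;   /* up to 16 cores per 45nm chip    */
    const double memory_bytes   = 1.6e15; /* 1.6 petabytes of memory         */

    printf("chips (approx):  %.0f\n", total_cores / cores_per_chip);          /* ~100,000 chips */
    printf("peak per core:   %.1f GFLOPS\n", peak_flops / total_cores / 1e9); /* ~12.5 GFLOPS   */
    printf("memory per core: %.1f GB\n", memory_bytes / total_cores / 1e9);   /* ~1 GB per core */
    return 0;
}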

IBM's latest announcement comes just seven months after the company delivered Roadrunner, then the world's fastest supercomputer, to the U.S. Department of Energy's Los Alamos National Laboratory.  Roadrunner was the first system to break the 1-petaflop barrier, clocking in at 1.026 petaflops.

IBM is also working on other supercomputers for the Defense Advanced Research Projects Agency (DARPA), which should be available before 2011.



Comments



throwin' code
By codeThug on 2/3/2009 6:15:07 PM, Rating: 2
I find it strange that software is never mentioned in this type of article. Yeah, yeah, I know it's more fun to brag about silicon feeds and speeds than auto-parallelizing compilers.

But what is 20 petaflops worth if the inner code loops are sloppy and waste most of the crunch power?
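To make that concrete, here is a minimal C sketch (my example, not the commenter's, and not Sequoia-specific): both functions do the same arithmetic, but the "sloppy" loop strides across a row-major array and thrashes the cache, while the tight loop walks memory contiguously.

#include <stdio.h>

#define N 2048
static double a[N][N];   /* ~32 MB, row-major as C lays it out */

/* "Sloppy": column-major walk over a row-major array -- every access
 * jumps N doubles ahead, so most loads miss the cache. */
static double sum_sloppy(void)
{
    double s = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += a[i][j];
    return s;
}

/* Tight: visit elements in the order they sit in memory (unit stride). */
static double sum_tight(void)
{
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = 1.0;

    /* Same answer either way; the difference is how hard the memory system works. */
    printf("%f %f\n", sum_sloppy(), sum_tight());
    return 0;
}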




RE: throwin' code
By Fritzr on 2/3/2009 9:18:43 PM, Rating: 3
More computing power is meaningful ... the faster the computer the less optimizing needs to be done by the program designer ... just look at desktop computers.

Today's bare operating systems cannot run at an acceptable speed (in many cases not at all) on early Intel & Motorola CPUs.

There is so much bloat consuming processing power that it is ridiculous. A 64-bit version of one of the multitasking MS-DOS clones, with GEM or GEOS for a GUI, would be a speed demon on a modern entry-level computer that can barely run Windows. More useful would be an early Linux recoded for 64-bit addressing, but keeping the tight, fast coding of the original designed-for-i386 Linux. Even in the Linux world, bloat is stealing CPU cycles.

It's much easier to use the compiler's libraries than to write custom high-speed code. When you do this for small, simple functions, you are trading ease of coding for speed. For complex functions, where the library hides a lot of machine-specific code variations, you need to look at the library source to make sure your binary isn't carrying unused code your program will never call. This is the weakness of generic .dll files.
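As a small, illustrative example of that trade-off (mine, not the commenter's): the generic library routine below handles every exponent and edge case, while the hand-written version does exactly one job.

/* build with: cc cube.c -lm */
#include <math.h>
#include <stdio.h>

/* Convenient: one call, but pow() carries argument checks and code paths
 * this caller never needs. */
static double cube_generic(double x)
{
    return pow(x, 3.0);
}

/* Specialized: two multiplies and nothing else. */
static double cube_custom(double x)
{
    return x * x * x;
}

int main(void)
{
    printf("%f %f\n", cube_generic(2.0), cube_custom(2.0));
    return 0;
}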

The science mainframes have always used raw computing power to overcome slow executables. FORTRAN was developed to make it easy to write a science program, not to generate the fastest executables. It is still used today because so many legacy applications remain useful, and because it lets researchers who already know FORTRAN keep writing code without taking time off to learn a new language.

A really good programmer can take a few extra weeks or months to go through the flowcharts and final code and find places where a rewrite will accelerate the program. Researchers will instead let the program run a bit slower while they get other work done. Rather than spending time up front to save run time later, they buy time on a faster machine and speed up their code the lazy way :P


"Vista runs on Atom ... It's just no one uses it". -- Intel CEO Paul Otellini










