



China is now home to the world's most powerful supercomputer, the Tianhe-1A, which has knocked America's Jaguar supercomputer out of first place.  (Source: NVIDIA)

The supercomputer uses 7,168 NVIDIA Tesla M2050 GPUs.  (Source: NVIDIA)
System delivers 2.507 petaflops of computing power, draws 4.04 MW

NVIDIA has plenty to worry about in the consumer segment as it finds itself yet again a generation behind AMD's latest graphics cards.  However, the company may simply be ceding consumer market share as it focuses instead on commercial GPU computing sales.

The graphics processor maker revealed an incredible new supercomputer today at HPC 2010 China.  The machine is built on NVIDIA GPUs and CUDA, the company's C-based technology for writing parallel computing code that runs on the GPU.  The new supercomputer, named Tianhe-1A, was developed by the National University of Defense Technology (NUDT), is housed at the National Supercomputer Center in Tianjin, China, and is fully operational.
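To give a flavor of the programming model (a minimal sketch for illustration only, not code from Tianhe-1A), a CUDA kernel is a C function that thousands of GPU threads run at once, each thread handling a single element of the data:

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Illustrative CUDA kernel: each GPU thread updates one element of
// y = a*x + y, so the serial loop over n becomes n parallel threads.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host-side data.
    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device-side copies.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);   // expect 4.000000

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}

Real supercomputing workloads are far more involved, but the pattern is the same: copy data to the GPU, launch massively parallel kernels, and copy results back.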

With a total computing power of 2.507 petaflops, as measured by the LINPACK benchmark, which solves a dense system of linear equations, China's new supercomputer is the most powerful in the world.
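For context on where that figure comes from, the High-Performance LINPACK benchmark times the solution of a dense n-by-n system Ax = b and credits a standard operation count; the sketch below shows the scoring arithmetic with made-up inputs (the problem size and runtime are not Tianhe-1A's actual run parameters):

#include <cstdio>

// LINPACK/HPL scoring sketch: solving a dense n-by-n system Ax = b is
// credited with (2/3)n^3 + 2n^2 floating-point operations, divided by
// the measured wall-clock time.
double hpl_gflops(double n, double seconds)
{
    double ops = (2.0 / 3.0) * n * n * n + 2.0 * n * n;
    return ops / seconds / 1.0e9;
}

int main()
{
    // Hypothetical problem size and runtime, for illustration only.
    printf("n = 100,000 in 300 s  ->  %.1f GFLOPS\n", hpl_gflops(1.0e5, 300.0));
    return 0;
}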

And NVIDIA's real bragging rights come when power consumption is discussed.  By using GPUs rather than CPUs alone to drive its calculations, the installation cuts its power footprint from an estimated 12 MW to 4.04 MW, saving enough electricity to power roughly 5,000 homes for a year.
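As a rough sanity check of that claim (the per-home consumption implied here is an assumption, not an NVIDIA figure), about 8 MW of avoided draw running around the clock works out to roughly 14 MWh per year for each of 5,000 homes, which is in the ballpark of a typical household's annual electricity use:

#include <cstdio>

// Back-of-the-envelope check of the "5,000 homes" figure, assuming the
// article's 12 MW and 4.04 MW numbers and year-round operation.
int main()
{
    double saved_mw   = 12.0 - 4.04;              // ~7.96 MW avoided
    double hours_year = 24.0 * 365.0;             // 8,760 hours
    double mwh_year   = saved_mw * hours_year;    // ~69,700 MWh per year
    double homes      = 5000.0;

    printf("%.0f MWh saved per year, about %.1f MWh per home across %.0f homes\n",
           mwh_year, mwh_year / homes, homes);
    return 0;
}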

Guangming Liu, chief of the National Supercomputer Center in Tianjin, comments, "The performance and efficiency of Tianhe-1A was simply not possible without GPUs.  The scientific research that is now possible with a system of this scale is almost without limits; we could not be more pleased with the results."

The supercomputer is composed of 7,168 NVIDIA Tesla M2050 GPUs and 14,336 CPUs.  A CPU-only system would need roughly 50,000 CPUs to match the combined CPU-and-GPU performance, and it would occupy twice the floor space.
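A quick check of that figure (illustrative arithmetic only): spreading 2.507 petaflops across 50,000 CPUs implies on the order of 50 GFLOPS sustained per chip, a plausible number for a multi-core server processor of that era.

#include <cstdio>

// Implied per-CPU throughput if 50,000 CPUs had to deliver 2.507 petaflops.
int main()
{
    double total_flops = 2.507e15;   // 2.507 petaflops (LINPACK)
    double cpus        = 50000.0;
    printf("~%.1f GFLOPS per CPU\n", total_flops / cpus / 1.0e9);
    return 0;
}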

China is offering open access to computing time on the supercomputer, but it's unclear whether Chinese researchers will be given preference over foreign ones.  

With the addition of the new machine, China now has two of the three most powerful supercomputers in the world.  The third most powerful -- previously in second place -- is the Nebulae supercomputer in Shenzhen, which also uses NVIDIA Tesla GPUs and which posted 1.271 petaflops on LINPACK.

Tianhe-1A kicks an American computer out of the top spot.  The Jaguar supercomputer, built by Cray at Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee, is now only the world's second most powerful computer.  That machine, powered by its more than 200,000 Opteron cores, posted 1.75 petaflops on LINPACK.

Note: The listed computing marks (in petaflops) were determined with LINPACK, which is recognized as a fair means of measuring a supercomputer's total computing power.  This mark is different, though, from the theoretical computing peak.  For example, Nebulae has a higher theoretical peak than Jaguar, but in testing Jaguar comes out on top.



Comments

By 91TTZ on 10/28/2010 2:05:02 PM , Rating: 3
The article puts too much emphasis on the 7,168 NVidia GPUs, but lists no details about the 14,336 Intel Xeon CPUs.




By Omega215D on 10/28/2010 6:50:45 PM , Rating: 2
All manufactured at Foxconn!


By Omega215D on 10/28/2010 6:51:32 PM , Rating: 2
err... other than the intel chips.


By sukmidik on 10/29/2010 6:33:24 PM , Rating: 3
The author is obviously trying to put a bullseye on NVIDIA for the public. Intel provided thousands of CPUs to build this supercomputer and take America's supercomputer crown, yet there is no mention of Intel anywhere in the article. NVIDIA is a successful hi-tech company co-founded by an Asian-American that has added to national GDP and provided jobs for thousands.


By DaSpikester on 11/1/2010 7:50:01 PM , Rating: 2
That's because the Xeons don't contribute significantly to the computational power of the machine: they play a support role for the M2050 Tesla GPUs, which do the heavy lifting.

A single M2050 has 448 cores and over 500 gigaflops of double-precision floating point performance - that's so much more than a Xeon that it doesn't make sense to slow the system down by trying to use the Xeons for computation. Instead, the Xeons are used in a support role for things like data access and memory transfers to keep the GPUs fed.
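A rough sketch of that feeding pattern (illustrative CUDA only, not anything from Tianhe-1A's actual software): the host queues asynchronous copies and kernels on several streams so the transfer of one chunk overlaps with computation on another.

#include <cstdio>
#include <cuda_runtime.h>

// Stand-in for real computation: double each value and add one.
__global__ void work(float *buf, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] = buf[i] * 2.0f + 1.0f;
}

int main()
{
    const int n = 1 << 22, chunks = 4, chunk = n / chunks;
    size_t bytes = chunk * sizeof(float);

    float *host, *dev;
    cudaMallocHost(&host, n * sizeof(float));   // pinned memory so copies can run asynchronously
    cudaMalloc(&dev, n * sizeof(float));
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    cudaStream_t s[chunks];
    for (int c = 0; c < chunks; ++c) cudaStreamCreate(&s[c]);

    // Each chunk's copy-in, kernel, and copy-out are queued on its own
    // stream, so transfers for one chunk overlap with compute on another.
    for (int c = 0; c < chunks; ++c) {
        float *h = host + c * chunk, *d = dev + c * chunk;
        cudaMemcpyAsync(d, h, bytes, cudaMemcpyHostToDevice, s[c]);
        work<<<(chunk + 255) / 256, 256, 0, s[c]>>>(d, chunk);
        cudaMemcpyAsync(h, d, bytes, cudaMemcpyDeviceToHost, s[c]);
    }
    cudaDeviceSynchronize();

    printf("host[0] = %f\n", host[0]);   // expect 3.000000
    for (int c = 0; c < chunks; ++c) cudaStreamDestroy(s[c]);
    cudaFreeHost(host);
    cudaFree(dev);
    return 0;
}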

The key to how well this Chinese machine will actually perform will be how well a given application can balance the relatively less parallel data access tasks serviced by the Xeons with the massively parallel computation done in the GPUs. If the Xeons can keep the GPUs saturated it will scream. If not, well, welcome to Amdahl's Law...
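For anyone who wants to put numbers on that, here's Amdahl's Law with made-up speedup figures (purely illustrative):

#include <cstdio>

// Amdahl's Law: if a fraction p of the work is accelerated by a factor s
// and the rest runs at the original speed, the overall speedup is
// 1 / ((1 - p) + p / s).
double amdahl(double p, double s)
{
    return 1.0 / ((1.0 - p) + p / s);
}

int main()
{
    // Assume the GPUs give a 20x speedup on the parallelizable portion.
    printf("90%% parallel: %.1fx overall\n", amdahl(0.90, 20.0));   // ~6.9x
    printf("99%% parallel: %.1fx overall\n", amdahl(0.99, 20.0));   // ~16.8x
    return 0;
}

Even with very fast GPUs, the serial and data-movement work handled by the CPUs caps the overall gain, which is why keeping the GPUs saturated matters so much.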