


Fermi supercomputer will be ten times more powerful than today's fastest supercomputer

NVIDIA has been at the forefront of the push to move high-performance computing from CPUs to GPUs in scientific and other research fields. As it turns out, the GPU is a very effective tool for calculations historically run on the CPU.
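To make the CPU-to-GPU shift concrete, here is a minimal, hypothetical CUDA sketch (not drawn from the article or from ORNL's codes): a vector addition in which each GPU thread computes one output element. This is the data-parallel pattern GPUs accelerate; the same loop on a CPU would run serially or across a handful of cores, while the GPU launches thousands of lightweight threads at once.

// Minimal sketch of a CUDA vector-add: each GPU thread computes one element,
// the kind of data-parallel calculation that maps well from CPU to GPU.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];                  // one element per thread
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host data
    float *ha = new float[n], *hb = new float[n], *hc = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers and host-to-device copies
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements
    int threads = 256, blocks = (n + threads - 1) / threads;
    add<<<blocks, threads>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);  // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}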

NVIDIA announced its new Fermi architecture at its recent GPU Technology Conference. The new architecture was designed from the ground up to enable a new level of supercomputing using GPUs rather than CPUs. At the conference, Jeff Nichols, ORNL's associate lab director for Computing and Computational Sciences at Oak Ridge National Laboratory (ORNL), announced that ORNL would build a next-generation supercomputer using the Fermi architecture.

The new supercomputer is expected to be ten times faster than today's fastest supercomputer. Nichols said that Fermi would enable substantial scientific breakthroughs that would have been impossible without the technology.

Nichols said, "This would be the first co-processing architecture that Oak Ridge has deployed for open science, and we are extremely excited about the opportunities it creates to solve huge scientific challenges. With the help of NVIDIA technology, Oak Ridge proposes to create a computing platform that will deliver exascale computing within ten years."

ORNL also announced at the conference that it would create a Hybrid Multicore Consortium with the goal of working with developers of major scientific codes to prepare the applications for the next generation of supercomputers using GPUs.

“The first two generations of the CUDA GPU architecture enabled NVIDIA to make real in-roads into the scientific computing space, delivering dramatic performance increases across a broad spectrum of applications,” said Bill Dally, chief scientist at NVIDIA. “The ‘Fermi’ architecture is a true engine of science and with the support of national research facilities such as ORNL, the possibilities are endless.”



Comments



RE: A glimpse of the future?
By Yojimbo on 10/2/2009 11:21:20 AM , Rating: 2
quote:
Their research into this field is probably primarily based on their fear of Larrabee; however, it seems like it's Intel/AMD who should be more afraid of NVIDIA as it eats into their high-end CPU sales from government.


Their research into the area has nothing to do with Larrabee. It started years ago, before Intel was talking about Larrabee. I remember seeing a paper by NVIDIA and some university about GPU computing at least 5 years ago. If anything, Larrabee is a response to Intel's fears about NVIDIA's research in this area.


RE: A glimpse of the future?
By F41TH00 on 10/2/2009 1:24:58 PM , Rating: 2
You are correct. NVIDIA has long researched the area Intel now targets with Larrabee, starting approximately 5-7 years ago. Its effort is known as CUDA.


RE: A glimpse of the future?
By habibo on 10/4/2009 1:42:01 AM , Rating: 2
CUDA was made public only 3 years ago - but Larrabee is most certainly a response to CUDA and not the other way around.


"Game reviewers fought each other to write the most glowing coverage possible for the powerhouse Sony, MS systems. Reviewers flipped coins to see who would review the Nintendo Wii. The losers got stuck with the job." -- Andy Marken

Related Articles













botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki