
Fermi supercomputer will be ten times more powerful than today's fastest supercomputer

NVIDIA has been at the forefront of the push to move high-performance computing from CPUs to GPUs in scientific research and other fields. As it turns out, the GPU is a very effective tool for running calculations historically handled by the CPU.

NVIDIA announced its new Fermi architecture at its GPU Technology Conference recently. The new architecture was designed from the ground up to enable a new level of supercomputing using GPUs rather than CPUs. At the conference, Oak Ridge National Laboratory (ORNL) associate lab director for Computing and Computational Sciences, Jeff Nichols, announced that ORNL would be building a next generation supercomputer using the Fermi architecture.

The new supercomputer is expected to be ten times faster than today's fastest supercomputer. Nichols said that Fermi would enable substantial scientific breakthroughs that would have been impossible without the technology.

Nichols said, "This would be the first co-processing architecture that Oak Ridge has deployed for open science, and we are extremely excited about the opportunities it creates to solve huge scientific challenges. With the help of NVIDIA technology, Oak Ridge proposes to create a computing platform that will deliver exascale computing within ten years."

ORNL also announced at the conference that it would create a Hybrid Multicore Consortium with the goal of working with developers of major scientific codes to prepare the applications for the next generation of supercomputers using GPUs.

“The first two generations of the CUDA GPU architecture enabled NVIDIA to make real in-roads into the scientific computing space, delivering dramatic performance increases across a broad spectrum of applications,” said Bill Dally, chief scientist at NVIDIA. “The ‘Fermi’ architecture is a true engine of science and with the support of national research facilities such as ORNL, the possibilities are endless.”

Comments


A glimpse of the future?
By captchaos2 on 10/2/2009 10:46:02 AM , Rating: 2
This is good news for NVIDIA. When can I expect to buy an NVIDIA processor for an NVIDIA mobo with a triple-SLI vid card setup?

RE: A glimpse of the future?
By Jedi2155 on 10/2/2009 10:53:56 AM , Rating: 3
I don't believe NVIDIA will venture into the consumer-level desktop CPU space anytime soon, as the GPU is still far too different to run the vast number of x86 applications available.

Their research into this field is probably driven primarily by their fear of Larrabee; however, it seems it's Intel/AMD who should be more afraid of NVIDIA as it eats into their high-end CPU sales to government.


RE: A glimpse of the future?
By Yojimbo on 10/2/2009 11:21:20 AM , Rating: 2
Their research into this field is probably driven primarily by their fear of Larrabee; however, it seems it's Intel/AMD who should be more afraid of NVIDIA as it eats into their high-end CPU sales to government.

Their research into the area has nothing to do with Larrabee. It started years ago, before Intel was talking about Larrabee. I remember seeing a paper by NVIDIA and some university about GPU computing at least 5 years ago. If anything, Larrabee is Intel's response to NVIDIA's research in this area.

RE: A glimpse of the future?
By F41TH00 on 10/2/2009 1:24:58 PM , Rating: 2
You are correct. NVIDIA has long been researching what Intel now calls Larrabee, starting approximately 5-7 years ago. Its effort is known as CUDA.

RE: A glimpse of the future?
By habibo on 10/4/2009 1:42:01 AM , Rating: 2
CUDA was made public only 3 years ago, but Larrabee is most certainly a response to CUDA and not the other way around.

RE: A glimpse of the future?
By nafhan on 10/2/2009 11:53:00 AM , Rating: 2
If the Fermi architecture is Turing complete, they could run an x86 emulator on it... :)

Seriously though, x86 isn't the only game in town. My guess is they'll continue to build on their relationship with ARM by improving and evolving Tegra. Keep in mind that Google and Apple both have OSes that run on ARM.

RE: A glimpse of the future?
By MatthiasF on 10/2/2009 1:48:22 PM , Rating: 2
Don't forget about MIPS! They still have some very nice products that could be used in these situations.

RE: A glimpse of the future?
By surt on 10/4/2009 1:32:29 PM , Rating: 2
An x86 emulator on Fermi would be painfully slow as a CPU, because unlike a GPU, a CPU often wants to do something other than multiply floating-point numbers.
