

NVIDIA President and CEO Jen-Hsun Huang  (Source: eetimes.com)
New NVIDIA/ARM partnership, code-named Project Denver, targets PCs, servers, and supercomputers

NVIDIA, maker of the powerful Tegra 2 dual-core processor, announced yesterday at CES a partnership with ARM -- code-named "Project Denver" -- to build ARM-based CPU cores that could power PCs, servers, and supercomputers.

"With Project Denver, we are designing a high-performing ARM CPU core in combination with our massively parallel GPU cores to create a new class of processor," said Jen-Hsun Huang, president and CEO of NVIDIA, in a press release.

In fact, the project entails an NVIDIA CPU running the ARM instruction set, fully integrated on the same chip as the NVIDIA GPU. In addition, NVIDIA has obtained the rights to develop its own CPU cores based on ARM's future processor architecture. While the Tegra 2 combines two Cortex-A9 cores, NVIDIA has now also licensed ARM's Cortex-A15 for future Tegra models.
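To see why on-die integration matters, consider what a typical GPU-accelerated program spends its time on today. The following is a minimal, generic CUDA sketch (illustration only; it reflects the current discrete-GPU programming model, not anything NVIDIA has announced for Project Denver). The explicit cudaMemcpy transfers over the PCIe bus are exactly the overhead that a CPU and GPU sharing one chip and one memory system could, in principle, eliminate:

    // saxpy.cu -- generic CUDA sketch (build with: nvcc saxpy.cu -o saxpy)
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main()
    {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);
        float *hx = (float *)malloc(bytes);
        float *hy = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

        // On a discrete GPU, inputs must be copied across the PCIe bus...
        float *dx, *dy;
        cudaMalloc((void **)&dx, bytes);
        cudaMalloc((void **)&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

        // ...and results copied back. A CPU integrated on the GPU's die,
        // sharing its memory, could in principle skip both transfers.
        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %f\n", hy[0]); // expect 4.0

        cudaFree(dx); cudaFree(dy);
        free(hx); free(hy);
        return 0;
    }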

"This marks the beginning of the Internet Everywhere era, where every device provides instant access to the Internet, using advanced CPU cores and rich operating systems," Huang said.


Comments



By tastyratz on 1/6/2011 8:56:14 AM , Rating: 0
Because history tells me they don't right now. The ION platform was a step in the right direction, but GPUs don't seem to follow in the footsteps of modern CPUs at all.

This venture could benefit their graphics card business if it gets them investing in power-efficiency research.




By Da W on 1/6/2011 9:05:47 AM , Rating: 3
You must mean NVIDIA.


By LordSojar on 1/6/2011 9:15:42 AM , Rating: 5
Power efficiency? Are you serious? Considering the Tegra 2 is quite possibly the most power-efficient platform on the planet (at this time)... it's safe to say nVidia MIGHT know what they are doing...

I, for one, am extremely excited by this announcement. nVidia has a lot of expertise in developing ground-up architectures. It's safe to assume the product that is the fruit of this project will be at least decent and not a complete failure.

ION literally has nothing to do with this endeavor. Literally... NOTHING. It's a 9400M chip designed to function with Intel's Atom CPU platform. How is a graphics chip coupled with an x86 CPU even worth mentioning in relation to this announcement? The fact that you mentioned AMD makes me think you might have absolutely no clue what you are talking about. I'd like to be wrong in this instance... but I'm afraid I'm not.

nVidia is adjusting to a drastically changing market landscape. This decision is very risky, but it potentially has a huge payoff. To be able to push Intel and AMD out of the HPC and server sector within the space of 10 years would be jaw-droppingly amazing. Do I think they can do it? I think they have the ability; whether they will, I have no idea. It's ambitious, high-risk, and ballsy. It's also very necessary to make this move now, as it has the potential to propel nVidia forward as a company. The next 5 years are going to be VERY interesting.


By omnicronx on 1/6/2011 1:18:43 PM , Rating: 2
Well, to be fair, Nvidia is a traditional core licensee, which basically means they use ARM's designs in their native form.

I can pretty much guarantee that they will not be at the forefront much longer once Qualcomm, Samsung, or Texas Instruments design their own cores around the ARM architecture (i.e., when they release their own Cortex-A9 variants).

Nvidia will certainly have their work cut out for them, and personally I think they will be much more successful in the desktop/mobile space than in the server space (where a media-centric SoC like the Tegra 2 means nothing).


By Taft12 on 1/6/2011 11:15:32 AM , Rating: 2
quote:
GPUs don't seem to follow in the footsteps of modern CPUs at all.


I don't know if you're talking about desktop GPUs or Tegra's market, but modern desktop GPUs throttle down to use VERY little power compared to previous generations.

The 6970 uses about 20W at idle and the GTX 580 ~30W. Do you think the X850 and 6800 Ultra were capable of this??? There's no reason to doubt that Nvidia and AMD will continue to push this down in future generations.
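To put those idle numbers in perspective, here's a quick worked calculation (the electricity price is an assumed figure for illustration, not from the article):

\[ \Delta E = (30\,\mathrm{W} - 20\,\mathrm{W}) \times 8760\,\mathrm{h/yr} = 87.6\,\mathrm{kWh/yr} \]

At an assumed $0.12/kWh, the 10 W idle gap between those two cards works out to roughly $10.50 per year for a machine idling around the clock.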


By tastyratz on 1/6/2011 2:07:08 PM , Rating: 2
Impressive, I was not aware they throttled down that much. I typed AMD instead of NVIDIA by accident, but I retract my previous statement.


By mindless1 on 1/8/2011 12:48:48 AM , Rating: 2
It's a bit apples-to-oranges to call that a progressive improvement. The power-consumption reduction comes mainly from lowering frequency and voltage at idle, something that would have been relatively easy to do years and years ago, and would have been done, had keeping the average power envelope low on their high-end, high-profit parts actually mattered to them.
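For what it's worth, the physics backs this up. CMOS dynamic power scales roughly as

\[ P_{\mathrm{dyn}} \approx \alpha C V^{2} f \]

so dropping an idle card from, say, 1.3 V to 0.9 V while halving the clock (illustrative numbers, not measured values for any specific card) cuts dynamic power to about

\[ \left(\frac{0.9}{1.3}\right)^{2} \times \frac{1}{2} \approx 0.24 \]

of its load figure, with no architectural breakthrough required.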

Remember when several hundred dollar AGP video cards used only 40W peak and that was considered a lot at the time?


"I modded down, down, down, and the flames went higher." -- Sven Olsen
