
Intel says CUDA will be nothing but a footnote in computer history

Intel and NVIDIA compete on many fronts, most notably in chipset manufacturing. The two companies also compete in integrated graphics, a market that Intel’s integrated graphics chips lead.

NVIDIA started competing with Intel in the data processing arena with its CUDA programming model. Intel’s Pat Gelsinger, co-general manager of Intel’s Digital Enterprise Group, told Custom PC that CUDA would be nothing more than an interesting footnote in the annals of computing history.

According to Gelsinger, programmers simply don’t have enough time to learn how to program for new architectures like CUDA. Gelsinger told Custom PC, “The problem that we’ve seen over and over and over again in the computing industry is that there’s a cool new idea, and it promises a 10x or 20x performance improvements, but you’ve just got to go through this little orifice called a new programming model. Those orifices have always been insurmountable as long as the general purpose computing models evolve into the future.”
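The “new programming model” Gelsinger refers to is CUDA’s data-parallel style, in which the programmer writes a kernel that runs once per thread index rather than a conventional serial loop. A minimal sketch of that conceptual shift (plain Python standing in for real CUDA; the function names here are illustrative, not part of any actual API):

```python
# Serial model: one loop, familiar to any x86 programmer.
def vector_add_serial(a, b):
    return [x + y for x, y in zip(a, b)]

# CUDA-style SPMD model: the "kernel" is written for ONE element;
# the runtime launches it once per thread index.
def vector_add_kernel(i, a, b, out):
    out[i] = a[i] + b[i]

def launch(kernel, n, *args):
    for i in range(n):          # on a GPU, these iterations run in parallel
        kernel(i, *args)

a, b = [1, 2, 3], [10, 20, 30]
out = [0] * 3
launch(vector_add_kernel, 3, a, b, out)
print(out)  # prints [11, 22, 33]
```

The work is the same; what changes is how the programmer has to think about it, which is exactly the retraining cost Gelsinger is calling an “orifice.”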

According to Gelsinger, the Cell architecture used in Sony’s PlayStation 3 illustrates the point: it promised huge performance gains over conventional architectures, yet it still isn’t widely supported by developers.

Intel’s Larrabee graphics chip will be based entirely on Intel Architecture x86 cores, says Gelsinger, so that developers can program for the graphics processor without having to learn a new language. Larrabee will have full support for APIs such as DirectX and OpenGL.

NVIDIA’s CUDA architecture is what makes it possible to run complex physics calculations on the GPU, enabling PhysX on the GPU rather than the CPU.


RE: pwnd
By mfed3 on 7/2/2008 1:29:43 PM , Rating: 0
Agreed. I never understood the point of changing architectures and creating exclusive models when x86 covers 99% of all desktop-oriented code out there. Instead of manufacturing these symmetric CPUs with 4-8 of the exact same core, they should change up the architecture and add more floating-point/math units while keeping it x86.

RE: pwnd
By Aberforth on 7/2/2008 1:45:17 PM , Rating: 2
Well, Larrabee can do over a trillion FLOPS.

RE: pwnd
By nosfe on 7/2/2008 1:49:35 PM , Rating: 3
i think that you mean gazillions of bazillions/second, right?

RE: pwnd
By Aberforth on 7/2/2008 1:51:12 PM , Rating: 2
that's right. They said so last year at IDF.

RE: pwnd
By comc49 on 7/2/2008 1:54:16 PM , Rating: 2
Um, the 4850 can do a teraflop, and if Larrabee's power consumption is as high as the rumors say, it should have more FLOPS.
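The teraflop figure for the Radeon HD 4850 follows directly from its published specs: 800 stream processors at a 625MHz core clock, each able to retire a multiply-add (2 FLOPs) per cycle. A quick sanity check of the arithmetic (theoretical peak only, not sustained throughput):

```python
# Theoretical peak FLOPS for the Radeon HD 4850 (RV770).
stream_processors = 800     # RV770 shader count
clock_hz = 625e6            # 625 MHz core clock
flops_per_cycle = 2         # one multiply + one add per processor per cycle

peak_flops = stream_processors * clock_hz * flops_per_cycle
print(peak_flops / 1e12)    # prints 1.0 (one teraflop, theoretical)
```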

RE: pwnd
By psychobriggsy on 7/2/2008 4:42:01 PM , Rating: 2
Larrabee is only going to be remembered for one type of FLOP, and it has nothing to do with floating point mathematics.

4850 on 55nm does a teraflop today, for $200. OpenCL will become the standard language for programming these devices.

Larrabee is not here today, is only projected to do a teraflop, and is anchored to an old ISA that is simply not relevant when writing NEW code for parallel systems. Programmers aren't writing x86 directly these days; OSes and apps are easily portable between architectures, if the will is there.

I don't think it is a stretch to assume that by the time Larrabee is available with working drivers (another failing of Intel when it comes to graphics) that 2 teraflops will be standard on AMD and nVidia cards for $200. Will Intel sell Larrabee based cards for $100? They have the production capacity, but I don't know if they'd sell, especially if there are early driver issues.

RE: pwnd
By Elementalism on 7/2/2008 2:48:02 PM , Rating: 2
Special purposes. Which is how we arrived at a GPU in the first place. Before 1996 a true 3D accelerator in the consumer space for video games was non-existent. All 3D was done in software via the CPU. The result was slow, slow, slow. During the switchover from software to hardware, all one had to do was pop in a Voodoo with a game that supported Glide and you got better visuals and about 10x the performance.

x86 has its limitations. It's not terribly good at parallel tasks the way a GPU is. The Athlon had 9 execution units but could barely crank out 1 instruction per cycle.

RE: pwnd
By Clauzii on 7/2/2008 3:52:07 PM , Rating: 2
The Athlon was theoretically capable of 3 instructions per cycle; in practice about 2 was achieved.

RE: pwnd
By encia on 7/2/2008 11:25:10 PM , Rating: 2
You can slowly "run" DX8 and DX9b titles on a modern x86 CPU (e.g. an Intel Core 2 Quad @ 3GHz) via SwiftShader 2.0.
