
Intel says CUDA will be nothing but a footnote in computer history

Intel and NVIDIA compete in many different ways. The most notable is chipset manufacturing, but the two companies also compete in the integrated graphics market, where Intel’s chips lead.

NVIDIA began competing with Intel in the data-processing arena with its CUDA programming model. Pat Gelsinger, co-general manager of Intel’s Digital Enterprise Group, told Custom PC that CUDA would be nothing more than an interesting footnote in the annals of computing history.

According to Gelsinger, programmers simply don’t have enough time to learn how to program for new architectures like CUDA. As he put it, “The problem that we’ve seen over and over and over again in the computing industry is that there’s a cool new idea, and it promises a 10x or 20x performance improvement, but you’ve just got to go through this little orifice called a new programming model. Those orifices have always been insurmountable as long as the general purpose computing models evolve into the future.”
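To see what that "little orifice" looks like in practice, here is a minimal, hypothetical CUDA sketch (the function and variable names are illustrative, not taken from any shipping code) that adds two arrays, the sort of job a C programmer would otherwise write as a single loop. The kernel, the <<<blocks, threads>>> launch syntax, and the explicit copies between host and GPU memory are exactly the kind of new concepts Gelsinger is talking about.

    #include <cuda_runtime.h>
    #include <stdlib.h>

    /* In plain C this is one loop: for (i = 0; i < n; i++) c[i] = a[i] + b[i];
       CUDA expresses the same work as a kernel executed by thousands of threads. */
    __global__ void vec_add(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  /* each thread handles one element */
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main(void)
    {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        /* ordinary host-side arrays */
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { ha[i] = 1.0f; hb[i] = 2.0f; }

        /* separate buffers in GPU memory, filled and drained with explicit copies */
        float *da, *db, *dc;
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        /* launch enough 256-thread blocks to cover all n elements */
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vec_add<<<blocks, threads>>>(da, db, dc, n);
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

None of this is difficult in isolation, but it is a different mental model from writing a loop, and that difference is the adoption barrier Gelsinger is betting against.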

Sony’s Cell architecture illustrates the point, according to Gelsinger: it promised huge performance gains over conventional architectures, but it still isn’t widely supported by developers.

Intel’s Larrabee graphics chip will be based entirely on Intel Architecture x86 cores, says Gelsinger, so that developers can program the graphics processor without having to learn a new language. Larrabee will also have full support for APIs such as DirectX and OpenGL.

NVIDIA’s CUDA architecture is what makes it possible to run complex physics calculations on the GPU, allowing PhysX to execute on the graphics card rather than the CPU.
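For illustration only, a GPU physics step boils down to a kernel that advances every particle in parallel. The sketch below is not PhysX source code and the names are made up; it only shows the shape of the per-particle work that CUDA lets a physics engine offload to the GPU.

    /* Illustrative only, not PhysX source: one thread advances one particle per time step. */
    __global__ void integrate(float3 *pos, float3 *vel, int n, float dt)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        vel[i].y -= 9.81f * dt;       /* apply gravity to this particle's velocity */
        pos[i].x += vel[i].x * dt;    /* advance its position independently of all others */
        pos[i].y += vel[i].y * dt;
        pos[i].z += vel[i].z * dt;
    }

    /* Launched over all particles, e.g. integrate<<<(n + 255) / 256, 256>>>(pos, vel, n, dt); */

Because each thread owns exactly one particle, the GPU can churn through very large particle counts in a single time step.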



Comments



RE: Intel Is Scared
By DingieM on 7/3/2008 4:37:17 AM, Rating: -1
Your post is nVidia PR.

GT200 is a giant: too big, too hot, too expensive and relatively underperforming.

The latest news for nVidia is bad: the GTX260 and especially the GTX280 aren't selling well, while the great HD4850 is selling like hot cakes. The even greater HD4870 will be very popular too.
This new ATI hardware is more powerful and feature-rich than anyone thought it would be. The word on the street is that nVidia is scared!

Intel is right on this point: adapting/enhancing x86 with fast SIMD units on a highly parallel architecture will benefit us all. General, dynamic computing hardware to solve all kinds of mathematical challenges.
Raytracing will be a much-needed application on this kind of processor, and even ATI already has it on its HD4000 generation: support for a full raytracing pipeline via DX9.

I heard a rumor that Larrabee is already 40% faster than the newest generation from ATI and nVidia...

Intel is not scared, not scared at all. First, it has humongous amounts of resources, and second, it holds the key to grant or revoke licenses. A powerful negotiating position.


"It's okay. The scenarios aren't that clear. But it's good looking. [Steve Jobs] does good design, and [the iPad] is absolutely a good example of that." -- Bill Gates on the Apple iPad

Related Articles
GeForce 8 To Get Software PhysX Engine
February 15, 2008, 10:33 AM













botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki