
Intel says CUDA will be nothing but a footnote in computer history

Intel and NVIDIA compete on many fronts. The most visible is chipset manufacturing; the two companies also compete in the integrated graphics market, which Intel's integrated graphics chips lead.

NVIDIA has more recently begun competing with Intel in the data processing arena with its CUDA programming model. Intel's Pat Gelsinger, co-general manager of Intel's Digital Enterprise Group, told Custom PC that CUDA would be nothing more than an interesting footnote in the annals of computing history.

According to Gelsinger, programmers simply don't have enough time to learn how to program for new architectures like CUDA. Gelsinger told Custom PC, "The problem that we've seen over and over and over again in the computing industry is that there's a cool new idea, and it promises a 10x or 20x performance improvement, but you've just got to go through this little orifice called a new programming model. Those orifices have always been insurmountable as long as the general purpose computing models evolve into the future."

The Cell architecture (developed by Sony, Toshiba, and IBM) illustrates the point, according to Gelsinger. Cell promised huge performance gains over conventional architectures, but it still isn't widely supported by developers.

Intel's Larrabee graphics chip will be based entirely on Intel Architecture x86 cores, says Gelsinger, so that developers can program the graphics processor without having to learn a new language. Larrabee will have full support for APIs like DirectX and OpenGL.

NVIDIA's CUDA architecture is what makes it possible to run complex physics calculations on the GPU, enabling PhysX to run on the GPU rather than the CPU.
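For a sense of the programming model under debate, here is a minimal CUDA sketch of a physics-style update running on the GPU. It is illustrative only; the kernel and all names in it are hypothetical, not NVIDIA's actual PhysX code.

// Minimal CUDA sketch (illustrative; not PhysX). One GPU thread
// advances one particle position with a simple Euler step.
#include <cuda_runtime.h>

__global__ void updatePositions(float *pos, const float *vel, float dt, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        pos[i] += vel[i] * dt;                      // per-particle update
}

int main(void)
{
    const int n = 1 << 20;                 // 1M particles (hypothetical size)
    const size_t bytes = n * sizeof(float);
    float *pos, *vel;
    cudaMalloc(&pos, bytes);               // allocate on the GPU
    cudaMalloc(&vel, bytes);
    cudaMemset(pos, 0, bytes);             // zero-fill for the demo
    cudaMemset(vel, 0, bytes);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    updatePositions<<<blocks, threads>>>(pos, vel, 0.016f, n);  // kernel launch
    cudaDeviceSynchronize();               // wait for the GPU to finish

    cudaFree(pos);
    cudaFree(vel);
    return 0;
}

The <<<blocks, threads>>> launch syntax and the explicit device memory management are precisely the kind of new constructs Gelsinger is calling a "new programming model."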



Comments



Intel is wrong as usual
By hanishkvc on 7/2/2008 1:51:33 PM , Rating: 4
Hi,

First
---------
First off, for general purpose computing, Intel was wrong when it originally abandoned x86 and went with IA-64 (Itanium) for its 64-bit architecture.

However, with respect to special/compute-intensive applications, Intel is again wrong when it proposes x86 instead of efficient vector-capable SPs in these arrays of processors.

Next
--------

The complexity in compute-intensive applications has more to do with how to efficiently utilize the processing power available in the array of cores (which includes processing as well as data access and their grouping). This issue remains the same whether the processing cores are x86 or something else, like NVIDIA or ATI SPs.

And even on x86 cores, if SIMD units are present they have to be triggered explicitly:
a) in assembly, or
b) using compiler intrinsics, or
c) through modified/updated languages or compilers

And the same applies to NVIDIA/ATI SPs (a rough sketch of option (b) follows).
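As a sketch of option (b), x86 SIMD via SSE compiler intrinsics looks roughly like this (function and array names are purely illustrative):

// Sketch of option (b): triggering x86 SIMD through SSE intrinsics.
// Adds two float arrays four lanes at a time.
#include <xmmintrin.h>

void add_arrays(float *dst, const float *a, const float *b, int n)
{
    int i = 0;
    for (; i + 4 <= n; i += 4) {                    // 4 floats per SSE register
        __m128 va = _mm_loadu_ps(a + i);            // unaligned 4-float load
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(dst + i, _mm_add_ps(va, vb)); // 4 adds in one instruction
    }
    for (; i < n; ++i)                              // scalar tail for leftovers
        dst[i] = a[i] + b[i];
}

Whether it is intrinsics here or a kernel launch in CUDA, the programmer still has to restructure the code around the hardware, which is the point above.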

Conclusion
--------------
What the Intel guy is spreading is FUD that deserves a kick in the butt.




RE: Intel is wrong as usual
By jjunos on 7/2/2008 3:16:43 PM , Rating: 2
quote:
First off, for general purpose computing, Intel was wrong when it originally abandoned x86 and went with IA-64 (Itanium) for its 64-bit architecture.


Doesn't that actually lend credence to what the Intel guy said? Intel tried the specialized programming model... and failed?

quote:
The complexity in compute-intensive applications has more to do with how to efficiently utilize the processing power available in the array of cores (which includes processing as well as data access and their grouping). This issue remains the same whether the processing cores are x86 or something else, like NVIDIA or ATI SPs.


True, but if I had to put money on who would do it better (better efficiency, better tuning, etc.), wouldn't you want to go with the programming model that's been around for a significantly longer time than CUDA, and whose developers are accustomed to the model itself?

Not saying Intel is entirely right; CUDA has shown itself to be pretty adept in certain areas...


RE: Intel is wrong as usual
By soydeedo on 7/2/2008 3:40:46 PM , Rating: 2
quote:
quote:
First off, for general purpose computing, Intel was wrong when it originally abandoned x86 and went with IA-64 (Itanium) for its 64-bit architecture.
Doesn't that actually lend credence to what the Intel guy said? Intel tried the specialized programming model... and failed?

No, that was his point: it was the wrong direction for general computing, but for specialized purposes you want to squeeze every last bit of performance out of the architecture. Granted, if the performance losses from making it a bit more like an established model are minimal, the trade-off may be worth it.

On that note, isn't CUDA a bit like a handicapped version of C? So it's not all that much of a departure then, eh?


RE: Intel is wrong as usual
By allajunaki on 7/3/2008 12:58:38 AM , Rating: 2
Well,
What's ironic? Intel still sells Itanium 2, and they still have a development team working on it.
The EPIC architecture (Itanium 2) was designed with the same intentions as IBM's Cell, or NVIDIA and ATI with their GPU-based approach: attempting a revolution (instead of an evolution).

And if you read about Itanium, Intel is still convinced that Itanium has a future.

And what makes it funny (for me at least) is that I code for Itanium-based servers... :)


By decapitator666 on 7/6/2008 7:01:49 AM , Rating: 2
In my eyes, Itanium is to computing what Duke Nukem Forever is to games: the continuous promise of the future. Over 10 years of development, constant improvements and reworks... ;-)


RE: Intel is wrong as usual
By ET on 7/3/2008 1:49:41 AM , Rating: 2
I agree and disagree.

I agree that the paradigm shift is needed anyway (and is already happening in the multi-core world).

On the other hand, there's also the platform shift, and here Intel has an advantage. CPUs already have SIMD (in the form of SSE), and if Intel sticks to the same x86 (or x64) machine code, it would be possible to take advantage of these processors immediately, without having to learn anything new. That would likely mean making poor use of them, but it would still be more than what's possible on a completely different architecture.

I think that's where Intel's advantage lies. Tools are extremely important for development, and if Larrabee allows using standard tools, that would be an advantage for Intel.
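As a rough illustration of that point (names are hypothetical), a plain scalar loop like the following builds unchanged with any existing x86 toolchain, and a vectorizing compiler may map it to SSE on its own:

// Plain C loop: compiles unchanged with any x86 compiler and tools.
// A vectorizing compiler (e.g. gcc -O3 targeting SSE2) may emit SIMD
// instructions for it automatically; hand tuning usually does better.
void scale(float *x, float s, int n)
{
    for (int i = 0; i < n; ++i)
        x[i] *= s;    // candidate for automatic SIMD vectorization
}

That is the "bad but immediate" use of the hardware: no new language, no new tools, just recompile.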


"Mac OS X is like living in a farmhouse in the country with no locks, and Windows is living in a house with bars on the windows in the bad part of town." -- Charlie Miller













