
Intel says CUDA will be nothing but a footnote in computer history

Intel and NVIDIA compete on several fronts. The most visible is chipset manufacturing, and the two also meet in the integrated graphics market, where Intel's integrated graphics chips lead in market share.

NVIDIA began competing with Intel in the data-processing arena with its CUDA programming model. Intel's Pat Gelsinger, co-general manager of the company's Digital Enterprise Group, told Custom PC that CUDA would be nothing more than an interesting footnote in the annals of computing history.

According to Gelsinger, programmers simply don't have enough time to learn how to program for new architectures like CUDA. Gelsinger told Custom PC, “The problem that we’ve seen over and over and over again in the computing industry is that there’s a cool new idea, and it promises a 10x or 20x performance improvement, but you’ve just got to go through this little orifice called a new programming model. Those orifices have always been insurmountable as long as the general purpose computing models evolve into the future.”

Sony's Cell architecture illustrates the point, according to Gelsinger: Cell promised huge performance gains over conventional architectures, yet it still isn't widely supported by developers.

Intel's Larrabee graphics chip will be based entirely on Intel Architecture x86 cores, says Gelsinger, so that developers can program the graphics processor without having to learn a new language. Larrabee will also fully support existing APIs such as DirectX and OpenGL.

NVIDIA's CUDA architecture is what makes it possible to run complex physics calculations on the GPU, enabling PhysX to execute on the GPU rather than the CPU.
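For context, the "new programming model" Gelsinger objects to looks roughly like the following. This is a minimal, hypothetical CUDA sketch (the kernel and buffer names are illustrative, not taken from NVIDIA's PhysX SDK) that steps a set of particles forward in time on the GPU. It shows the extra concepts a C programmer has to pick up: kernels, thread/block indexing, and explicit host-to-device memory transfers.

#include <cuda_runtime.h>
#include <cstdlib>

// Kernel: each GPU thread integrates one particle (illustrative physics step).
__global__ void stepParticles(float* pos, const float* vel, float dt, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n)
        pos[i] += vel[i] * dt;                       // simple Euler update
}

int main()
{
    const int n = 1 << 20;                  // one million particles
    const size_t bytes = n * sizeof(float);

    // Host-side data with trivial initial values.
    float* h_pos = (float*)malloc(bytes);
    float* h_vel = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_pos[i] = 0.0f; h_vel[i] = 1.0f; }

    // Device buffers plus explicit copies: the part with no CPU-programming analogue.
    float *d_pos, *d_vel;
    cudaMalloc((void**)&d_pos, bytes);
    cudaMalloc((void**)&d_vel, bytes);
    cudaMemcpy(d_pos, h_pos, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_vel, h_vel, bytes, cudaMemcpyHostToDevice);

    // Launch configuration: thousands of threads grouped into blocks.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    stepParticles<<<blocks, threadsPerBlock>>>(d_pos, d_vel, 0.016f, n);

    cudaMemcpy(h_pos, d_pos, bytes, cudaMemcpyDeviceToHost);

    cudaFree(d_pos); cudaFree(d_vel);
    free(h_pos); free(h_vel);
    return 0;
}

Whether that counts as an "insurmountable orifice" or a modest extension of C is exactly the disagreement between Gelsinger and NVIDIA.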



Comments

RE: pwnd
By FITCamaro on 7/2/2008 1:43:59 PM , Rating: 2
This has nothing to do with gaming. CUDA isn't for gaming; it's for running general-purpose code on a GPU. Larrabee will be a GPU, but it will use many x86 cores instead of a single large die that's more dedicated to graphics processing. That will let it easily run general-purpose code beyond graphics, because programmers don't have to learn a new language.

DX10 has nothing to do with this either. There will be a DX11 that adds new functions to the current DirectX API, and many more DX specifications after it.


RE: pwnd
By nosfe on 7/2/2008 1:52:08 PM , Rating: 3
it also means that it'll (probably) suck at gaming, because x86 isn't the end-all be-all of chip architectures, which is probably why graphics cards have never used it


RE: pwnd
By FITCamaro on 7/2/2008 2:23:08 PM , Rating: 3
That was not my point, which is why I didn't say it would be good at gaming. It might be. It'll certainly be better than an integrated chipset. My point WAS that Larrabee will allow general-purpose tasks on a GPU without developers having to learn a new language.


RE: pwnd
By omnicronx on 7/2/2008 2:49:55 PM , Rating: 2
Which pretty much negates the purpose, doesn't it?
I really do not see how you can use current x86 instruction sets to come even close to what even the most basic CUDA code can do. Intel will have to add instruction sets (which they already plan to do) just to become somewhat competitive with NVIDIA and AMD offerings.

Intel also makes it seem as though all programmers will one day be able to program for this GPU without a huge learning curve just because it uses the x86 instruction set. That is just not the case: there will be a learning curve regardless of the architecture, and if that means a somewhat bigger curve for something that is many times better, then so be it. We are talking about the high-end market here, which will always be limited, and the programmers employed for these jobs will most likely have extensive experience and know what they are doing.

This is where CUDA differs from, say, Sony's Cell processor: it's not meant to be a mainstream product, it's meant for those who want to squeeze every extra bit of performance they can from the available hardware, regardless of the learning curve.


RE: pwnd
By Mitch101 on 7/2/2008 4:02:09 PM , Rating: 2
The biggest mistake AMD/ATI and NVIDIA can make is saying "I do not see how Intel can use current x86 instruction sets to come even close to CUDA," etc. Arrogance will be their demise, and NVIDIA is full of it.

Cell was a wake-up call to AMD and Intel that maybe they shouldn't go the huge monolithic route. NVIDIA just recently demonstrated huge monolithic parallelism, and while the chip is a great feat, the significantly smaller ATI chip is nipping at its heels and could very well topple it when the next one comes out.

ATI/AMD got the hint with Cell and is going with smaller, more efficient cores in parallel, and most recently we're starting to see this pay off.

Intel, with its chip, is essentially doing something similar to Cell, except it's x86-based so developers can program for it immediately, and it won't suffer from a CPU's lack of parallel execution or cross-communication. If they throw in an extension of SSE in the form of a large number of 3D graphics enhancements, combined with the new chip's parallel capability and a touch of physics, they will have something good. The first generation probably won't be the killer, but it will scale to become one.

Don't forget Intel hired that ray-tracing guy, who I would bet is developing a game engine for the new chip. Intel can afford to give the engine away for free to get game companies on board and to sell chips.

Lastly, let's throw in the word WINTEL: Windows and Intel are much closer buddies thanks to the way NVIDIA tried to kick Microsoft in the jimmies with their Xbox GPU chips. DX11 might benefit Intel a lot more than NVIDIA; again, it's what they don't see coming that they should be afraid of. Don't forget that a large share of Vista's problems were caused by NVIDIA drivers.

An NVIDIA price war with ATI is nothing compared to what Intel can do to NVIDIA. Die shrink this!

I used to really pull for NVIDIA, but their bully tactics, arrogance, and blocking of DX10.1 through their TWIMTBP developer base (they still don't have it, so everyone else should suffer) mean they are going to get what they deserve in the end.

Watch the movie 300 and replace NVIDIA with the Spartans. They will put up a good fight, but it won't last against a significantly larger army. Intel might not have Spartans on the first round of its first real attempt at graphics, but eventually it will. And it still has to deal with ATI in this war, too.


RE: pwnd
By omnicronx on 7/2/2008 4:59:33 PM , Rating: 2
You seem to have missed the biggest point of my post: you can't compare the Cell architecture, which was designed for mainstream consumer use, with a GPGPU-only architecture such as CUDA. CUDA will only be used in environments where you want to squeeze out every extra bit of performance, and to tell you the truth, I really don't see programmers having a problem. And just so you know, CUDA is incredibly similar to a stripped-down version of C, so it's not going to be night and day here either.

I also think you overestimate the power of Intel. A line of GPUs will require different, new fabs, as I don't see them just taking over current CPU fabs, especially when die sizes are totally different in the GPU world. It's not like Intel can throw all of its resources at this and disregard its CPU line. Any way you look at it, Intel is going to be playing catch-up, and personally I don't think a GPU line that scales across three separate platforms (GPU, mobile GPU, GPGPU) is the answer. I would only expect AMD and NVIDIA to turn on the afterburners to distance themselves from an already distant Intel.
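To make the "stripped-down version of C" comparison above concrete, here is a minimal sketch of the same loop written twice, once as ordinary C and once as a CUDA kernel. The names (saxpy_cpu, saxpy_gpu, d_x, d_y) are illustrative, not from any SDK; the point is only that the kernel body is nearly identical C, while the loop is replaced by a per-thread index.

// Plain C: one core walks the whole array.
void saxpy_cpu(int n, float a, const float* x, float* y)
{
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// CUDA: the loop body becomes a kernel; the index comes from the thread ID.
__global__ void saxpy_gpu(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

// Example launch (assumes d_x and d_y already live in device memory):
// saxpy_gpu<<<(n + 255) / 256, 256>>>(n, a, d_x, d_y);

Most of the learning curve sits in the surrounding machinery (device memory management and launch geometry, as in the earlier sketch), not in the kernel syntax itself.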


RE: pwnd
By Mitch101 on 7/2/2008 5:23:54 PM , Rating: 2
Sure you can; it all comes down to IPC, but you have to consider what kind of instructions it will be processing.

Intel's won't require a new fab or anything special; there is no magic in making a chip. Somehow I think Intel's engineers are better at chip fabbing than NVIDIA's.

To underestimate Intel would be the kiss of death. I would say you overestimate NVIDIA.

If AMD had any kind of afterburners, they would have used them on the CPU. These miracle afterburners are just imaginary. NVIDIA has no afterburners either; otherwise they wouldn't be worried about ATI's RV770.

Intel, however, does have magic afterburners, probably 32nm ones soon, whereas NVIDIA doesn't own any fabs and must settle for 55nm.

Even if Intel doesn't take the crown, they only need to reach the mainstream level, and then they can kill NVIDIA on price.


RE: pwnd
By Mitch101 on 7/2/2008 5:29:53 PM , Rating: 2
Nvidia expects lower second-quarter revenue, gross margin

http://www.marketwatch.com/News/Story/Story.aspx?g...

Even a god king can bleed.


RE: pwnd
By encia on 7/2/2008 11:46:31 PM , Rating: 2
Refer to http://www.tgdaily.com/content/view/38145/135/
"Watch out, Larrabee: Radeon 4800 supports a 100% ray-traced pipeline".

The Transformers teaser trailers were raytraced on a GPU in real time.


RE: pwnd
By Mitch101 on 7/3/2008 10:19:53 AM , Rating: 2
That's something not many review sites even mentioned. When I read it I wondered why no one explored it further.


RE: pwnd
By omnicronx on 7/2/2008 2:28:42 PM , Rating: 2
It sure does have to do with gaming; Larrabee is not only a GPGPU, it will be Intel's next discrete graphics card. Really, this idea seems stupid to me: the entire point of having different instruction sets for GPUs is that the x86 architecture is just plain not very efficient. In 1996 this might have been a good idea, when 3D acceleration was in its infancy, but today there is no such programmer difficulty as Intel suggests; in fact, it may very well be harder to go backwards at this point to something much less efficient.

They also seem to leave out of the article that Larrabee will only be able to process instructions in order, which could further complicate things. (As PPC users would know, in-order processing is a large bottleneck.)

While I do agree on the GPGPU front that x86 programming may be much easier than NVIDIA's CUDA, Intel is trying to make it out as though this is going to revolutionize the GPU market, when in reality it's going to make things a hell of a lot more complicated.

If you guys really think this is a good idea, wait until Intel starts implementing the extended SIMD (similar to SSE) instruction set that nobody other than Intel is going to have. That's pretty pointless, as AMD and NVIDIA own the gaming market; it's going to be hard to sway programmers to use those instructions when almost all current gaming GPUs belong to AMD and NVIDIA (i.e., why code for a tiny percentage of the gaming population?). So much for making things easier...


RE: pwnd
By nafhan on 7/2/2008 2:43:16 PM , Rating: 1
Just to comment on your "in order processing is a huge bottleneck" comment.
That's only true if you have a single in-order core. With multiple in-order cores working on separate threads, it's not as much of an issue. For the most part, GPUs have been in-order, and I think Intel's Atom processor is in-order as well. In-order processors provide significant transistor and power savings over their out-of-order counterparts.


RE: pwnd
By Elementalism on 7/2/2008 2:51:28 PM , Rating: 2
AFAIK Itanium is also in-order.


RE: pwnd
By omnicronx on 7/2/2008 3:06:47 PM , Rating: 3
quote:
Just to comment on your "in order processing is a huge bottleneck" comment.
That's only true if you have a single in-order core. With multiple in-order cores working on separate threads, it's not as much of an issue
I really cannot agree with you. You brought up the Atom, so I will use it as an example. The current 1.6GHz Intel Atom can barely compete with the old 1.2GHz Celeron, and the Celeron actually beats it on most tests. Although the Atom is a single-core processor, Intel had to bring back a new implementation of Hyper-Threading just to bring performance up to a somewhat respectable level. So even with multiple threads being executed at the same time, performance was at least a third below what a three-year-old out-of-order processor can do.

I also understand the power savings of in-order processing, but really, in a GPGPU, who cares? Intel is trying to say they have the holy grail of GPUs that can do just about anything, from laptop to desktop to high-end GPGPU computing, when in reality they have come up with a unified architecture across all three with one small hiccup: it seems to be far less efficient in two of those fields (the desktop and GPGPU markets).

The way I see it, an all-in-one solution has never been as good as a standalone product that caters to a particular area or market. I do give them the nod for apparently finding a way to unify their architecture across all of its lines, but in the end, will this be better for Intel or for the consumer? My guess is the latter, but what do I know ;)
In the end, only time will tell.


RE: pwnd
By encia on 7/2/2008 11:29:28 PM , Rating: 2
Run SwiftShader 2.01 on an Intel Atom (or an Intel Core 2 Quad at 3GHz) versus an ATI Radeon 9600, and tell me which one is faster in DX9b.


RE: pwnd
By sa365 on 7/3/2008 8:40:46 AM , Rating: 2
Who now thinks AMD's purchase of ATI was a bad move?


"Can anyone tell me what MobileMe is supposed to do?... So why the f*** doesn't it do that?" -- Steve Jobs

Related Articles
GeForce 8 To Get Software PhysX Engine
February 15, 2008, 10:33 AM













botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki