Intel says CUDA will be nothing but a footnote in computer history

Intel and NVIDIA compete on many fronts. The most visible is chipset manufacturing, but the two also compete in the integrated graphics market, which Intel's integrated graphics chips lead.

NVIDIA started competing with Intel in the data processing arena with the CUDA programming language. Intel’s Pat Gelsinger, co-general manager of Intel’s Digital Enterprise Group, told Custom PC that NVIDIA’s CUDA programming model would be nothing more than an interesting footnote in the annals of computing history.

According to Gelsinger, programmers simply don’t have enough time to learn how to program for new architectures like CUDA. Gelsinger told Custom PC, “The problem that we’ve seen over and over and over again in the computing industry is that there’s a cool new idea, and it promises a 10x or 20x performance improvements, but you’ve just got to go through this little orifice called a new programming model. Those orifices have always been insurmountable as long as the general purpose computing models evolve into the future.”

According to Gelsinger, the Sony Cell architecture illustrates the point: Cell promised huge performance gains over conventional architectures, yet it still isn't widely supported by developers.

Intel's Larrabee graphics chip will be based entirely on Intel Architecture x86 cores, says Gelsinger, so that developers can program the graphics processor without having to learn a new language. Larrabee will fully support APIs such as DirectX and OpenGL.

NVIDIA's CUDA architecture is what makes it possible to run complex physics calculations on the GPU, enabling PhysX to execute on the GPU rather than the CPU.
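To make the "new programming model" Gelsinger describes concrete, here is a minimal, hypothetical CUDA sketch: a toy Euler step that nudges particle positions, not actual PhysX source. The kernel name, particle count, and launch configuration are illustrative assumptions.

    #include <cuda_runtime.h>
    #include <stdio.h>

    // Hypothetical example: advance particle positions by one Euler step.
    // One GPU thread per particle; this illustrates the CUDA programming
    // model, it is not PhysX source code.
    __global__ void integrate(float *pos, const float *vel, float dt, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n)
            pos[i] += vel[i] * dt;                      // one particle per thread
    }

    int main()
    {
        const int n = 1 << 20;                          // assumed particle count
        const size_t bytes = n * sizeof(float);

        float *pos, *vel;                               // device (GPU) memory
        cudaMalloc((void **)&pos, bytes);
        cudaMalloc((void **)&vel, bytes);
        cudaMemset(pos, 0, bytes);
        cudaMemset(vel, 0, bytes);

        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        integrate<<<blocks, threads>>>(pos, vel, 0.016f, n);  // launch on the GPU
        cudaDeviceSynchronize();

        cudaFree(pos);
        cudaFree(vel);
        printf("integrated %d particles on the GPU\n", n);
        return 0;
    }

The unfamiliar pieces (the __global__ qualifier, the <<<blocks, threads>>> launch syntax, and the explicit device-memory management) are exactly the "orifice" Gelsinger is describing, even though the body of the kernel is ordinary C.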



Comments



Mixed Metaphor
By EarthsDM on 7/2/2008 1:07:13 PM , Rating: 4
quote:
The problem that we’ve seen over and over and over again in the computing industry is that there’s a cool new idea, and it promises a 10x or 20x performance improvements, but you’ve just got to go through this little orifice called a new programming model. Those orifices have always been insurmountable as long as the general purpose computing models evolve into the future.

- Patrick P. Gelsinger

Whether or not Dr. Gelsinger is right, that is a terrible mixed metaphor.




RE: Mixed Metaphor
By deeznuts on 7/2/2008 1:10:43 PM , Rating: 3
Well since your on metaphors, I'll go ahead and note this:

quote:
Intel says CUDA will be noting but a footnote in computer history


Normally I wouldn't care, but this is the first sentence in the blog/article and as such should have been correct ...


RE: Mixed Metaphor
By nosfe on 7/2/2008 1:54:13 PM , Rating: 3
ain't noting wrong with that


RE: Mixed Metaphor
By drando on 7/2/2008 2:01:19 PM , Rating: 4
Not that I disagree with you; I just thought of this when reading your comment:

Aoccdrnig to a rscheearch at an Elingsh uinervtisy, it deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer is at the rghit pclae. The rset can be a toatl mses and you can sitll raed it wouthit porbelm. Tihs is bcuseae we do not raed ervey lteter by it slef but the wrod as a wlohe.


RE: Mixed Metaphor
By TheDoc9 on 7/2/2008 3:11:02 PM , Rating: 3
nice, pissed me off to read it tuhogh.


RE: Mixed Metaphor
By Clauzii on 7/2/2008 3:31:00 PM , Rating: 2
Niiice.


RE: Mixed Metaphor
By GroBemaus on 7/2/2008 3:49:05 PM , Rating: 2
Yeah, nice. I've heard that before somewhere...


RE: Mixed Metaphor
By blaster5k on 7/2/2008 5:10:06 PM , Rating: 5
Well, I'll be damned. I actually had little difficulty reading that. You learn something new every day...


RE: Mixed Metaphor
By Alexstarfire on 7/2/2008 5:51:23 PM , Rating: 3
Already knew that, but it just goes to show how great an average brain is. Pattern recognition is off the charts.


RE: Mixed Metaphor
By jordanclock on 7/2/08, Rating: -1
RE: Mixed Metaphor
By gaakf on 7/2/2008 7:48:59 PM , Rating: 1
THAT IS FUCIKNG AWESOME!!!


RE: Mixed Metaphor
By Clauzii on 7/3/2008 5:11:10 PM , Rating: 2
And so is DT's rating system if You F-Word it :)


RE: Mixed Metaphor
By loki7154 on 7/3/2008 12:26:59 AM , Rating: 3
This is clearly wrong. For instance, compare the following three sentences:

1) A vheclie epxledod at a plocie cehckipont near the UN haduqertares in Bagahdd on Mnoday kilinlg the bmober and an Irqai polcie offceir

2) Big ccunoil tax ineesacrs tihs yaer hvae seezueqd the inmcoes of mnay pneosenirs

3) A dootcr has aimttded the magltheuansr of a tageene ceacnr pintaet who deid aetfr a hatospil durg blendur

All three sentences were randomised according to the "rules" described in the meme. The first and last letters have stayed in the same place and all the other letters have been moved.

http://www.mrc-cbu.cam.ac.uk/~mattd/Cmabrigde/


RE: Mixed Metaphor
By gaakf on 7/3/2008 12:53:04 AM , Rating: 2
Actually I found all three sentences easy to read. Not only because the first and last letters were the same but also because of....CONTEXT.


RE: Mixed Metaphor
By SlyNine on 7/3/2008 3:56:51 AM , Rating: 2
I had no problems reading that.


RE: Mixed Metaphor
By ZmaxDP on 7/2/2008 7:16:11 PM , Rating: 5
Well, since you're the spelling police, your first sentence should read:

"Well, since you're on metaphors..."


RE: Mixed Metaphor
By Oregonian2 on 7/2/2008 1:13:18 PM , Rating: 2
And by my observation his comments are, "spot on". "Orifice" is an absolutely perfect word for what he wanted to say, in many ways.


RE: Mixed Metaphor
By Yojimbo on 7/2/2008 3:04:23 PM , Rating: 2
quote:
ain't noting wrong with that


the point of the mixed metaphor post was that orifices aren't, in general, surmounted; it sounds silly.


RE: Mixed Metaphor
By masher2 (blog) on 7/2/2008 3:55:15 PM , Rating: 3
> "the point of the mixed metaphor post was that orifices aren't in general surmounted."

Depends...have you never been to a Star Trek convention?


RE: Mixed Metaphor
By Oregonian2 on 7/2/2008 7:13:15 PM , Rating: 2
More than that. It's first identifying the new-everything as an orifice. Then it next points out that orifices are generally insurmountable ("not prevailed over") due to the constriction that it provides in achieving the end goal. Sounds like a sequential logic presentation to me.

Orifices are indeed surmounted sometimes (some pron movies perhaps) but that's a different subject....


Hmm..
By Clauzii on 7/2/2008 3:39:50 PM , Rating: 2
"The Sony Cell architecture illustrates the point according to Gelsinger. The Cell architecture promised huge performance gains compared to normal architectures, but the architecture still isn’t supported widely by developers."

So all the PS3 games out there are pure air?? Come on Mr. Gelsinger, just because You like x86 doesn't mean that new techniques and programming models will not emerge and be used. It takes time, Yes, but it's probably worth the work. Else I'd still be on C64 :-/




RE: Hmm..
By kilkennycat on 7/2/2008 4:19:12 PM , Rating: 5
quote:
Come on Mr. Gelsinger, just because You like x86...


Tied to it by his company's business-model umbilical cord. He has no choice.


RE: Hmm..
By PJMODOS on 7/2/2008 4:21:09 PM , Rating: 3
And every game programmer I've talked to hates the Cell CPU in the PS3 with a passion.


RE: Hmm..
By CBone on 7/2/2008 6:46:11 PM , Rating: 4
quote:
So all the PS3 games out there are pure air??


Obviously he wasn't talking about game developers at all. Duh. PS3 is only the most public use of Cell, not the only intended one.


RE: Hmm..
By Clauzii on 7/3/2008 5:14:30 PM , Rating: 2
It was only to show that there ARE different approaches to accomplish a task. I'm not here to bash one tech over the other. But when something excites me, like the CBE, I want to drop a word or two.


RE: Hmm..
By dwalton on 7/16/2008 4:09:00 PM , Rating: 2
I think his statement revolves around general computing, not gaming software, and even then most of the early dev and art content work takes place on x86 hardware.


Intel is wrong as usual
By hanishkvc on 7/2/2008 1:51:33 PM , Rating: 4
Hi,

First
---------
First off, for general-purpose computing, Intel was wrong when it originally abandoned x86 and went with IA-64 (Itanium) for its 64-bit architecture.

However, with regard to special-purpose/compute-intense applications, Intel is again wrong when it proposes x86 instead of efficient vector-capable SPs in these arrays of processors.

Next
--------

The complexity in compute-intense applications has more to do with how to efficiently utilize the processing power available in the array of cores (which includes processing as well as data access and their grouping). And this issue remains the same whether the processing cores are x86 or something else, like NVIDIA or ATI SPs.

And even on x86 cores, if SIMD elements are there, they have to be triggered
a) either in assembly, or
b) using compiler intrinsics, or
c) by using modified/updated languages or compilers.

And the same applies to NVIDIA/ATI SPs.
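To illustrate option (b), here is a minimal host-side C sketch using SSE compiler intrinsics; _mm_loadu_ps, _mm_add_ps and _mm_storeu_ps come from xmmintrin.h, while the array contents and sizes are arbitrary placeholders.

    // Option (b): triggering the SIMD units on an x86 core via compiler
    // intrinsics. Plain host-side C; no GPU involved.
    #include <xmmintrin.h>   // SSE intrinsics
    #include <stdio.h>

    // Add two float arrays four elements at a time using 128-bit SSE registers.
    // n is assumed to be a multiple of 4 to keep the sketch short.
    void add_sse(const float *a, const float *b, float *out, int n)
    {
        for (int i = 0; i < n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);                 // load 4 floats
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(out + i, _mm_add_ps(va, vb));      // 4 adds at once
        }
    }

    int main(void)
    {
        float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
        float out[8];
        add_sse(a, b, out, 8);
        printf("%.1f %.1f\n", out[0], out[7]);               // prints 9.0 9.0
        return 0;
    }

The four-wide add happens in one instruction, but only because the programmer reached for intrinsics instead of plain C, which is exactly the point being made here.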

Conclusion
--------------
What the Intel guy is talking is FUD that needs to be kicked in the butt.




RE: Intel is wrong as usual
By jjunos on 7/2/2008 3:16:43 PM , Rating: 2
quote:
First of for general purpose computing, intel was wrong when it originally abondoned X86 and went with Ia64 (itanium) for 64 bit architecture.


Doesn't that actually lend credence to what the Intel guy said? Intel tried the specialized programming model... and failed?

quote:
The complexity in Compute intense applications is more to do with how to efficiently utilize the processing power available in the array of cores (which includes processing as well as data access and their grouping). And thus this issue remains independent of whether the processing cores are x86 or some thing else like in NVidia or ATI SPs.


True, but if I had to put money on who would do it better (better efficiency, better tuning, etc.), wouldn't you want to go with the programming model that's been around significantly longer than CUDA and has developers who are accustomed to the model itself?

Not saying Intel is entirely right, CUDA has shown to be pretty adept at certain areas...


RE: Intel is wrong as usual
By soydeedo on 7/2/2008 3:40:46 PM , Rating: 2
quote:
quote:
First of for general purpose computing, intel was wrong when it originally abondoned X86 and went with Ia64 (itanium) for 64 bit architecture.
Doesn't that actually put credence to what the Intel guy said? Intel tried the specialized programming model...and failed?

No, that was his point: it was the wrong direction for general computing, but for specialized purposes you want to squeeze every last bit of performance out of the architecture. Granted, if there are minimal performance losses in making it a bit more like an established model, then it may be worth the trade-off.

On that note, isn't CUDA a bit like a handicapped version of C? So it's not all that much of a departure then, eh?


RE: Intel is wrong as usual
By allajunaki on 7/3/2008 12:58:38 AM , Rating: 2
Well,
What's ironic? Intel still sells Itanium 2, and they still have a development team working on it.
The EPIC architecture (Itanium 2) was designed with the same intentions as IBM's Cell, or as what NVIDIA and ATI are doing with their GPU-based approach: attempting a revolution (instead of an evolution).

And if you read about Itanium, Intel is still convinced that Itanium has a future.

And what makes it funny (for me at least) is that I code for Itanium-based servers... :)


By decapitator666 on 7/6/2008 7:01:49 AM , Rating: 2
In my eyes The itanium is to computing what duke nukem 2 is to games.. the continuous promise for the future.. Over 10 years of development constant improvements and reworks.. ;-)


RE: Intel is wrong as usual
By ET on 7/3/2008 1:49:41 AM , Rating: 2
I agree and disagree.

I agree that the paradigm shift is needed anyway (and is already happening in the multi-core world).

On the other hand, there's also the platform shift, and here Intel has an advantage. CPUs already have SIMD (in the form of SSE), and if Intel sticks to the same x86 (or x64) machine code, then it would be possible to take advantage of the power of these processors immediately, without needing to learn anything new. That would likely mean exploiting them poorly, but it would still be more than what's possible on a completely different architecture.

I think that's where Intel's advantage lies. Tools are extremely important for development, and if Larrabee allows using standard tools, then that'd be an advantage for Intel.


Intel Is Scared
By AggressorPrime on 7/2/2008 1:52:55 PM , Rating: 5
For Gaming
What else can we expect Intel to say, that a competitor is making a product that will steal the industry? Take it from the game programmers' mouths, not Intel's.
1. What has Intel ever done for games? EE? Yeah, take a $500 CPU, add $1000 to the price tag, add 200MHz, and make it easier to OC (a feature all CPUs shared in the past). Spending that extra $1000 on GPUs will give you a lot more for your money. And you also must consider that even Intel's fastest CPUs are still a bottleneck; no CPU can fully handle 3 GeForce GTX 280s at stock speeds. So instead of waiting for a powerful CPU to do physics, we had Ageia be born. And when they failed to sell in volume, NVIDIA bought them. Now every gamer with a DX10 card (NVIDIA + AMD) can have much faster physics than any CPU can process. Larrabee may be faster than the CPU, but it can't compete against the GT200 giant. The best bet Intel has is to replace the standard CPUs in gaming machines with Larrabee CPUs.

For Videos
Intel again failed in this regard. For decoding, you can simply buy a $50 GPU to process 1080p Blu-rays without lag. You would have to spend $200 on a CPU to get the same effect, either a fast dual core or a quad core. And then we have encoding. Has Intel even seen the huge performance gains the GT200 delivers when encoding HD video?

Other
Intel is saying that CUDA, and really GPGPU in general, is difficult to use, so no one will use it. Yet in addition to the things mentioned above, we have seen PDFs start to be rendered by the GPU rather than the CPU, a general application! And the creators of Photoshop said they would program Photoshop to use the GPU due to its advantages. GPUs are also used to process wide-scale supercomputer applications like Folding@Home. All of these applications would run much slower on the CPU alone.

Parallel Age
We are entering the age of parallelization. It started with Intel's HT, continued with AMD's dual core, and became mainstream when GPUs started processing general applications. Developers who want the best performance for their applications look to NVIDIA's and AMD's GPUs for maximum performance; they look to the CPU only when the GPU can't do what the CPU can. And with the GT200 the gap shrank again, with NVIDIA's introduction of double-precision floating-point operations. The only thing really missing is RAM, but you can get 4GB per GPU with the CUDA cards themselves. The best use of the CPU today is to manage the traffic of high-powered GPUs.

Roadrunner
http://en.wikipedia.org/wiki/IBM_Roadrunner
(Read "Hybrid design")
And may I remind Intel that Cell didn't drop off. Not only does it power the PS3, it made its way into the most powerful supercomputer on the planet, except of course when you count wide-scale distributed networks like Folding@Home, which use GPUs to claim that title. Of course Intel probably didn't get the memo concerning its construction, since it uses AMD CPUs, not Intel's.

Conclusion
The CPU is still important, but CUDA's power must be realized, not discouraged. Intel's Larrabee should not be seen as a replacement for CUDA, for I truly doubt it can come close to the performance we have seen from NVIDIA's GPUs, and considering its launch date of late 2009/early 2010, we still have at least one more generation of GPUs, which most likely will have 2x the performance of the last generation, giving us 2 teraFLOPS on a single chip. Intel's Larrabee should be a replacement for the CPU, like what we see with gaming consoles. It is still x86, so it should still work well with general applications, but it will also be better able to aid the GPUs in removing the CPU bottleneck.




RE: Intel Is Scared
By Clauzii on 7/2/2008 3:44:57 PM , Rating: 1
I don't see why this got down to -1???


RE: Intel Is Scared
By AggressorPrime on 7/2/2008 6:12:02 PM , Rating: 3
Lol, looks like some Intel employees view this site.


RE: Intel Is Scared
By CBone on 7/2/2008 6:56:52 PM , Rating: 5
It got rated down because it reads like Nvidia PR comments.


RE: Intel Is Scared
By DingieM on 7/3/08, Rating: -1
Intel doesn't feel threatened
By pauldovi on 7/2/2008 1:21:23 PM , Rating: 5
No! Can't be. I mean, it's not like they responded to some insignificant technology that will just be a footnote in computing... Oh wait, they did.




By wordsworm on 7/2/2008 1:40:52 PM , Rating: 2
I think this is about the same as IBM saying that no one will want a GUI on their computer.

For some interesting reading... if I wasn't so busy with other studies and work I might give it a go myself (but really, it doesn't seem to serve any practical purposes for someone like me)
http://www.gpgpu.org/sc2007/


Folding At Home
By phorensic on 7/2/2008 1:31:02 PM , Rating: 5
For us folders, we already have this huge performance boost using CUDA on GPU's to fold. If Intel can give us similar or greater performance using their method, give it to us! For now, CUDA is a miracle to me, and I don't see it disappearing all of a sudden..




RE: Folding At Home
By Clauzii on 7/2/2008 4:00:22 PM , Rating: 2
Intel has this 80-core, one-teraflop chip. ATI uses 160 cores (800 units) for the same amount of power, BUT in a product that's actually purchasable. So IF Intel wants a bite of the GPU cake using x86 code, they'd better get at least a 160-core unit out SOON!


CUDA a joke ???
By kilkennycat on 7/2/2008 4:45:04 PM , Rating: 2
quote:
According to Gelsinger, programmers simply don’t have enough time to learn how to program for new architectures like CUDA. Gelsinger told Custom PC, “The problem that we’ve seen over and over and over again in the computing industry is that there’s a cool new idea, and it promises a 10x or 20x performance improvements, but you’ve just got to go through this little orifice called a new programming model. Those orifices have always been insurmountable as long as the general purpose computing models evolve into the future.”


Er, Mr. Gelsinger, have you by any chance asked those in the oil industry, in the weather forecasting business, in technical academic research, in high-tech engineering industries and in the professional image-processing industries whether CUDA is a joke?? Those with very deep pockets and time-critical computation needs are not at all well served by traditional CPU farms. nVidia and ATi bring lots of parallel-computation power to exactly the types of applications currently being publicly touted by IBM and Toshiba as ideal for multiprocessor farms based on the Cell processor. Even in the semi-pro domain, Adobe is sure going full-bore on integrating GPGPU horsepower into their upcoming CS4 efforts. Is Intel afraid of being left out in the cold, with too little too late?? Larrabee as currently proposed looks as if it may fall into the crack of serving neither central-processing needs nor graphics/parallel-processing well. The only really bright spot for Larrabee might be in the laptop business, finally solving Intel's IGP problems while bringing decent compute-performance.




RE: CUDA a joke ???
By Mitch101 on 7/2/2008 6:06:40 PM , Rating: 2
Why is it that no one thinks Intel can make a graphics chip?

Intel never really tried to make a real 3D chip before. Their bread and butter was always corporate desktops, which didn't need 3D. They only added enough to make the OS work well. Their chips have just enough to do Vista, but they were never designed to be graphics chips. Somehow I don't believe their current chips were designed with Crysis in mind; that's why there are PCI-E slots on the mobo.

Somehow I think Intel will figure out how to draw a polygon and do it really fast.

Until you spot a PCI-E graphics card from Intel I would suggest not doing the NVIDIA and running off at the mouth.

Somehow I think the Intel bunny men engineers will have the last laugh when the time is right.

BTW, Intel's bottom line seems to be doing incredibly well without a 3D graphics card to save them from bankruptcy. A GPU doesn't run very well without a mobo and CPU, but a CPU runs quite well without a GPU. GPUs are still dependent on a CPU; until they can run without one, the GPU will need to play catch-up.

Did everyone catch the call about NVIDIA lowering their revenue forecast?


RE: CUDA a joke ???
By SavagePotato on 7/3/2008 10:55:15 AM , Rating: 1
Most likely because Intel currently makes very lousy IGP solutions, and did indeed enter the discrete graphics arena once already, which was a dismal failure.

The intel i740.
http://en.wikipedia.org/wiki/Intel740

We have one of those sitting on a shelf collecting dust somewhere in the box.


RE: CUDA a joke ???
By CBone on 7/2/2008 6:54:13 PM , Rating: 2
He's right. It is a pain to have to learn new programming models on the basis of touted huge performance gains, only to get a fraction of that out of them in the end. The industries you mentioned would rather not have to use CUDA or CTM or any other new language. Do you know how much work it would take to troubleshoot and rewrite all of their code to be compatible, or to build new software from scratch?

quote:
Er, Mr. Gelsinger, have you by any chance asked those in the oil industry, in the weather forecasting business, in technical academic research, in high-tech engineering industries and in the professional image-processing industries whether CUDA is a joke??


That sounds good, but it's a lot of work, new hardware, and research just to use them. If Larrabee can bring the speed while letting a company's current programming staff use what they already know, Intel will win if it can come anywhere close to their performance estimates.


X86 ISA vs custom ISA...
By encia on 7/2/2008 11:19:25 PM , Rating: 2
Let's see Intel Core 2 Quad @3Ghz + Swiftshader 2.0 beat AMD Radeon 9600 Pro (quad-shader pipeline) + Catalyst 8.x in DX9b...




RE: X86 ISA vs custom ISA...
By jconan on 7/3/2008 2:48:08 AM , Rating: 2
Gee... the Radeon 9600 was how many generations ago? It's like a young kid racing against an old man. That sure ain't an apples-to-apples comparison in terms of generation. Besides, the 9600 isn't as programmable as today's generation of GPUs.


RE: X86 ISA vs custom ISA...
By encia on 7/3/2008 7:10:08 AM , Rating: 2
I’m just being generous to Intel. How about Radeon HD 3200 IGP or Geforce 8200 IGP?


PhysX GPU Acceleration on Radeon HD 3850
By mkruer on 7/2/2008 2:27:46 PM , Rating: 2
Oops, I guess ATI and NVIDIA are not as incompatible as first thought.
http://www.ngohq.com/news/14219-physx-gpu-accelera...

So it's either going to be ATI + Intel or ATI + NVIDIA.




By Mitch101 on 7/2/2008 4:04:22 PM , Rating: 2
Posted that about a week ago and now I am starting to feel unless they release some code soon it could be time to call shins.


Ball is in the nVidia/ATI court
By djc208 on 7/2/2008 2:37:42 PM , Rating: 4
This statement will be made or broken by ATI/AMD and NVIDIA. If they can get some games with physics offloaded to their cards, get out the transcoding software, folding, and similar applications, then I think the market will quickly grow into this area, learning curve or not.

How many gamers wouldn't love to be able to use all that processing HP in their GPU for something other than the latest game and Windows AERO interface?

The increased demand will bring other companies, more money, and more people willing to spend extra for a better graphics card, knowing it will do more than just make games look pretty. How long before some company uses one in a stand-alone device to do video transcoding and playback or similar high-throughput work?

On the flip side if they don't help the initial pioneers of this technology show it off it could easily all crumble around them as Intel predicts.




More Gelsinger hot-air...
By kilkennycat on 7/2/2008 4:14:44 PM , Rating: 4
Gelsinger sure is doing well with his out-of-water, marketing-driven fish-flops, since all he has on Larrabee at the moment is paper. Hopefully, he won't find out in a year or two that it is the toilet variety....

Gelsinger's pronouncements, with no hardware and only vague architectural models to back them up, betray Intel's anxiety over nVidia's (and ATi's) GPGPU exercises.
No doubt nVidia is making GT280s and extensive programming support readily available to any professional apps and leading games developer that requests their assistance, and is probably working with MS on appropriate extensions to the DX API. MS is no doubt not pleased that Intel publicly declined to get on board in-house with Vista, and was shafted by Intel on the Vista-Ready initiative, so why should they hang around for 1.5 years waiting on Intel? No love lost there.... And since AMD is really struggling in the CPU business, it would only make sense for their currently more successful ATi offshoot to follow nVidia's lead in the GPGPU arena to blunt any potential impact on their CPU business from Larrabee.




By EclipsedAurora on 7/2/2008 10:14:41 PM , Rating: 4
Actually, Intel has also pushed out plenty of technology that required programmers to completely relearn their previous programming model in order to reach its performance potential. MMX/SSE is an excellent example.

With Cell, Sony is actually just following its success with the PS2. While the PS2's CPU, the Emotion Engine, had a different vector engine that required a different compiler, it promised breakthrough performance if programmed well. Of course, programmers at the time complained about the complexity of programming, but once the PS2 gained market share, it forced developers to learn!

Learning is always the path to success. Today even x86 programmers have to learn parallel threaded programming in order to squeeze performance out of dual-core/multi-core x86 CPUs!

Sticking with x86 is one of Intel's many technically wrong routes. After years of expansion from the original 8086 microprocessor, the x86 instruction set is just clutter. That's one of the reasons Intel processors can never match up with IBM POWER in the server/supercomputer market.
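As a small, hypothetical illustration of that parallel threaded programming point, here is a host-side C sketch using OpenMP; the pragma and omp_get_max_threads() are standard OpenMP, while the array size and loop body are arbitrary placeholders.

    // Even on a plain multi-core x86 CPU, spreading work across cores means
    // learning something new; here that "something" is an OpenMP pragma.
    // Builds with, e.g., gcc -fopenmp.
    #include <omp.h>
    #include <stdio.h>

    #define N (1 << 20)
    static float a[N];

    int main(void)
    {
        double sum = 0.0;

        // The loop body is ordinary C; the pragma is the part the programmer
        // has to learn so the iterations run on all available cores.
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            a[i] = i * 0.001f;
            sum += a[i];
        }

        printf("threads available: %d, sum = %f\n", omp_get_max_threads(), sum);
        return 0;
    }

The loop stays ordinary C; the #pragma line is the hurdle, the same kind of hurdle, in miniature, that CUDA asks developers to clear.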




quote from article
By blwest on 7/2/2008 6:35:05 PM , Rating: 3
quote:
According to Gelsinger, programmers simply don’t have enough time to learn how to program for new architectures like CUDA


This is totally bogus: if you can code C, you can develop for CUDA; it's all handled by their compilers.

Not to mention that one NVIDIA GPU is 20-25 times as fast as an 8-core Intel Xeon system. I recently switched all of my real-time code processing to a single $600 GPU.

Intel is just scared that people will really find out how gimped their processors really are.




lol@Intel bragging
By Stinky007 on 7/3/2008 2:09:17 AM , Rating: 2
Intel leads the integrated chips market much in the same way Microsoft "dominates" the browser market: by forcing sheet down our throat!
Whatever happens, I don't want Intel or nVidia setting standards anymore. Corporations tend to get all individualistic when they set standards by themselves! I think Intel, nVidia aaand AMD should get together and decide what's the best way to deal with all this cheap computing power that lurks in the GPU. I want to be able to write complex programs that use the GPU without worrying I have to port it for each individual card :(




By dickeywang on 7/3/2008 5:28:43 AM , Rating: 2
1) Limited on-board memory. You can see the problem simply by dividing the FLOPS numbers posted by NVIDIA by a current GPU's on-board memory size. The GPU itself can process floating-point data really fast, but high-performance computing usually also requires very large memory. A GPU's FLOPS number is hundreds of times that of a CPU, but the RAM per GPU and per CPU is comparable. So if the memory requirement of your simulation calls for 4000 CPU nodes, you will also need 4000 GPU nodes if you use CUDA.

2) The reason the GPU is more powerful than the CPU when dealing with raw floating-point data is that the GPU has so many stream processors. However, all of these stream processors share the same memory, so if you are doing simulations on one GPU, you need to write the code in a "shared-memory" way. If you want to do serious numerical work, you probably need hundreds or more GPUs running in parallel, which means you have to deal not only with the local "shared-memory" nature of each GPU but also with the parallelization of your code across the entire cluster. Looks to me like it would require an "OpenMP+MPI" type of code structure.

I think if NVIDIA wants CUDA to be popular in the high-performance computing community, these two problems need to be addressed. If (a large portion of) these two problems cannot be dealt with by the compiler itself but instead require a lot of work from the programmer, it will be tough for the high-performance computing community to accept CUDA.
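For what that "OpenMP+MPI"-style structure might look like with CUDA, here is a rough, hypothetical sketch: one MPI rank drives one GPU, a placeholder kernel stands in for the per-GPU work, and an MPI reduction combines results across the cluster. The kernel, sizes and reduction are illustrative assumptions, not a real simulation, and error checking is omitted.

    // Hypothetical two-level structure: MPI across nodes, CUDA within a node.
    // The kernel is a placeholder, not a real simulation.
    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <stdio.h>

    __global__ void local_step(double *data, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= 0.5;                 // stand-in for the per-GPU work
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank = 0, ndev = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        cudaGetDeviceCount(&ndev);
        cudaSetDevice(rank % ndev);         // one MPI rank per GPU

        const int n = 1 << 20;
        double *d;
        cudaMalloc((void **)&d, n * sizeof(double));
        cudaMemset(d, 0, n * sizeof(double));

        // "Shared-memory" work local to this GPU.
        local_step<<<(n + 255) / 256, 256>>>(d, n);
        cudaDeviceSynchronize();

        // Pull one value back to the host and combine it across the cluster.
        double local = 0.0, global = 0.0;
        cudaMemcpy(&local, d, sizeof(double), cudaMemcpyDeviceToHost);
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("reduced value across all ranks: %f\n", global);

        cudaFree(d);
        MPI_Finalize();
        return 0;
    }

Nothing here is exotic on its own, but the programmer now has to reason about two levels of parallelism at once, which is exactly the burden described above.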




By encia on 7/3/2008 7:18:05 AM , Rating: 2
One can run CUDA applications on x86 CPUs, btw; e.g., look in the C:\Program Files\NVIDIA Corporation\NVIDIA CUDA SDK\bin\win32\EmuRelease\ folder.




so...
By sprockkets on 7/2/2008 2:08:12 PM , Rating: 1
Why don't we just put a C2D on a PCI-E 16x card and on a 1x card for video and physics? That sounds like we should just stick to x86 for the rest of eternity.

Fine, if it works. But maybe we shouldn't care about going to Itanium either, because learning that and making stuff for it will also just be a multi-billion-dollar footnote in your history, Intel.

Then again, if you are just as willing as you are now for open drivers for Linux, then OK haha.




pwnd
By Aberforth on 7/2/08, Rating: -1
RE: pwnd
By mfed3 on 7/2/08, Rating: 0
RE: pwnd
By Aberforth on 7/2/2008 1:45:17 PM , Rating: 2
well, Larrabee can do over a trillion FLOPS.


RE: pwnd
By nosfe on 7/2/2008 1:49:35 PM , Rating: 3
i think that you mean gazillions of bazillions/second, right?


RE: pwnd
By Aberforth on 7/2/2008 1:51:12 PM , Rating: 2
that's right. They said so last year at IDF.


RE: pwnd
By comc49 on 7/2/2008 1:54:16 PM , Rating: 2
um, the 4850 can do a teraflop, and if Larrabee's power consumption is as high as the rumors say, it should have more FLOPS


RE: pwnd
By psychobriggsy on 7/2/2008 4:42:01 PM , Rating: 2
Larrabee is only going to be remembered for one type of FLOP, and it has nothing to do with floating point mathematics.

4850 on 55nm does a teraflop today, for $200. OpenCL will become the standard language for programming these devices.

Larrabee is not here today, is only suggested to do a teraflop, and is anchored to an old ISA that is simply not relevant when writing NEW code for parallel systems. Programmers aren't programming x86 directly these days; OSes and apps are easily portable between architectures, if the will is there.

I don't think it is a stretch to assume that by the time Larrabee is available with working drivers (another failing of Intel when it comes to graphics), 2 teraflops will be standard on AMD and nVidia cards for $200. Will Intel sell Larrabee-based cards for $100? They have the production capacity, but I don't know if they'd sell, especially if there are early driver issues.


RE: pwnd
By Elementalism on 7/2/2008 2:48:02 PM , Rating: 2
Special purposes. Which is how we arrived at a GPU in the first place. Before 1996, a true 3D accelerator in the consumer space for video games was non-existent. All 3D was done in software via the CPU, and the result was slow, slow, slow. During the switchover from software to hardware, all one had to do was pop in a Voodoo with a game that supported Glide and you got better visuals and about 10x the performance.

x86 has its limitations. It's not terribly good at parallel tasks like a GPU is. The Athlon had 9 execution units but could barely crank out 1 instruction per cycle.


RE: pwnd
By Clauzii on 7/2/2008 3:52:07 PM , Rating: 2
The Athlon was theoretically 3 instructions per cycle. In practice, 2 was achieved.


RE: pwnd
By encia on 7/2/2008 11:25:10 PM , Rating: 2
You can slowly "run" DX8 and DX9b titles on modern X86 CPU (e.g. Intel Core 2 Quad @3Ghz) via SwiftShader 2.0.


RE: pwnd
By FITCamaro on 7/2/2008 1:43:59 PM , Rating: 2
This has nothing to do with gaming. CUDA isn't for gaming; it's for running general-purpose code on a GPU. Larrabee will be a GPU, but it will use many x86 cores instead of a single large die that's a more dedicated graphics-processing chip. This will allow it to easily run general-purpose code other than graphics, because programmers don't have to learn a new language.

DX10 has nothing to do with this either. There will be a DX11, which introduces new functions to the current DX API, and many more new DX specifications after it.


RE: pwnd
By nosfe on 7/2/2008 1:52:08 PM , Rating: 3
it also means that it'll (probably) suck at gaming, because x86 isn't the end-all be-all of chip architectures; probably why graphics cards haven't ever used it


RE: pwnd
By FITCamaro on 7/2/2008 2:23:08 PM , Rating: 3
That was not my point, hence why I didn't say it would be good at gaming. It might be. It'll certainly be better than an integrated chipset. My point WAS that Larrabee will allow general purpose tasks on a GPU without developers having to learn a new language.


RE: pwnd
By omnicronx on 7/2/2008 2:49:55 PM , Rating: 2
Which pretty much negates the purpose, doesn't it?
I really do not see how you can use current x86 instruction sets to come even close to what even the most basic CUDA code can do. Intel will have to add instruction sets (which they already plan to do) just to become somewhat competitive with NVIDIA and AMD offerings.

Intel also makes it seem like all programmers out there will one day be able to program for this GPU without a huge learning curve just because it uses the x86 instruction set. Well, this is just not the case; there will be a learning curve regardless of what architecture is used, and if that means a slightly bigger curve for something that is many times better, then so be it. We are talking about the high-end market here, in which there will always be a limited audience, and chances are the programmers employed to do these jobs will have extensive experience and will know what they are doing.

This is where CUDA differs from, say, Sony's Cell processor: it's not meant to be a mainstream product; it's meant for those who want to squeeze every extra bit of performance they can from the available hardware, regardless of the learning curve.


RE: pwnd
By Mitch101 on 7/2/2008 4:02:09 PM , Rating: 2
The biggest mistake AMD/ATI and NVIDIA can make is saying "I do not see how Intel can use current x86 instruction sets to come even close to CUDA," etc. Arrogance will be their demise, and NVIDIA is full of it.

Cell was a wake-up call to AMD/Intel that maybe they shouldn't go the huge monolithic route. NVIDIA just recently demonstrated huge monolithic parallelism, and while the chip is a great feat, the significantly smaller ATI chip is nipping at its heels and could very well topple it when the next one comes out.

ATI/AMD got the hint with Cell and is going with smaller, more efficient cores in parallel, and most recently we're starting to see this pay off.

Intel with their chip is essentially doing something similar to Cell, except it's x86-based, so developers can program for it immediately, and it won't suffer from a CPU's lack of parallel execution or cross-communication. If they throw in an extension of SSE in the form of a large number of 3D graphics enhancements, combined with the parallel capability of the new CPU and a touch of physics, they will have something good. The first gen probably won't be the killer, but it will scale to be a killer.

Don't forget Intel hired that ray-tracing guy, who I would bet is developing a game engine for the new chip. Intel can afford to give the engine away for free to get game companies on board and to sell chips.

Lastly, let's throw in the word WINTEL: Windows and Intel are much closer buddies thanks to the way NVIDIA tried to kick Microsoft in the Jimmies with their Xbox GPU chips. Windows DX11 might benefit Intel a lot more than NVIDIA; again, it's what they don't see coming that they should be afraid of. Don't forget most of Vista's problems were caused by NVIDIA drivers.

An NVIDIA price war with ATI is nothing compared to what Intel can do to NVIDIA. Die-shrink this!

I used to really pull for NVIDIA, but their bully tactics, arrogance, and denial of DX10.1 (because their TWIMTBP developer base still doesn't have it, so everyone else should suffer) mean they are going to get what they deserve in the end.

Watch the movie 300 and replace NVIDIA with the Spartans. They will put up a good fight, but it won't last against a significantly larger army. Intel might not have Spartans on the first round of their first real attempt at graphics, but eventually they will. They still have to deal with ATI in the war too.


RE: pwnd
By omnicronx on 7/2/2008 4:59:33 PM , Rating: 2
You seem to have missed the biggest point of my post: you can't compare the Cell architecture, which was designed for mainstream consumer use, with a GPGPU-only architecture such as CUDA. CUDA will only be used in environments where you want to squeeze out every extra bit of performance, and to tell you the truth, I really don't see programmers having a problem. And just so you know, CUDA is incredibly similar to a stripped-down version of C, so it's not going to be night and day here either.

I also think you overestimate the power of Intel. A line of GPUs will require different, new fabs, as I don't see them just taking over current CPU fabs, especially when die sizes are totally different in the GPU world. It's not like Intel can just throw all of its resources at this and disregard their CPU line. Any way you look at it, Intel is going to be playing catch-up, and personally I don't think that a GPU line that scales across 3 separate platforms (GPU, mobile GPU, GPGPU) is the answer. I would only expect AMD and NVIDIA to turn on the afterburners to distance themselves from an already distant Intel.


RE: pwnd
By Mitch101 on 7/2/2008 5:23:54 PM , Rating: 2
Sure you can a it all comes down to IPC but you have to consider what kind of IPC's it will process.

Intel's wont require a new fab or anything special there is no magic in making a chip. Somehow I think Intel's engineers and better at chip fabbing than NVIDIA.

To underestimate Intel would be the kiss of death. I would believe you over estimate NVIDIA.

If AMD had any kind of after burners they would have used it on the CPU. These miracle afterburners are just imaginary thoughts. NVIDIA has no afterburners either otherwise they wouldn't be worried about ATI's R770 is it?

However Intel does have magic afterburners. Probably 32nm ones soon. Where NVIDIA doesnt own any and must settle for 55nm ones.

Even if Intel doesn't take the crown they only need to get to the mainstream level and they can kill NVIDIA with price.


RE: pwnd
By Mitch101 on 7/2/2008 5:29:53 PM , Rating: 2
Nvidia expects lower second-quarter revenue, gross margin

http://www.marketwatch.com/News/Story/Story.aspx?g...

Even a god king can bleed.


RE: pwnd
By encia on 7/2/2008 11:46:31 PM , Rating: 2
Refer to http://www.tgdaily.com/content/view/38145/135/
"Watch out, Larrabee: Radeon 4800 supports a 100% ray-traced pipeline".

The Transformers teaser trailers were raytraced on a GPU in real time.


RE: pwnd
By Mitch101 on 7/3/2008 10:19:53 AM , Rating: 2
That's something not many review sites even mentioned. When I read it I wondered why no one explored it further.


RE: pwnd
By omnicronx on 7/2/2008 2:28:42 PM , Rating: 2
It sure does have to do with gaming: Larrabee is not only a GPGPU, it will be Intel's next discrete graphics card. Really, this idea seems stupid to me; the entire point of having different instruction sets for GPUs is that the x86 architecture is just plain not very efficient. In 1996 this may have been a good idea, as 3D acceleration was in its infancy, but today there is no such programmer difficulty as Intel is suggesting; in fact, it may very well be harder to go backwards at this point to something much less efficient.

They also seem to leave out of the article that Larrabee will only be able to process information in order, which could further complicate things. (As PPC users would know, in-order processing is a large bottleneck.)

While I do agree on the GPGPU front that x86 programming may be much easier than NVIDIA's CUDA, Intel is trying to make it out as though this is going to revolutionize the GPU market, when in reality it's going to make it a hell of a lot more complicated.

If you guys really think this is a good idea, wait until Intel starts implementing the extended SIMD (similar to SSE) instruction set that nobody else is going to have other than Intel. That is pretty pointless, as AMD and NVIDIA own the gaming market; it's going to be pretty hard to sway programmers to use these instructions when almost all current gaming GPUs belong to AMD and NVIDIA (i.e., why code for a tiny percentage of the gaming population?). So much for making things easier...


RE: pwnd
By nafhan on 7/2/2008 2:43:16 PM , Rating: 1
Just to comment on your "in order processing is a huge bottleneck" comment.
That's only true if you have a single in-order core. With multiple in-order cores working on separate threads, it's not as much of an issue. For the most part, GPUs have been in-order, and I think Intel's Atom processor is in-order as well. In-order processors provide significant transistor and power savings over their out-of-order counterparts.


RE: pwnd
By Elementalism on 7/2/2008 2:51:28 PM , Rating: 2
afaik Itanium is also in order.


RE: pwnd
By omnicronx on 7/2/2008 3:06:47 PM , Rating: 3
quote:
Just to comment on your "in order processing is a huge bottleneck" comment.
That's only true if you have a single in order core. With multiple in order cores working on seperate threads, it's not as much of an issue
I really cannot agree with you. You brought up the Atom, so I will use it as an example. The current 1.6GHz Intel Atom can barely compete with the old 1.2GHz Celeron, which actually beats it on most tests. Although the Atom is a single-core processor, Intel had to bring back a new implementation of Hyper-Threading just to bring its performance up to a somewhat respectable level. So even with multiple threads being executed at the same time, performance was at least 1/3 below what a 3-year-old out-of-order processor can do.

I also understand the power savings of in-order processing, but really, in a GPGPU, who cares? Intel is trying to come out and say they have the holy grail of GPUs that can do just about anything from laptop to desktop to high-end GPGPU computing, when in reality they have come up with a unified architecture across all three with one small hiccup: it seems to be far less efficient in two of those fields (the desktop and GPGPU markets).

The way I see it, an all-in-one solution has never been as good as a standalone product that caters to a certain area or market. I do give them the nod for the fact that they seem to have found a way to unify their architecture across all of its lines, but in the end, will this be better for Intel or for the consumer? My guess is the latter, but what do I know ;)
In the end, only time will tell.


RE: pwnd
By encia on 7/2/2008 11:29:28 PM , Rating: 2
Run SwiftShader 2.01 on an Intel Atom (or an Intel Core 2 Quad @ 3GHz) vs. an ATI Radeon 9600, and tell me which one is faster in DX9b.


RE: pwnd
By sa365 on 7/3/2008 8:40:46 AM , Rating: 2
Who now thinks AMD's purchase of ATI was a bad move?


"If you can find a PS3 anywhere in North America that's been on shelves for more than five minutes, I'll give you 1,200 bucks for it." -- SCEA President Jack Tretton

Related Articles
GeForce 8 To Get Software PhysX Engine
February 15, 2008, 10:33 AM













botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki