


CPU and GPU all in one to deliver the best performance-per-watt-per-dollar

AMD today unveiled more details of its next-generation Fusion CPU and GPU hybrid during its analyst day conference call. Fusion was first mentioned shortly after AMD completed its acquisition of ATI Technologies a few months ago. AMD expects to debut its first Fusion processor in the late 2008 to early 2009 timeframe.

AMD claims: “Fusion-based processors will be designed to provide step-function increases in performance-per-watt-per-dollar over today’s CPU-only architectures, and provide the best customer experience in a world increasingly reliant upon 3D graphics, digital media and high performance computing.”

The GPU and CPU appear to be separate cores on a single die according to early diagrams of AMD’s Fusion architecture. CPU functionality will have access to its own cache while GPU functionality will have access to its own buffers. Joining together the CPU and GPU is a crossbar and integrated memory controller. Everything is connected via HyperTransport links. From there the Fusion processor will have direct access to system memory that appears to be shared between the CPU and GPU. It doesn’t appear the graphics functionality will have its own frame buffer.

While Fusion is a hybrid CPU and GPU architecture, AMD will continue to produce discrete graphics solutions. AMD still believes there’s a need for discrete graphics cards for high end users and physics processing.

Also mentioned during the conference call was AMD's new branding scheme for ATI products. Under the new scheme, chipsets for Intel processors and graphics cards will continue under the ATI brand name, while ATI-designed chipsets for AMD platforms will be branded as AMD products, as previously reported.




RE: Physics acceleration
By MonkeyPaw on 11/17/2006 3:07:43 PM , Rating: 2
There's more to it than just bandwidth. Where an on-die GPU departs from traditional graphics is that the IGU will be communicating directly with the CPU on each clock cycle. By accessing the L2 cache, latencies will be considerably lower than going through a NB or a PCIe tunnel+HT link. As a PPU, Fusion could also shine, since physics calculations benefit greatly from low-latency communication with the CPU. This is why AMD plans to have Torrenza for graphics and physics cards, among other things. As long as this IGU has a decent number of unified shaders (hopefully 16 by launch), I think it could do very well. I use integrated graphics right now, so this is an interesting development for me.


RE: Physics acceleration
By SexyK on 11/17/2006 4:56:08 PM , Rating: 2
But only a small amount of data is transferred from the CPU to the GPU on each clock cycle, at least compared to the massive amounts of texture data. There's a reason GPUs come with such large amounts of memory nowadays. This functionality may be useful for use as a PPU or GPGPU, but for pure pixel pumping, this thing will still have to go to system memory for texture data, which will be atrociously slow (at least as slow as current integrated graphics).


RE: Physics acceleration
By MonkeyPaw on 11/17/2006 6:33:16 PM , Rating: 4
Also keep in mind that consoles do this very thing. The XB360, for example, has the GPU and CPU side-by-side on the PCB, with the GPU actually connected to and controlling the memory. Consoles do rather well at some very intense applications with this simple setup. Granted, consoles are not PCs, but the XB360 actually does quite a bit now in its GUI. Also consider that by 2008 we should see DDR3 enter the scene, which will undoubtedly provide yet another increase in memory bandwidth/performance.

I'm not going to claim that Fusion will outperform traditional stand-alone products, but I think the performance is going to be surprisingly good. I'll go out on a limb and say that such a solution will probably be good enough for just about everyone except the moderate to heavy gamer. It's hard to say just how good it will be until we hear specs and see products, but I can see a reasonably-spec'd IGP+CPU combo performing as well as the XB360. Imagine a notebook that is cheap, energy efficient, and powerful enough to game!


RE: Physics acceleration
By saratoga on 11/18/2006 3:22:22 AM , Rating: 1
quote:
There's more to it than just bandwidth.


Eh, not really. Bandwidth is all that matters in this case, since it's what the CPU lacks most of all.

quote:
By accessing the L2 cache, the latencies will be considerably lower than by going through a NB or a PCIe tunnel+HT link.


Yeah, but that's not really useful for a GPU.

quote:
As a PPU, Fusion could also shine, as Physics calculations benefit greatly from being in low-latency communication with the CPU. This is why AMD plans to have Torrenza for graphics and physics cards, among other things.


Ok, but if you're just using the GPU as an inefficient DSP, that's something else entirely. If you just need vector throughput, why not put more SIMD/MIMD resources on the chip? After all, there's no reason current CPUs max out at 2x128-bit vector ops per clock. You could very easily add more if there were demand for it.

Something like this would be interesting, of course, but it seems odd to try to build a device that's both the CPU's GPU and its DSP/SIMD engine. I have to question how well it would function in this hybrid role.

I think the main advantage is going to be power consumption and cost, with performance being a distant third (or more likely a disadvantage).


"I f***ing cannot play Halo 2 multiplayer. I cannot do it." -- Bungie Technical Lead Chris Butcher

Related Articles













botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki