
The future of CPU/GPU computing

With the completion of AMD’s acquisition of ATI, AMD has announced it is working on new silicon that integrates the CPU and graphics processor into a single unit. The upcoming silicon, currently codenamed Fusion, is expected in late 2008 or early 2009. AMD claims Fusion will bring:

AMD intends to design Fusion processors to provide step-function increases in performance-per-watt relative to today’s CPU-only architectures, and to provide the best customer experience in a world increasingly reliant upon 3D graphics, digital media and high-performance computing. With Fusion processors, AMD will continue to promote an open platform and encourage companies throughout the ecosystem to create innovative new co-processing solutions aimed at further optimizing specific workloads. AMD-powered Fusion platforms will continue to fully support high-end discrete graphics, physics accelerators, and other PCI Express-based solutions to meet the ever-increasing needs of the most demanding enthusiast end-users.

AMD expects to integrate Fusion across all its product categories, including laptops, desktops, workstations, servers and consumer electronics. Judging by the inclusion of PCI Express support, it would appear the integrated GPU is more of a value solution, similar to Intel’s cancelled Timna processor. It is unknown whether AMD will retain the current Athlon and Opteron names when Fusion launches. The announcement isn't too surprising, as AMD and ATI previously promised unified product development, including vague mentions of hybrid CPU/GPU products. AMD also previously announced its Torrenza open architecture.

In addition to Fusion, AMD expects to ship integrated platforms with ATI chipsets in 2007. The platforms are expected to power commercial clients, notebooks, gaming and media computing. AMD expects users to benefit from greater battery life on next-generation Turion platforms and further enhancements to AMD Live! systems. DailyTech previously reported on ATI's chipset roadmap, which outlined various integrated-graphics and enthusiast products.

With the development of Fusion and upcoming integrated AMD platforms, it is unclear what will happen to NVIDIA’s chipset business, which currently relies heavily on sales of chipsets for AMD processors.


Vertex Shaders
By dunno99 on 10/25/2006 11:30:24 AM , Rating: 4
I think more credit should be given to this setup. This solution lends itself to the possibility of breaking up GPU processes instead of merging them. In addition to taking a small chunk of unified shaders, putting them on the CPU, and outputting directly from them (which was a given), AMD could also dedicate the on-chip shaders purely to vertex processing and relegate geometry and fragment shading to the graphics card. Due to the nature of the GPU, data feeding is pretty much one way, from the vertex shader to the rasterizer and geometry/fragment shaders. This means the CPU-to-GPU overhead could be drastically reduced by processing vertex instructions on the CPU/GPU hybrid, then sending the data down through the PCIe bus to the add-on graphics card, cutting driver processing time by as much as half, I would say (vertex data shouldn't take that much bandwidth, I don't think).
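A rough sketch in Python of the split described above: run the vertex stage on the CPU/GPU hybrid and ship only the transformed vertices across PCIe. All names and figures here are illustrative, not any actual AMD interface.

```python
# Hypothetical sketch: do the vertex-shader stage on the CPU/GPU hybrid,
# then ship only the transformed vertices across PCIe to the add-in card.

def transform(matrix, vertex):
    """Multiply a 4x4 row-major matrix by a 4-component vertex."""
    return [sum(matrix[row][i] * vertex[i] for i in range(4)) for row in range(4)]

# A trivial model-view-projection matrix: uniform scale by 2, w untouched.
mvp = [
    [2.0, 0.0, 0.0, 0.0],
    [0.0, 2.0, 0.0, 0.0],
    [0.0, 0.0, 2.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]

# One triangle's worth of homogeneous vertices (x, y, z, w).
vertices = [
    [0.0, 1.0, 0.0, 1.0],
    [-1.0, -1.0, 0.0, 1.0],
    [1.0, -1.0, 0.0, 1.0],
]

# This is the data the hybrid would push down the PCIe bus to the card.
clip_space = [transform(mvp, v) for v in vertices]
print(clip_space[0])   # [0.0, 2.0, 0.0, 1.0]

# The bandwidth point: 4 floats * 4 bytes = 16 bytes per vertex, so even a
# million-vertex scene is only ~16 MB of vertex traffic per frame.
bytes_per_frame = 1_000_000 * 4 * 4
print(bytes_per_frame)   # 16000000
```

As the comment suggests, the position data itself is small; the heavy pixel-rate work (rasterization, fragment shading) stays on the card with its dedicated memory.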

RE: Vertex Shaders
By wired009 on 10/25/2006 12:14:25 PM , Rating: 2
The Fusion solution seems convenient and beneficial at first glance, but you have to wonder if it's practical to implement. Imagine a batch of new processors. Say I want high CPU performance but don't need superior graphics because I don't play games. Will there be fast CPU/low-end GPU, fast CPU/mid-range GPU, and fast CPU/fast GPU variations so mainstream users and gamers have a choice? What happens at the next CPU refresh, when the fast CPU becomes the low-end or mid-range CPU? It will be hard for AMD to continue to offer a certain variation once it is no longer in high demand. This is where Fusion begins to look like a very cost-ineffective solution for AMD. It makes a lot more sense to keep CPU and GPU separate for marketing reasons and to keep manufacturing lines efficient. It is more likely that computers will move towards removable socket GPUs that attach directly to the motherboard, eliminating AGP/PCI slots, than towards Fusion.

RE: Vertex Shaders
By NullSubroutine on 10/27/2006 7:43:00 AM , Rating: 1
It actually makes sense if you consider that, if you need more graphical horsepower, you could slap an AMD-made (or NVIDIA) GPU into Torrenza's 'accelerator' socket.

RE: Vertex Shaders
By sdsdv10 on 10/25/2006 12:43:58 PM , Rating: 2
"This means that the CPU to GPU overhead could be drastically reduced by processing vertex instructions on the CPU/GPU hybrid, then sending the data down through the PCIe bus to the add-on graphics card"

Wouldn't incorporating a CPU/GPU hybrid mostly eliminate the need for an external graphics card? Otherwise, this will only be a high-end product. The added cost of a CPU/GPU hybrid processor plus an extra graphics card would really raise the price of the overall system. Maybe I misunderstood; I thought they were going after the integrated graphics market (the lower end) with a more elegant and efficient solution. But then again, I'm not really a "computer" guy.

RE: Vertex Shaders
By Tlogic on 10/28/2006 7:54:45 AM , Rating: 2
"Wouldn't incorporating a CPU/GPU hybrid mostly eliminate the need for an external graphics card?"

Yes, and they already exist as integrated chipsets. The problem is whether you can shut off the 'GPU' on the CPU if you want to add a standalone graphics solution.

Lastly, standalone graphics solutions will always be superior, simply because you cannot get dedicated RAM and a high-bandwidth bus on an integrated CPU/GPU; main memory speed and/or bandwidth would have to increase enormously to catch up to standalone solutions.
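The bandwidth gap the comment points to can be sketched with a back-of-the-envelope calculation. The figures below are rough, illustrative 2006-era numbers (dual-channel DDR2-800 system memory versus a high-end card's 256-bit dedicated memory), not exact specs for any particular product.

```python
# Back-of-the-envelope comparison: shared system memory vs. dedicated VRAM.
# Figures are rough 2006-era assumptions, chosen only to illustrate the gap.

def bandwidth_gb_s(effective_mt_s, bus_width_bits):
    """Peak theoretical bandwidth: transfers/sec * bytes per transfer."""
    return effective_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

# Dual-channel DDR2-800: 800 MT/s effective on a 128-bit combined bus.
system_ram = bandwidth_gb_s(800, 128)      # ~12.8 GB/s

# High-end discrete card: ~2000 MT/s effective GDDR on a 256-bit bus.
discrete_vram = bandwidth_gb_s(2000, 256)  # ~64.0 GB/s

print(f"system RAM: {system_ram:.1f} GB/s")
print(f"card VRAM:  {discrete_vram:.1f} GB/s")
print(f"gap:        {discrete_vram / system_ram:.0f}x")
```

Under these assumptions the integrated part has roughly a fifth of the discrete card's memory bandwidth, and it has to share that with the CPU as well.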

RE: Vertex Shaders
By MarcLeFou on 10/25/2006 1:56:33 PM , Rating: 3
Actually, what I find interesting about this concept is that you could have a basic GPU core integrated into the CPU which would be sufficient for everyday business applications, basic workstations, business laptops and barebones computers; that should cut costs for over 75% of all systems sold.

But what I find really smart about this concept is that, with the Torrenza initiative, the CPU will now be able to communicate directly through the HyperTransport link with a bunch of add-on cards. Most people so far have envisioned putting in a second or third GPU, but what I see happening is actually a breaking down of the GPU into separate components. Apart from the obvious idea of increasing VRAM through an add-on card, think about being able to customize your GPU according to your usage scenario with specialized shader cards, geometry cards, MHz boosts, etc.

This system would be the ultimate in customization and would be much more cost-efficient for customers, who would be able to get exactly what they need. And instead of changing the whole GPU when a new technology comes out, you could just change that particular add-on card, giving a much longer lifespan to your video card and hence your system. Imagine being able to upgrade to shader model 5.0 (or whatever it is then) just by changing your $50 shader card instead of your whole video card, like we have to today!

Also, assuming the technical hurdles can be overcome, AMD would be the only one with this tech for a few cycles, creating a totally new market, a bit like Nintendo is trying to do with its Wii, and taking total control of it by catching the competition off guard; it would take Intel at least a year to develop a competing product in the best case. Disruption of an established market to gain leadership in both CPU architecture and GPU add-on cards in one fell swoop. Quite a business strategy.

RE: Vertex Shaders
By tbtkorg on 10/25/2006 4:24:23 PM , Rating: 5
Interesting thread.

Integrating the GPU with the CPU is not all about graphics; it's about making the tremendous parallel processing power of the GPU available for general computation, including graphics. Admittedly, I cannot imagine all the different applications for such parallelism any more than you can. Scientific computation will use it, at least, but it goes far beyond that. The belief is that the general-purpose GPU is inherently, fundamentally such a sound concept that people like you and me will soon come up with a thousand creative ways to put it to work, given the chance.

Readers who have written assembly code or programmed microcontrollers will best understand the point I am trying to make, because at the lowest programming level, GPU programming differs radically from traditional CPU programming. The CPU is code-oriented; the GPU, data-oriented. Wherever the quantity of data is large and the parallel transformation to be applied en masse to the data is relatively simple, the general-purpose GPU can, at least in theory, greatly outperform any traditional CPU. The CPU, of course, is far more flexible, and still offers by far the best way to chain sequential calculations together. The marriage of the CPU to a general-purpose GPU is thus a profound concept, indeed.
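The "data-oriented" style described above can be illustrated with a minimal sketch: one simple kernel applied uniformly to a large array, the shape of work a general-purpose GPU excels at. Plain Python can express the mapping but not the parallelism, so treat this purely as an illustration of the programming model.

```python
# Illustrative sketch of the data-oriented style: a single simple operation
# (here a scaled add, "a*x + y") applied en masse to a large data set.

def kernel(a, x, y):
    """The per-element operation a GPU would run across thousands of
    elements simultaneously; there is no branching or chaining, just
    the same transformation repeated over the data."""
    return a * x + y

a = 2.0
xs = list(range(1_000_000))
ys = [1.0] * 1_000_000

# On a GPU, every element would be processed in parallel; in plain Python
# we can only express the mapping itself, executed sequentially.
result = [kernel(a, x, y) for x, y in zip(xs, ys)]
print(result[:3])   # [1.0, 3.0, 5.0]
```

The CPU-style alternative, where each step depends on the previous one, cannot be spread across parallel units this way, which is exactly the complementary split the comment describes.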

The general-purpose GPU is an idea whose time has come. By acquiring ATI, AMD makes a serious attempt to dominate the coming generation of computer technology, taking over Intel's accustomed role as pacesetter and standard bearer. Of course there is no reason to expect Intel to sleep through this transition. If Intel responds competently, as one assumes that it will, then we are in for some very interesting times in the coming few years, I think.

There is a third element, besides the CPU and the GPU, which will emerge soon to complement both, I think. This is the FPGA, or field-programmable gate array. Close on the heels of the CPU-GPU marriage, the integration of the FPGA will make it a triumvirate, opening further capabilities to the user at modest additional cost. AMD/ATI will not be able to ignore this development, even if their general-purpose GPU initiative succeeds, as I think it will. Interesting times are coming, indeed.

RE: Vertex Shaders
By Larso on 10/25/2006 5:01:56 PM , Rating: 2
The triumvirate system you outline is truly a very interesting concept. Seen from a hardware point of view, it is all you can dream of: the CPU, incredibly optimized for sequential execution; the GPU, incredibly optimized for parallel execution; and the FPGA, harnessing the power of custom logic to implement time-critical operations and never-thought-of-before stuff.

As much as I would want to see this happen, one should recall that hardware is only part of the game. The software aspect is just as important. I hope a solution can be found here, because it's a big challenge. Software engineers need to learn parallel processing techniques, and they need to learn co-design concepts so that they can utilize the FPGA; they will need to ally with hardware engineers or learn a hardware description language themselves.

All these splendid hardware ideas will fall short if the software guys don't know exactly what they are dealing with and how to utilise it. Completely new programming paradigms might need to be conceived.


Copyright 2016 DailyTech LLC.