
The future of CPU/GPU computing

With the completion of AMD's acquisition of ATI, AMD has announced it is working on new silicon that integrates the CPU and graphics processor into a single unit. The upcoming part is currently codenamed Fusion and is expected in late 2008 or early 2009. AMD claims Fusion will bring:

AMD intends to design Fusion processors to provide step-function increases in performance-per-watt relative to today’s CPU-only architectures, and to provide the best customer experience in a world increasingly reliant upon 3D graphics, digital media and high-performance computing. With Fusion processors, AMD will continue to promote an open platform and encourage companies throughout the ecosystem to create innovative new co-processing solutions aimed at further optimizing specific workloads. AMD-powered Fusion platforms will continue to fully support high-end discrete graphics, physics accelerators, and other PCI Express-based solutions to meet the ever-increasing needs of the most demanding enthusiast end-users.

AMD expects to bring Fusion to all of its product categories, including laptops, desktops, workstations, servers and consumer electronics products. Judging by the continued support for discrete PCI Express graphics, it would appear the integrated GPU is more of a value solution, similar to Intel's cancelled Timna processor. It is unknown whether AMD will retain the current Athlon and Opteron names with the launch of Fusion. The announcement isn't too surprising, as AMD and ATI previously promised unified product development, including vague mentions of hybrid CPU and GPU products. AMD has also previously announced its Torrenza open architecture.

In addition to Fusion, AMD expects to ship integrated platforms with ATI chipsets in 2007. The platforms are expected to power commercial client, notebook, gaming and media computing systems. AMD expects users to benefit from greater battery life on next-generation Turion platforms and from further enhancements to AMD Live! systems. DailyTech previously reported on ATI's chipset roadmap, which outlined various integrated graphics and enthusiast products.

With the development of Fusion and upcoming integrated AMD platforms, it is unclear what will happen to NVIDIA's chipset business, which currently relies heavily on chipsets for AMD processors.


Comments

good workstation solution?
By Kim Leo on 10/25/2006 5:40:57 AM , Rating: 3
We all probably knew this was coming. I look forward to it, but I hope it doesn't become all they produce, since that would only make the CPU more expensive for people who don't want to use the "onboard" solution. Still, maybe it'll be a decent graphics solution, and the awesome thing is that you no longer have to keep some crappy PCI card around in case your primary one breaks.




RE: good workstation solution?
By pinadski on 10/25/2006 6:03:56 AM , Rating: 2
Hoping this will be good for gaming graphics and physics at a lower price, and that it won't require a 1000W power supply.


RE: good workstation solution?
By Grated on 10/25/2006 6:07:18 AM , Rating: 2
These will most probably be cores that can be found in integrated chipsets first...
Don't expect miracles :)


RE: good workstation solution?
By leidegre on 10/25/2006 6:18:08 AM , Rating: 2
Even if the GPU itself isn't targeting high-end performance, ATI's physics move (three GPUs: two for graphics and one for physics) suggests this could be used to accelerate performance in other applications.

Software and game manufacturers might also need to consider opening up their applications for greater customization, since platforms are growing in complexity and we are basically seeing a new solution for everything.


RE: good workstation solution?
By Kim Leo on 10/25/2006 6:25:31 AM , Rating: 4
The thing is that it would need to use system RAM, which, as you probably already know, is much slower than the memory on high-end graphics cards today.


RE: good workstation solution?
By otispunkmeyer on 10/25/2006 7:20:57 AM , Rating: 2
It's no different to current IGPs in that respect, but with DDR2 and DDR3 on the horizon, system bandwidth will be rather decent, and combined with AMD's on-board memory controller (I presume they will keep this), the latencies will be much improved.

Of course it won't hold a candle to a discrete solution, but I bet it'll be more than enough for casual gamers and most 3D CAD packages, etc.

PS: Please fix this site!!!

It should make the system more refined: less heat, lower cost, etc.


RE: good workstation solution?
By Targon on 10/25/2006 7:31:40 AM , Rating: 2
By the time Fusion comes out, I expect that we will see DDR3 or perhaps even a quad-channel memory controller on the CPU to help with this.

There are also two separate workstation markets out there: the low end and the high end. For low-end workstations, this will be a great solution that will cut costs. For the high end, if the CPU/GPU can work in Crossfire mode, then we may see the CPU/GPU plus TWO Crossfire-enabled graphics cards for a total of three GPUs.


RE: good workstation solution?
By Nocturnal on 10/25/06, Rating: 0
RE: good workstation solution?
By Spivonious on 10/26/2006 4:50:59 PM , Rating: 4
I don't know about that, because it would take them right out of the enthusiast market. Why would I upgrade my CPU every time I wanted a new graphics card?


RE: good workstation solution?
By jp7189 on 10/25/2006 10:52:02 AM , Rating: 2
quote:
The thing is that it would need to use system RAM, which, as you probably already know, is much slower than the memory on high-end graphics cards today.


The path currently is: CPU -> chipset -> PCIe -> GPU

Having direct CPU/GPU communication will vastly improve any app that doesn't use large textures - which covers the vast majority of users.


RE: good workstation solution?
By ogreslayer on 10/25/2006 2:57:24 PM , Rating: 2
You are forgetting that unless both have access to memory through the IMC, you still have at least one of those steps. No overhead would actually be removed, as you would now be flooding the HT interconnect. On top of that, even if they go quad-channel by the time they start integrating, the bandwidth is not even close to what R600 is supposed to have. DDR3 and DDR4 are not going to be any kind of salvation, as they mostly offer efficiency gains; they are still DDR, and all we will get is a speed bump. We can pray for lower latencies.

AMD is not a big enough player to be the one who starts the move to the next flavor of RAM. Intel has to do it, and increasing memory speed has proven to have marginal effects on Core 2 products compared to Athlon 64 X2s. I wouldn't expect any real rush from Intel until we get the 1333MHz FSB and they move to a 256-bit bus. This is going to be integrated-graphics-level stuff. Unless NVIDIA can pump out a CPU, Intel and AMD/ATI are going to smother them out of the integrated market for desktops and notebooks.
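For a rough sense of the bandwidth gap being described, here is a back-of-the-envelope calculation. All figures are round-number assumptions (dual-channel DDR2-800 for shared system memory, and a hypothetical 512-bit GDDR interface of the kind rumored for R600), not confirmed specifications.

    // Back-of-the-envelope peak memory bandwidth, using assumed round numbers.
    // peak GB/s = effective transfer rate (MT/s) x bus width (bytes) / 1000
    #include <cstdio>

    double peak_gb_per_s(double mega_transfers_per_s, int bus_width_bits) {
        return mega_transfers_per_s * (bus_width_bits / 8.0) / 1000.0;
    }

    int main() {
        double system_ram = peak_gb_per_s(800.0, 128);   // dual-channel DDR2-800 -> ~12.8 GB/s
        double gddr_card  = peak_gb_per_s(1600.0, 512);  // assumed 512-bit GDDR @ 1.6 GT/s -> ~102.4 GB/s
        printf("Shared system RAM:          %.1f GB/s\n", system_ram);
        printf("Hypothetical high-end card: %.1f GB/s\n", gddr_card);
        return 0;
    }

Whatever the exact figures turn out to be, an integrated GPU that shares system memory starts out roughly an order of magnitude behind a dedicated card's local memory under these assumptions.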


RE: good workstation solution?
By Viditor on 10/29/2006 7:21:54 PM , Rating: 2
quote:
unless both have access to memory through the IMC, you still have at least one of those steps. No overhead would actually be removed, as you would now be flooding the HT interconnect

I don't think people yet understand the way Fusion will work in tandem with Torrenza...
Phil Hester explained a bit to TheReg in this interview:
http://www.reghardware.co.uk/2006/10/26/the_story_...
Notice that Fusion will be only a part of the modular design...
Very good article...


RE: good workstation solution?
By Rollomite on 10/25/2006 10:07:02 AM , Rating: 1
quote:
AMD intends to design Fusion processors to provide step-function increases in performance-per-watt relative to today’s CPU-only architectures


It would appear that they are going to try to keep power usage at the same level while incorporating this new technology.

Rollo


Vertex Shaders
By dunno99 on 10/25/2006 11:30:24 AM , Rating: 4
I think more credit should be given to this setup. This solution lends itself to the possibility of breaking up GPU processes instead of merging them. That is, in addition to taking a small chunk of unified shaders, putting them on the CPU, and being able to output directly from that (which was a given), AMD could also dedicate the on-chip shaders purely to vertex processing and relegate geometry and fragment shading to the graphics card. Due to the nature of the GPU, data feeding is pretty much one-way from the vertex shader to the rasterizer and geometry/fragment shaders. This means that the CPU-to-GPU overhead could be drastically reduced by processing vertex instructions on the CPU/GPU hybrid and then sending the data down the PCIe bus to the add-on graphics card, cutting driver processing time by perhaps as much as half, I would say (vertex data shouldn't take that much bandwidth, I don't think).
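As a rough sanity check of that last claim, here is a small estimate of per-frame vertex traffic against a PCIe x16 link. The vertex size, vertex count, frame rate and link figure are all illustrative assumptions, not measurements, and the conclusion clearly depends on the scene.

    // Estimate vertex traffic vs. a PCIe x16 link (PCIe 1.x, ~4 GB/s per direction).
    // All inputs below are illustrative assumptions.
    #include <cstdio>

    int main() {
        const double verts_per_frame = 1.0e6;  // assume 1 million vertices per frame
        const double bytes_per_vert  = 32.0;   // assume position + normal + texcoords
        const double frames_per_sec  = 60.0;
        const double pcie_x16_gbs    = 4.0;    // assumed PCIe 1.x x16, per direction

        double traffic_gbs = verts_per_frame * bytes_per_vert * frames_per_sec / 1e9;
        printf("Vertex traffic: %.2f GB/s (%.0f%% of a %.1f GB/s PCIe x16 link)\n",
               traffic_gbs, 100.0 * traffic_gbs / pcie_x16_gbs, pcie_x16_gbs);
        return 0;
    }

Under these assumptions the vertex stream fits within the link (about 2 GB/s of roughly 4 GB/s), though heavier scenes or larger vertex formats would eat into that margin quickly.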




RE: Vertex Shaders
By wired009 on 10/25/2006 12:14:25 PM , Rating: 2
The Fusion solution seems convenient and beneficial at first glance, but you have to wonder if it's practical to implement. Imagine a batch of new processors. Say I want high CPU performance but don't need superior graphics because I don't play games. Will there be fast CPU / low-end GPU, fast CPU / mid-range GPU, and fast CPU / fast GPU variations so mainstream users and gamers have a choice? What happens during the next CPU refresh, when the fast CPU is now the low-end or mid-range CPU? It will be hard for AMD to continue to offer a certain variation if it is no longer in high demand. This is where Fusion begins to look like a very cost-ineffective solution for AMD. It makes a lot more sense to keep the CPU and GPU separate for marketing reasons and to keep manufacturing lines efficient. Computers are more likely to move towards removable socketed GPUs that attach directly to the motherboard, with the elimination of AGP/PCI slots, than towards Fusion.


RE: Vertex Shaders
By NullSubroutine on 10/27/2006 7:43:00 AM , Rating: 1
It actually makes sense if you consider that if you need more graphical horsepower, you could slap an AMD-made (or NVIDIA) GPU into Torrenza's 'accelerator' socket.


RE: Vertex Shaders
By sdsdv10 on 10/25/2006 12:43:58 PM , Rating: 2
quote:
This means that the CPU-to-GPU overhead could be drastically reduced by processing vertex instructions on the CPU/GPU hybrid and then sending the data down the PCIe bus to the add-on graphics card,


Wouldn't incorporating a CPU/GPU hybrid mostly eliminate the need for an external graphics card? Otherwise, this will only be a high-end product. The added cost of a CPU/GPU hybrid processor plus an extra graphics card would really raise the price of an overall system. Maybe I misunderstood; I thought they were going after the integrated graphics market (lower end) with a more elegant and efficient solution. But then again, I'm not really a "computer" guy.


RE: Vertex Shaders
By Tlogic on 10/28/2006 7:54:45 AM , Rating: 2
"Wouldn't incorporating a CPU/GPU hybrid mostly eliminate the need for an external graphics card?"

Yes, and they already exist as integrated chipsets. The problem is: can you shut off the 'GPU' on the CPU if you want to add a standalone graphics solution?

Lastly, standalone graphics solutions will always be superior, simply because you cannot get dedicated RAM and high-bandwidth buses on an integrated CPU/GPU; main memory speed and/or bandwidth would have to increase by extreme amounts to catch up to standalone solutions.


RE: Vertex Shaders
By MarcLeFou on 10/25/2006 1:56:33 PM , Rating: 3
Actually, what I find interesting about this concept is that you can have a basic GPU core integrated into the CPU which would be sufficient for everyday business applications, basic workstations, business laptops and barebones computers; that should cut costs for over 75% of all systems sold.

But what I find really smart about this concept is that, with the Torrenza initiative, the CPU will now be able to communicate directly through the HyperTransport link with a bunch of add-on cards. Most people so far have envisioned putting in a second or third GPU, but what I see happening is actually a breaking down of the GPU's components into separate parts. Apart from the obvious idea of increasing VRAM through an add-on card, think about being able to customize your GPU according to your usage scenario with specialized shader cards, geometry cards, MHz boosts, etc.

This system would be the ultimate in customization and would be much more price-efficient for customers, who would be able to get exactly what they need. And instead of changing a whole GPU when a new tech comes out, you could just change that particular add-on card, giving a much longer lifespan to your video card and hence your system. Imagine being able to upgrade to shader model 5.0 (or whatever it is then) just by changing your $50 shader card instead of your whole video card, like we have to today!

Also, assuming the technical hurdles can be overcome, AMD would be the only one with this tech for a few cycles, creating a totally new market a bit like Nintendo is trying to do with its Wii, and taking total control of it by catching the competition off guard, because it would take Intel at least a year to develop a competing product in the best-case scenario. Disruption of an established market to gain the leadership in both CPU architecture and GPU add-on cards in one fell swoop. Quite a business strategy.


RE: Vertex Shaders
By tbtkorg on 10/25/2006 4:24:23 PM , Rating: 5
Interesting thread.

Integrating the GPU with the CPU is not all about graphics; it's about making the tremendous parallel processing power of the GPU available for general computation, including graphics. Admittedly, I cannot imagine all the different applications for such parallelism any more than you can. Scientific computation will use it, at least, but it goes far beyond that. The belief is that the general-purpose GPU is inherently, fundamentally such a sound concept that people like you and me will soon come up with a thousand creative ways to put it to work, given the chance.

Readers who have written assembly code or programmed microcontrollers will best understand the point I am trying to make, because at the lowest programming level, GPU programming differs radically from traditional CPU programming. The CPU is code-oriented; the GPU, data-oriented. Wherever the quantity of data is large and the parallel transformation to be applied en masse to the data is relatively simple, the general-purpose GPU can, at least in theory, greatly outperform any traditional CPU. The CPU, of course, is far more flexible, and still offers by far the best way to chain sequential calculations together. The marriage of the CPU to a general-purpose GPU is thus a profound concept, indeed.

The general-purpose GPU is an idea whose time has come. By acquiring ATI, AMD makes a serious attempt to dominate the coming generation of computer technology, taking over Intel's accustomed role as pacesetter and standard bearer. Of course there is no reason to expect Intel to sleep through this transition. If Intel responds competently, as one assumes that it will, then we are in for some very interesting times in the coming few years, I think.

There is a third element, besides the CPU and the GPU, which will emerge soon to complement both, I think. This is the FPGA or field-programmable gate array. Close on the heels of the CPU-GPU marriage, the integration of the FPGA will make it a triumvirate, opening further capabilities to the user at modest additional cost.
AMD/ATI will not be able to ignore this development, even if their general-purpose GPU initiative succeeds, as I think it will. Interesting times are coming, indeed.
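To make the code-oriented versus data-oriented contrast above concrete, here is a minimal sketch in a CUDA-style model: the same element-wise operation written once as a sequential CPU loop and once as a per-element GPU kernel. This is only an illustration of the general-purpose GPU programming style being discussed, not anything AMD has announced for Fusion.

    #include <cstdio>
    #include <cuda_runtime.h>

    // "Code-oriented" CPU style: a single thread walks the whole array.
    void scale_cpu(float* data, float factor, int n) {
        for (int i = 0; i < n; ++i) data[i] *= factor;
    }

    // "Data-oriented" GPU style: a tiny program applied to every element at once;
    // each hardware thread handles exactly one index.
    __global__ void scale_gpu(float* data, float factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main() {
        const int n = 1 << 20;
        float* d = nullptr;
        cudaMallocManaged(&d, n * sizeof(float));        // unified memory, for brevity
        for (int i = 0; i < n; ++i) d[i] = 1.0f;

        scale_gpu<<<(n + 255) / 256, 256>>>(d, 2.0f, n); // 4096 blocks of 256 threads
        cudaDeviceSynchronize();
        printf("d[0] = %.1f (expected 2.0)\n", d[0]);

        scale_cpu(d, 0.5f, n);                           // same operation, sequentially
        printf("d[0] = %.1f (expected 1.0)\n", d[0]);

        cudaFree(d);
        return 0;
    }

Each GPU thread handles exactly one array element; wherever the data set is large and the per-element transformation simple, thousands of such threads can run concurrently, which is exactly the kind of workload the general-purpose GPU is expected to accelerate.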


RE: Vertex Shaders
By Larso on 10/25/2006 5:01:56 PM , Rating: 2
The triumvirate system you outline is truly a very interesting concept. Seen from a hardware point of view, it is all you could dream of: the CPU, incredibly optimized for sequential execution; the GPU, incredibly optimized for parallel execution; and the FPGA, harnessing the power of custom logic to implement time-critical operations and never-thought-of-before stuff.

As much as I would want to see this happen, one should recall that hardware is only part of the game. The software aspect is just as important. I hope a solution can be found here, because it's a big challenge. Software engineers need to learn parallel processing techniques, and they need to learn co-design concepts so that they can utilize the FPGA - and they will need to ally with hardware engineers or learn a hardware description language themselves.

All these splendid hardware ideas will fall short if the software guys don't know exactly what they are dealing with and how to utilize it. Completely new programming paradigms might need to be conceived.


Linux Drivers
By banana989 on 10/25/2006 10:13:52 AM , Rating: 3
I've always liked AMD and I am sure that they will produce a great product. It may not be suitable for gamers, but who really plays hardcore games on a laptop anyways? What I am really hoping for is that they will push ATI to support Linux as well as Nvidia does.




RE: Linux Drivers
By peternelson on 10/25/2006 12:56:41 PM , Rating: 2
Yeah, I have both ATI and NVIDIA graphics cards.

However, for two GPUs that are comparable under Windows, under Linux the NVIDIA one runs about twice as fast, because ATI's Linux drivers are lame.

I'm glad AMD likes Linux, and I agree that AMD should apply some pressure to ATI to get their act together on optimizing drivers.

Linux is increasingly important and gaining market share (even before Vista launches), so until ATI's drivers improve, my next GPUs will be NVIDIA.


RE: Linux Drivers
By bersl2 on 10/25/2006 3:40:44 PM , Rating: 3
CPUs have had open instruction set architectures forever. (How else was one to use the device?!) Will AMD be lame like almost everybody else and not let us have the information necessary to make open drivers for the graphics portion, so that we have to reverse-engineer this architecture as well?

If the gamers want their drivers optimized with secret and/or patented algorithms out the wazoo, that's just great for them. But some of us don't want closed drivers, if at all possible. Please, can we have some documentation? And not under NDA; that doesn't count.


Cold Fusion!
By GoatMonkey on 10/25/2006 8:11:31 AM , Rating: 5
All you need is a peltier!




RE: Cold Fusion!
By Goty on 10/25/2006 10:33:49 AM , Rating: 2
Ok, that made me chuckle a little.


Nvidia / Intel
By shamgar03 on 10/25/2006 9:49:04 AM , Rating: 2
The only thing that worries me about this is that it could push Intel and NVIDIA closer together. I would rather NVIDIA stay independent so that we get more graphics competition. Hopefully AMD can keep up with Intel until this actually comes out, because they are a bit behind right now.




RE: Nvidia / Intel
By Lazarus Dark on 10/25/2006 11:24:36 AM , Rating: 3
Well, they all copy each other and then try to one-up the competition, so expect a similar announcement from Intel by the next IDF.

I welcome it, especially if it can be multipurpose. I don't game that much and mostly need a GPU for high-def H.264 hardware acceleration, which honestly is a waste of most of a video card.

Don't forget Windows' new display driver model. Even if it just runs Windows, an integrated GPU makes sense. We could get rid of the current northbridge/southbridge mess, making for smaller, less complex, cheaper mobos, especially as PCI and IDE become unnecessary over the next few years. It will also increase upgradability: want more GPU? Add a PCIe card. Just need a little more? Maybe some system RAM would help the integrated GPU. Maybe SLI/Crossfire will work across integrated and PCIe, or the integrated GPU could be used for physics.

With the move to 45nm and smaller, this is the perfect time to start working on integrated GPU cores.


Possibility of a smarter implementation?
By RyanHirst on 10/25/2006 4:08:05 PM , Rating: 3
I don't see an on-chip "integrated" graphics solution as the end goal of this venture. AMD doesn't have enough to gain. Why would they deliberately drive up the price of their processors (and production costs)?

I think these points:
1) Instruction sets for graphics processors are standardized.
2) Graphics processors are inherently massively parallel architectures.
3) The new generation of games is multithreaded, and future games will only be more elaborately and efficiently multithreaded. With four-core chips around the corner, you can't afford to release a game engine that uses only 1/4 of the processing power. Not when Alan Wake can dedicate a whole core just to the physics of a tornado.
And
4) Windows Vista removes the hardware separation layer.

Lead logically to:
The inclusion of on-chip 3D instruction sets which will not only be sufficient for the "integrated" market, but which will also run in parallel with, and complement, discrete graphics. If you have 3D instruction sets available on the CPU, there is no reason inherently parallel loads like shaders cannot be sent to multiple destinations, not just WITHIN a given piece of hardware but between pieces of hardware. Plus, by owning ATI, AMD could gain leverage with game companies to provide a 3D load in which a large number of operations that require a great deal of computation but very little RAM (say, an amount that could fit into a small on-chip cache) can be routed independently of other 3D data (if this is even necessary under DX10/Vista).




By RyanHirst on 10/25/2006 4:09:57 PM , Rating: 2
Hahaha, dunno99, you already said this!
Sorry, the whole list of posts didn't appear the first time I looked through this thread.


Cyrix Redux
By nah on 10/25/2006 9:12:30 AM , Rating: 2
Reminds me of the Cyrix MediaGX, the chip whose success destroyed Cyrix when the company was bought and mismanaged into oblivion by National Semiconductor.




RE: Cyrix Redux
By stmok on 10/25/2006 12:08:48 PM , Rating: 2
Isn't the VIA C3/C7 series of CPUs partly based on the old Cyrix chip?


!
By Scabies on 10/25/2006 9:54:48 AM , Rating: 2
Heat poisoning. I'm going to coin that phrase now. Wouldn't there be issues with two entirely different, complex, high-performance processors sharing the same 4in^2 piece of silicon? I mean, overclocking one will overheat the other. That problem probably points to one of the solutions suggested above: either (a) this will be a budget and mass-market solution, or (b) we may see an AM-G socket for the onboard GPU.

That aside, could they utilize a PCIe x16 slot for a weird VRAM card? Would you get better bandwidth between the PCIe slot and the CPU than between the CPU and the motherboard RAM?
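A rough answer to that bandwidth question, using round peak figures of the era (assumptions, not measurements):

    // Compare the raw bandwidth of a PCIe x16 link against dual-channel DDR2-800
    // reached through the CPU's on-die memory controller. Assumed round figures.
    #include <cstdio>

    int main() {
        const double pcie_x16_gbs  = 4.0;   // assumed PCIe 1.x x16, per direction
        const double ddr2_dual_gbs = 12.8;  // assumed dual-channel DDR2-800
        printf("PCIe x16 link:         %.1f GB/s per direction\n", pcie_x16_gbs);
        printf("Dual-channel DDR2-800: %.1f GB/s\n", ddr2_dual_gbs);
        return 0;
    }

So, under these assumptions, a VRAM card hanging off PCIe would actually have less raw bandwidth to the CPU than ordinary system memory does, although dedicated memory could still help by not being shared with the rest of the system.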




RE: !
By ADDAvenger on 10/26/2006 1:47:25 AM , Rating: 2
The same could be said of multi-core systems. They've figured out how to put several powerful CPUs in a package, so why couldn't they do the same with several CPUs and a GPU or two? And don't say GPUs are hotter; NetBurst was an inferno, but now we have very cool processors all around. The same will happen for GPUs, though if I had to guess, I'd say it won't happen until ATI is on their 3xxx series and NVIDIA is on their 9xxx series.


Nice project
By pinkpanther6800 on 10/25/2006 11:31:34 AM , Rating: 2
There is no doubt where AMD/ATI is going in the future.

We will see chips with the CPU and GPU combined into one. A lot of different chips with different GPUs and CPUs will be the buyer's options: how big do you want the GPU and the CPU on the chip to be? When you upgrade, you get a whole new chip including both the CPU and GPU.

About the lack of power: well, don't even compare it to onboard graphics... that's laughable.

This is two powerhouses combined in close connection. It's certain that you can't compare crappy Intel onboard graphics with this. This is the next step in computing progress.

If you have a quad-core or 6-8 core machine in 2008 or 2009, that means you also have the same number of cores in graphics. It will be very, very strong.




RE: Nice project
By ZmaxDP on 10/26/2006 2:15:17 PM , Rating: 2
I don't think you've nailed it on the head any more than I have. In most scenarios, if you take the two extremes and find a middle ground, you're a lot closer to the truth. I think we will continue to see discrete solutions for graphics far into the future, and likely some completely integrated solutions very soon. However, while there is some redundancy between the two discrete solutions, there isn't enough that simply combining the two will lead to either much lower power usage or even a feasible chip design.

Instead, I think that AMD is really looking to re-define what a CPU and GPU should do. I think a lot of the processing-heavy tasks of the GPU will migrate to the CPU and most of the (currently) graphics only functions will stay put on a GPU that is much reduced in size, complexity, and power consumption.

There are just too many downsides for the consumer in making all GPU and CPU functions exist on one chip in the mid- to high-range graphics markets.

No, the money is in making a very fast, wide processing core using the parallel processing capabilities currently on the GPU, plus a small and efficient integrated graphics core that can act as the GPU for 98% of the market and as a pre-processing core for the other 2%.

My two cents...


Integrated...
By fxyefx on 10/26/2006 2:37:08 AM , Rating: 2
I wonder how long it will be until there are entire computer systems... storage, memory, chipset... integrated into one piece of silicon. Or are there some of those already?




RE: Integrated...
By Ralph The Magician on 10/26/2006 1:17:36 PM , Rating: 2
The problem is that that doesn't really make sense with the way technology changes so fast. You end up with a very, very long development lifecycle.

We've kind of seen this with AMD already, with the memory controller built in. While it has some advantages, it also has some big disadvantages. They have to rework the processor every time they want to make a change.

When you have a modular design, you can step things up in increments and customize the architecture to work on different platforms for different uses. When you want to upgrade a chipset to, say, support a new standard, you don't have to start over.

Imagine if the CPU, GPU, memory controller and chipset were all on a single piece of silicon. You'd be lucky to see a refresh once a year that really had any impact. By the time the refresh made it to market, it would already be outdone, at least in part, by Intel and their more modular design.

Look at how they are doing things with Core. They can update the CPU, then a few months later upgrade the northbridge to allow for a faster FSB, and then a month down the road introduce the new Intel GMA 3000 IGP. You don't have to wait until all three are done, then start working them together, then test them, and then eventually get them to market and hope that the speed combinations you've created are actually the ones people want and need.


All in one systems are the target
By whymeintrouble on 10/26/2006 6:22:57 PM , Rating: 2
This is meant to take over the HTPC/all-in-one market. It's an all-in-one package, inexpensive yet very useful. It also means it can be presented in a microATX package, or any other SFF they choose, with a very limited amount of noise. Basically, they are trying to best the Intel Viiv program... we'll see what comes of it.




By Randalllind on 10/27/2006 10:06:34 AM , Rating: 2
Maybe it will not use a lot of system memory. Having the GPU there would require less memory, because the road the data has to travel would be short compared to a normal video card.

Quad core: does that mean we will have 4 GPUs? LOL


R.I.P. Mark Rein
By Pirks on 10/25/2006 12:21:18 PM , Rating: 3
I heard he shot himself after reading this ;)




AMD is in confusion
By Eris23007 on 10/25/2006 1:24:35 PM , Rating: 1
Doesn't anyone remember the recent hoopla over Torrenza? Torrenza and Fusion are essentially diametrically opposed strategies. I could see using both strategies for different ends of the market, but this release makes it seem as if they're planning to expand Fusion across their entire product line.

I think AMD hasn't figured out exactly what direction they want to go in yet, and this is simply evidence of the difficulties that result any time two large companies with distinct cultures merge. It can take years to recover, if ever (HP, anyone?)...




RE: AMD is in confusion
By ZmaxDP on 10/26/2006 2:05:20 PM , Rating: 2
I don't think so at all. Admittedly, Torrenza is basically a communication technology between discrete pieces of hardware, and Fusion is a move to combine previously discrete hardware. At that simple level, sure, they're diametrically opposed. That doesn't necessarily mean that AMD is confused, or that the results of the two initiatives are at odds.

A comparable analogy is that parallel processing and ramping the frequency of a processor are diametrically opposed strategies. However, that doesn't mean that ramping the frequency on a highly parallel processor gets you nothing.

Quite often, combining two very different strategies is a highly effective move. It allows you flexibility in the market, and often the two can complement each other and get you more than either strategy would alone.

In this instance, let's assume that AMD is attempting to move all the functions of a GPU into the CPU. (I don't think this is the case except on the low-end IGP side, but let's assume anyway.) What do you gain with said "Fusion"? Faster communication between the GPU and CPU, both ways, and increased processor performance on some tasks (Folding, anyone? Or physics, for that matter). What do you lose? Upgrade flexibility, and perhaps cost effectiveness (though not having discrete memory pools might decrease some costs across the board).

Let's assume AMD chooses Torrenza instead for GPUs. You get similar though marginally smaller gains in communication. You lose some of the performance advantages of co-processing, though not all. And you keep the flexibility and cost effectiveness of the previous generation (which isn't all that great when you think about it).

What about combining the two "diametrically opposed" strategies? Well, you could take some of the GPU's functions, namely the shader units and such, that can massively improve the performance of a CPU in some areas. This would allow you to replace some of the processing functions of the CPU as well, resulting in lower transistor counts than two discrete solutions, at least. You could also NOT integrate the other parts of the GPU that are really graphics-specific, can't benefit other apps very much at this point, and are only needed for very graphics-intense applications. These could still be sold as Torrenza add-on cards with much lower power consumption and cost because of the reduced complexity of what is onboard. So the Fusion CPU is then scalable for graphics: you have all the processing benefits of the current GPU on the CPU, and the ability to purchase the graphics add-ons separately to meet your needs. Likely, the Fusion CPU would even have basic IGP functionality, so you wouldn't need an add-on card unless you wanted one. Sure, it would decrease upgrade flexibility a bit (in terms of the processing side of the graphics architecture). But the ATI side of AMD could focus on making improvements in the add-on cards on a 3-to-6-month cycle, and processing-level improvements on a yearly cycle. Not that bad of a scenario, methinks.

In other words, AMD is going for a best-of-both-worlds strategy, or a have-your-cake-and-eat-it-too strategy. Not too bad of an idea. I think AMD knows exactly where they are going with this one; I just don't think WE do.


By kokal on 10/25/2006 3:15:38 PM , Rating: 2
I am very eager to see what they make of this Fusion. At the moment I don't like the ever-growing need for more power from the PSU. I ain't rich, and I don't like the idea of a $500+ electricity bill per month if I had a 1000+ W PSU. At the moment I have two PCs at home, both using 350W PSUs. I am a casual gamer, I watch movies, and the second PC is for my sister. I would be very pleased if they make something out of this and work on efficiency rather than just building faster GPUs/CPUs.

What I hope to see is low-end, mid-range and high-end integrated graphics that keep the flexibility. For instance, if they put a socket on the CPU itself for plugging in an additional add-on video part, you could have different combos: buy a mid-range processor with low-end graphics and have the possibility of upgrading the video by unplugging the old part, plugging in a better one, and selling the old one, thus minimizing cost. It could also be possible (in my head) for the CPU to work without the additional GPU in the socket, letting you use PCIe for a better graphics solution, single or SLI/Crossfire or even 3 GPUs and stuff like that. Well, these are just my thoughts, but I am looking forward to the future of integrated solutions.





By Ralph The Magician on 10/26/2006 1:20:37 PM , Rating: 1
IMO, Apple has the right idea in terms of power usage. The 24" iMac has a maximum power draw of 220W.


Workable Fusion
By crystal clear on 10/25/2006 7:44:44 AM , Rating: 2
For such a Fusion, I wonder what solutions will be available for cooling, noise reduction, and the PSU.




Something to look forward to...
By clayclws on 10/25/2006 8:11:56 AM , Rating: 2
I am not too sure about what they are proposing. Hopefully it turns the GPU into something that slots into the motherboard like a CPU. That way, you can always change just the GPU chip and add or change its RAM whenever you want to upgrade.

Anyways...something to look forward to...




Upgradability?
By Aikouka on 10/25/2006 8:57:23 AM , Rating: 2
Is that even a word? Anyway, as more things congeal onto one discrete unit, you face the problem of upgradability. This doesn't affect everyone, but mainly those die-hard people who would trade in last year's model of a car for the current one because it has 5 HP more than the prior model. I personally don't want to pay $1600-$2000 for a top-of-the-line processor (a figure made by adding the typical $1000 price tag of a top FX/EE processor and an extra $1000 for a dual-GPU solution, if available). Then if I want to upgrade in another year, I'm forced to pay that money again. Although this doesn't affect NVIDIA, one reason people like features like SLI is the whole "hey, in a year I can buy another one." Unless this solution is more like Torrenza than complete integration, we're going to lose quite a few options when it comes to upgrading, unless you practically want to buy a whole new computer =\




GPU performance
By ajfink on 10/25/2006 10:38:26 AM , Rating: 2
Keep in mind that GPUs have a lot of processing performance to throw around on their own (thus the new F@H client for X1900-series GPUs), and if an integrated GPU core is used purely for its processing power, it could mean a substantial performance boost in certain programs other than games. Let's face it, high-end graphics won't be integrated with a CPU anytime "soon," so the best use of an integrated GPU core on a high-end system would be boosting performance elsewhere: physics, math, etc.




Profit vs Performance
By othercents on 10/25/2006 11:38:51 AM , Rating: 2
This is actually a very good move for AMD, since right now Intel is the GPU king. They sell more GPUs than all the other manufacturers combined. Granted, those GPUs are not worth the silicon they are printed on, but if AMD can break into this market with a low-cost and low-power solution, then they will have an opportunity to increase their market share.

The majority of laptops sold today are IGP solutions; I think only 2% at most are non-IGP. AMD has a very large untapped market that it has an opportunity to grab, just like it did with the server market. The nice thing is that ATI knows how to make video cards, and hopefully these new Fusion processors will be able to run DX10 without any major issues, compared to the Intel solution.

I doubt this will be the only processor AMD creates, since they are definitely an enthusiast company. It might be interesting to see if graphics cards would get faster by adding some processing power to the normal GPU.

Other




The Fusion power
By Janooo on 10/25/2006 12:36:00 PM , Rating: 2
If Fusion is strong enough and proves itself, then Fusion 2 or 3 could be inside the next Xbox or the PS4. You never know.




CPU / GPU
By KingofL337 on 10/25/2006 2:45:11 PM , Rating: 2
I don't think you guys are looking at this the way it's going to unfold. I don't think the GPU in the CPU will be rendering all the graphics (i.e., loading a display driver for your CPU). I think it's just going to be another core that will be used for number crunching.

Let's use the X300 integrated chipset/GPU as an example. Right now it's pretty much the sux. Not as bad as Intel's GMA, but still pretty bad. Now, with the two cores, the X300 could run in a Crossfire-ish setup where some of the processing is done by the Fusion CPU, but the frame buffer is still controlled by the X300.

I'm not sure how many remember the reason we went to dedicated graphics cards: it was basically that the CPU wasn't designed to properly process the specialized tasks needed for high-speed rendering. With ATI's GPU on board, the x86 side will probably take a back seat and the raw horsepower of the GPU will shine. Look at the Folding@home project: they said the GPU is 30 times more efficient at processing their data than a traditional CPU.

It's going to be interesting, no doubt.




VERY RISKY ADVENTURE
By crystal clear on 10/26/2006 12:11:17 PM , Rating: 2
"Developing an entirely new programmable GPU core in-house is a risky endeavour, Hester admits. It's an expensive process and if you get it wrong, you weaken your entire processor proposition when CPU and GPU are as closely tied as the Fusion architecture mandates."

http://www.reghardware.co.uk/2006/10/26/the_story_...

It's not so easily said and done. AMD should be asking itself: what happens if things don't go according to plan or expectations?




Logical move...watch out nvidia
By lagrander on 10/28/2006 12:08:15 PM , Rating: 2
With the advent of 'secure' HD DVD datapaths and encrypted video content, combining the CPU and GPU on the same chip is a logical move. This will improve DRM performance and facilitate its implementation.




Less Price/Workstation?
By Dfere on 10/25/06, Rating: -1
RE: Less Price/Workstation?
By killerroach on 10/25/2006 9:35:28 AM , Rating: 3
Are you so sure about that? We don't know what the purpose of these potential SoC (system-on-chip) solutions is; at least, moving everything onto one die is the obvious path this seems to be heading toward. Typically, SoC designs have not catered to the bleeding edge, and I'm sure this one probably won't either. However, they do have some interesting applications for thin clients, embedded systems (i.e. cell phones and PDAs), workstations, and budget to mid-range consumer PCs.


RE: Less Price/Workstation?
By Dfere on 10/25/2006 1:51:54 PM , Rating: 2
I am sure enough to bet on the following.

1) When Fusion comes out, it will not be a "low end"-only solution comparable to current-day IGPs.
2) The product will satisfy all the primary functions of today's CPU and GPU. Secondary functions such as physics processing and HyperTransport-type bus support/links may well be merged, but I am talking about current basic operating functions.
3) Compared to Intel offerings, there will be no price/performance benefit over the product life of the computer it is installed in.

This is a business prediction, not a technology one (though the two are hugely interrelated in this industry). AMD is expecting increased market share from this and is already planning on exploiting it. You would not make an announcement touting the "synergy" of these two companies by going after the low-end market. Granted, there is a lot of volume and reliable profit there, but ATI already has the lion's share of it.

This is about increased profit margins due to market share.

What is the wager?


By Fanon on 10/25/2006 11:17:55 AM , Rating: 2
That's an idea. What's stopping NVIDIA from entering the CPU race? I'm not suggesting that they should, as that's yet another socket, architecture, etc. to keep track of - but they're involved with everything else... why not the CPU?


By mindless1 on 11/1/2006 3:12:48 PM , Rating: 2
Remember that lower-end systems sell in the highest volumes, and most people don't upgrade the CPU or video card in an OEM box. They are becoming insulated from issues of CPU socket or architecture and still look at the same things as always: number of features, marketing buzzwords, frequency, and cost. In other words, more integration inevitably means the most common tasks are supported by a cheaper-to-build system.


By dwalton on 10/25/2006 11:44:34 AM , Rating: 4
What's the transistor budget for a high-end GPU from ATI? 384 million transistors is a lot to add to a CPU, and we are talking current-gen, not R600, which is rumored to be 500 million+. AMD will probably reduce the transistor count with better custom logic, but adding that much real estate has to kill yields and add cost.

Plus, neither NVIDIA nor ATI sells enough high-end GPUs to persuade AMD to commit a production line to a CPU/GPU that will at most sell a few hundred thousand chips, if that. Furthermore, AMD would have to retool their production line every 6 months to deal with refreshes.

A CPU/GPU with IGP-level performance makes sense due to the fact that there is a market of millions for such chips versus a market of thousands.
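To illustrate the yield point, here is the simple Poisson die-yield model, yield = e^(-defect density x die area), with made-up round numbers. The defect density and die areas are assumptions for illustration only, not AMD or ATI data.

    // Toy Poisson die-yield model: yield falls exponentially with die area.
    // All inputs below are illustrative assumptions.
    #include <cstdio>
    #include <cmath>

    double poisson_yield(double defects_per_cm2, double die_area_cm2) {
        return std::exp(-defects_per_cm2 * die_area_cm2);
    }

    int main() {
        const double d0         = 0.5;  // assumed defects per cm^2
        const double cpu_area   = 1.5;  // assumed CPU-only die, cm^2
        const double fused_area = 3.0;  // assumed CPU plus large GPU, cm^2
        printf("CPU-only die yield: %.0f%%\n", 100.0 * poisson_yield(d0, cpu_area));
        printf("CPU+GPU die yield:  %.0f%%\n", 100.0 * poisson_yield(d0, fused_area));
        return 0;
    }

Under these assumptions, doubling the die area cuts the yield by more than half (roughly 47% down to 22%), which is the cost pressure the poster describes for bolting a large GPU onto a CPU die.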


By dwalton on 10/27/2006 1:19:30 PM , Rating: 2
You're talking high margins and I'm talking high volume. Can you imagine AMD selling an FX-80 for $1,600 and still maintaining the volume sales of either a high-end standalone GPU or a CPU?

$1,600 is pretty forgiving once you take into account that margins on GPUs aren't that great at the start of production, and then you have to deal with the yield of a CPU that has multiple cores.


By Ulfhednar on 10/25/2006 12:46:32 PM , Rating: 3
quote:
Expect Nvidia to try and buy an X86 CPU license or they will be left out in the cold.
They already did.


By Russell on 10/25/2006 12:55:56 PM , Rating: 2
What? When? Link?


By sdsdv10 on 10/25/2006 3:21:29 PM , Rating: 2
This is currently just at the rumor stage, but here is one link to Engadget referencing an Inquirer article. There are others.

http://www.engadget.com/2006/10/19/nvidia-has-x86-...

Once again, Google is your friend (just type in "nvidia x86")!


By Viditor on 10/27/2006 8:53:19 PM , Rating: 2
quote:
They already did

They are developing, but they haven't bought the x86 license yet...


By ceefka on 10/27/2006 3:18:33 PM , Rating: 2
Perhaps NVIDIA is interested in partnering or collaborating with a company like Xilinx. They can make GPUs themselves, and FPGAs are Xilinx's field. They would still be a very interesting partner for AMD, or even for Intel if they have their own version of Torrenza.


By fumar on 10/31/2006 12:44:58 AM , Rating: 2
This would work well for the integrated graphics market, also known as the normal user. But AMD would still need to make regular CPUs to accommodate the high-end home user, the enthusiast, high-end professional users, and server usage. You don't need 4 CPUs/GPUs in a server.

I haven't seen AMD's roadmap in a while, but is it safe to assume Fusion will come out when 45nm AMD parts are released?


"Well, there may be a reason why they call them 'Mac' trucks! Windows machines will not be trucks." -- Microsoft CEO Steve Ballmer

Related Articles













botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki