


CPU and GPU all in one to deliver the best performance-per-watt-per-dollar

AMD today unveiled more details of its next-generation Fusion CPU and GPU hybrid during its analyst day conference call. Fusion was first mentioned shortly after AMD completed its acquisition of ATI Technologies a few months ago. AMD is expected to debut its first Fusion processor in the late 2008 to early 2009 timeframe.

AMD claims: “Fusion-based processors will be designed to provide step-function increases in performance-per-watt-per-dollar over today’s CPU-only architectures, and provide the best customer experience in a world increasingly reliant upon 3D graphics, digital media and high performance computing.”

According to early diagrams of AMD’s Fusion architecture, the GPU and CPU appear to be separate cores on a single die. The CPU will have access to its own cache, while the GPU will have access to its own buffers. Joining the CPU and GPU together are a crossbar and an integrated memory controller, with everything connected via HyperTransport links. From there, the Fusion processor will have direct access to system memory, which appears to be shared between the CPU and GPU. It doesn’t appear that the graphics core will have its own frame buffer.

While Fusion is a hybrid CPU and GPU architecture, AMD will continue to produce discrete graphics solutions. AMD still believes there’s a need for discrete graphics cards for high end users and physics processing.

Also mentioned during the conference call was AMD’s new branding scheme for ATI products. Under the new scheme, chipsets for Intel processors and graphics cards will continue under the ATI brand name. ATI-designed chipsets for AMD platforms will be branded under AMD, as previously reported.



Comments



By Connoisseur on 11/17/2006 10:34:15 AM , Rating: 2
Because they're essentially combining the cpu and gpu, it'll no longer be possible to buy one and upgrade the other later. Granted, from what the article says, there will still be discrete high end solutions but essentially, it looks like they're turning the PC into a console.




By dagamer34 on 11/17/2006 11:11:19 AM , Rating: 2
"Fusion" is meant for laptops, not desktops. Notice in the article that they said that they would still be making discreet graphics cards. The primary advantage of a "Fusion" CPU would be lower power consumption and increased performance, both of which don't carry over as well when talking about a desktop instead of a desktop.


By othercents on 11/17/2006 11:44:13 AM , Rating: 5
Actually "fusion" could be both laptop and desktop, but not for high end systems. About 80% of the desktop market is low end system with integrated graphics. AMD is trying to build those systems for cheaper and have a lower power utilization. These systems would be good for Home Entertainment boxes that could be left on all the time to run as DVRs and such.

Other


By Targon on 11/17/2006 2:20:36 PM , Rating: 3
Even for high end systems, with Crossfire we could see a low end graphics processor (like the one built into Fusion) boost the graphics performance of a system.

In theory, Crossfire should allow a Radeon X1950 and X300 to work together. Obviously the X300 won't add all that much compared to the X1950, but it should help a little. Picture a system with two high end video cards with some extra graphics processing provided by the on-die GPU.


By Pirks on 11/17/2006 3:06:39 PM , Rating: 3
i second that - as long as the fusion on-die gpu is generic enough, has no fixed pipeline, is DX10 only and can be employed for vector/parallel calculation code like G80 today - this is some unduckingbelievable speedup for the cpus. physics simulation on integrated gpu shaders? it's just the beginning. now we have two/four generic cpu cores - but then with 45nm we'll have some superduper version of cell with not just a puny 7 SPEs - that's gonna be some serious stuff - folding at home and rc5 will suddenly have nothing to do after a couple of years - coz the speedup in vector fp code will be whole _orders_ of magnitude.

right, so this will be a specialized-cores scheme, not a bunch of generic cores like now, but this is actually good - see how nicely stuff is done today when we have specialized gpus and generic cpus working together? this cell idea is ripe and ready. sony is already outdone on graphics by G80, but with fusion sony's cell will be total nothing - 7 puny SPEs versus an ATI integrated DX10 massively parallel shader engine? ha ha ha ha [demonic laughter here]


By crazydrummer4562 on 11/17/2006 10:08:11 PM , Rating: 2
It would actually subtract a lot of performance, because both cards need to have the same pixel pipeline count; the X1950 would be reduced to the same performance as the X300... rendering that completely pointless and a monumental waste of money.


By Trisped on 11/21/2006 3:42:10 PM , Rating: 2
No, it would depend on the cards' Crossfire compatibility and the rendering mode employed by the software in question.

Remember, the first Crossfire cards were not clocked the same as their companion cards, so ATI launched with a way of dividing the work up between the two cards based on how much could be done by each.

Still, I think the differences between the X1300, X1600, and X1900 cards are enough to make them incompatible in Crossfire. As a result, if they are sticking a 300-line card on the CPU, then you will probably have to get a matching 300-line card if you are going to run Crossfire. I doubt anyone would do that though, as these are meant to replace integrated graphics. I am sure that for the extra cost added by a Fusion chip plus the cost of a 300-series video card you could buy an add-in card that was more powerful, and more useful.


By OrSin on 11/17/2006 11:45:25 AM , Rating: 2
Actually it's meant for both. It will be used more in business-class desktops than enthusiast-class motherboards. But my guess is AMD will only make one chipset for both business and games. Then you can decide if you want a 4-core CPU or a 2-core CPU/GPU. Really, by then physics might be big and even gamers might choose the 2-core CPU/GPU over a 4-core system. AMD will already have DDR3 in systems by 2008, so maybe the system memory will be fast enough for the GPU (not gaming speed, but physics or desktop 3D).


By Pirks on 11/17/2006 3:10:15 PM , Rating: 2
no, they won't make the graphics core a generic x86 one, that would kill the idea. forget about switching between 2/4 cores - but it doesn't matter anyway - just run specialized physics or whatever code on the integrated gpu like it's done on G80 and get your 10x/100x speedup on that. with a discrete graphics card you can do this - run AI on half of the gpu shaders and physics on the other half while the cpu sits there doing nothing :)


By Kuroyama on 11/17/2006 11:57:19 AM , Rating: 4
quote:
it'll no longer be possible to buy one and upgrade the other later.


Why do you say that? I can buy a motherboard with integrated graphics and still upgrade with a discrete graphics card later. I would imagine that this'll replace integrated graphics chips, but you'll probably still be able to add in a card later.


By MDme on 11/17/2006 4:44:08 PM , Rating: 2
in the event they change graphics slots in the future (e.g. AGP -> PCIe) it will also provide you with the ability to, let's say...

upgrade your system by changing the mobo only (and keep the CPU) while you wait for a new video card for the new graphics socket. This is a situation a lot of people with high-end AGP graphics are stuck in because of the aggregate cost of a CPU/video card/mobo/memory upgrade.


Physics acceleration
By Myrandex on 11/17/2006 11:06:23 AM , Rating: 3
It would be sweet if that could serve as a physics accelerator in a system with a discrete GPU.




RE: Physics acceleration
By casket on 11/17/06, Rating: 0
RE: Physics acceleration
By SexyK on 11/17/2006 1:34:09 PM , Rating: 2
That's the thing, everyone assumes that for some reason this will be bringing "high speed cache" to GPUs, but in reality anything on-die or on-package is going to be a HUGE step back from discrete in this regard.

For example, the bandwidth of the L2 cache in a Core 2 Extreme is ~25GB/s absolutely maxed out, and it's only 4MB. On the other hand, the GDDR3 on an 8800GTX offers 768MB of graphics memory with almost 87GB/s of bandwidth. It's not even close.

I will believe in on-die GPUs when I see them even coming close to discrete graphics. For the value segment, this could work out though.
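[A quick sanity check of the bandwidth figures quoted above; a rough sketch, not vendor data. Peak bandwidth is just bus width times effective transfer rate. The 8800 GTX numbers (a 384-bit GDDR3 bus at an effective 1800 MT/s) are its published specs, and the dual-channel DDR2-800 line is the kind of system memory a Fusion graphics core would presumably have to share with the CPU.]

    # Peak memory bandwidth = (bus width in bytes) * (effective transfer rate)
    def peak_bandwidth_gbs(bus_width_bits, data_rate_mts):
        """Peak bandwidth in GB/s for a given bus width (bits) and data rate (MT/s)."""
        return bus_width_bits / 8 * data_rate_mts / 1000.0

    print(peak_bandwidth_gbs(384, 1800))  # GeForce 8800 GTX GDDR3: ~86.4 GB/s ("almost 87GB/s")
    print(peak_bandwidth_gbs(128, 800))   # dual-channel DDR2-800 system memory: ~12.8 GB/s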


RE: Physics acceleration
By MonkeyPaw on 11/17/2006 3:07:43 PM , Rating: 2
There's more to it than just bandwidth. Where an on-die GPU departs from traditional graphics is that the IGU will be communicating directly with the CPU on each clock cycle. By accessing the L2 cache, the latencies will be considerably lower than by going through a northbridge or a PCIe tunnel plus HT link. As a PPU, Fusion could also shine, as physics calculations benefit greatly from low-latency communication with the CPU. This is why AMD plans to have Torrenza for graphics and physics cards, among other things. As long as this IGU has a decent number of unified shaders (hopefully 16 by launch), I think it could do very well. I use integrated graphics right now, so this is an interesting development for me.
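[To put a number on the latency argument above: for small, frequent CPU-GPU exchanges of the kind physics code generates, the fixed round-trip latency dominates the transfer time, so a low-latency on-die path can win even without more raw bandwidth. The latencies and the 4 KB packet size below are assumptions picked for illustration, not measured values.]

    # Effective throughput of one small transfer = size / (latency + size / peak bandwidth)
    def effective_throughput_gbs(size_bytes, latency_us, peak_bw_gbs):
        transfer_us = size_bytes / (peak_bw_gbs * 1e3)  # 1 GB/s moves ~1000 bytes per microsecond
        return size_bytes / ((latency_us + transfer_us) * 1e3)

    # 4 KB packet: assumed 0.1 us on-die hop vs. an assumed 1 us northbridge/PCIe round trip
    print(effective_throughput_gbs(4096, 0.1, 12.8))  # ~9.7 GB/s effective over shared DDR2
    print(effective_throughput_gbs(4096, 1.0, 4.0))   # ~2.0 GB/s effective over PCIe x16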


RE: Physics acceleration
By SexyK on 11/17/2006 4:56:08 PM , Rating: 2
But only a small amount of data is transferred from the CPU to the GPU on each clock cycle, at least compared to the massive amounts of texture data. There's a reason GPUs come with such large amounts of memory nowadays. This functionality may be useful for use as a PPU or GPGPU, but for pure pixel pumping, this thing will still have to go to system memory for texture data, which will be atrociously slow (at least as slow as current integrated graphics).


RE: Physics acceleration
By MonkeyPaw on 11/17/2006 6:33:16 PM , Rating: 4
Also keep in mind that consoles do this very thing. The XB360, for example, has the GPU and CPU side-by-side on the PCB, with the GPU actually connected to and controlling the memory. Consoles do rather well at some very intense applications with this simple setup. Granted, consoles are not PCs, but the XB360 actually does quite a bit now in its GUI. Also consider that by 2008 we should see DDR3 enter the scene, which will undoubtedly provide yet another increase in memory bandwidth/performance.

I'm not going to claim that Fusion will outperform traditional stand-alone products, but I think the performance is going to be surprisingly good. I'll go out on a limb and say that such a solution will probably be good enough for just about everyone except the moderate to heavy gamer. It's hard to say just how good it will be until we hear specs and see products, but I can see a reasonably-spec'd IGP+CPU combo performing as well as the XB360. Imagine a notebook that is cheap, energy efficient, and powerful enough to game!
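[For rough scale on the XB360 comparison and the DDR3 remark, the same bus-width-times-transfer-rate arithmetic as earlier in the thread. The XB360 figure is its published 128-bit GDDR3 bus at 700 MHz (1400 MT/s effective); dual-channel DDR3-1333 is only an assumed, plausible 2008-era desktop configuration, not anything AMD has announced for Fusion.]

    def peak_bandwidth_gbs(bus_width_bits, data_rate_mts):
        return bus_width_bits / 8 * data_rate_mts / 1000.0

    print(peak_bandwidth_gbs(128, 1400))  # Xbox 360 GDDR3 (700 MHz DDR): ~22.4 GB/s
    print(peak_bandwidth_gbs(128, 1333))  # dual-channel DDR3-1333: ~21.3 GB/s
    print(peak_bandwidth_gbs(128, 800))   # dual-channel DDR2-800 today: ~12.8 GB/s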


RE: Physics acceleration
By saratoga on 11/18/2006 3:22:22 AM , Rating: 1
quote:
There's more to it than just bandwidth.


Eh, not really. Bandwidth is all that matters in this case, since it's what the CPU lacks most of all.

quote:
By accessing the L2 cache, the latencies will be considerably lower than by going through a NB or a PCIe tunnel+HT link.


Yeah, but that's not really useful for a GPU.

quote:
As a PPU, Fusion could also shine, as Physics calculations benefit greatly from being in low-latency communication with the CPU. This is why AMD plans to have Torrenza for graphics and physics cards, among other things.


Ok, but if you're just using the GPU as an inefficient DSP, that's something else entirely. If you just need vector throughput, why not put more SIMD/MIMD resources on the chip? After all, there's no reason current CPUs max out at 2x 128-bit vector ops per clock. You could very easily add more if there was demand for it.

Something like this would be interesting of course, but it seems odd to try and build a device that's both the CPU's GPU and its DSP/SIMD engine. I have to question how well it would function in this hybrid role.

I think the main advantage is going to be power consumption and cost, with performance being a distant third (or more likely a disadvantage).


Highend on fusion?
By NullSubroutine on 11/17/2006 10:31:27 AM , Rating: 2
While there is limited memory bandwidth on mainboards compared to GPU PCBs, why has there never been a socket on a mainboard that could take GDDRx memory? Even if it couldn't be used as system memory, wouldn't it be helpful for Fusion-type capabilities? Could you theoretically place a GDDRx memory module of some sort in the Torrenza slot?




RE: Highend on fusion?
By SexyK on 11/17/2006 11:26:19 AM , Rating: 3
I believe there are technical barriers to using any kind of DIMM for high-speed GDDR at this point. The signaling isn't clean enough to sustain the speeds GDDR is running at yet, which is why modern graphics cards don't have upgradeable RAM and, vice versa, why mainboards don't have 70+ GB/s of memory bandwidth.


RE: Highend on fusion?
By Spoelie on 11/17/2006 11:48:20 AM , Rating: 3
To support the speeds of GDDR, the chips need to be in very close proximity to the controller and the wiring needs to be very clean. There's no way of getting those speeds using DIMMs or any other means; they have to be soldered right onto the PCB. Besides, the volumes of main system memory are still a lot higher than GDDR.


RE: Highend on fusion?
By Khato on 11/17/2006 1:44:52 PM , Rating: 3
Theoretically you could. It'd be the same idea as FBD, but using HyperTransport as the interconnect instead. It would still be limited by HyperTransport bandwidth, either 20 or 40 GB/sec depending on how well it's done and how evenly graphics memory traffic is split between writes and reads. But since normal system memory bandwidth is probably only going to be around 12.8 GB/sec at the time, it would help.

Oh, and there are two reasons why graphics cards have so much higher bandwidth than main memory. One (and the reason why having it be a separate card makes sense) is that the width of the data path can be -far- larger, since the graphics card PCB can be easily designed around that, and it's a smaller PCB (hence making it more layers isn't -quite- as expensive). Second, graphics memory is based on DDR3 while main memory is currently DDR2, which is the primary reason for the frequency differential. (Yes, shorter trace length makes a difference, but it's minimal.)
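[Rough numbers behind the 20/40 GB/sec figure above, assuming a 32-bit HyperTransport 3.0 link at its 2.6 GHz spec maximum (an assumption; nothing has been announced for Fusion specifically): per-direction bandwidth is link width times two transfers per clock times the link clock, and the aggregate only doubles that if reads and writes split evenly.]

    # Per-direction HyperTransport bandwidth = (link width in bytes) * 2 transfers/clock * clock
    def ht_bandwidth_gbs(width_bits, clock_ghz):
        return width_bits / 8 * 2 * clock_ghz

    per_direction = ht_bandwidth_gbs(32, 2.6)  # ~20.8 GB/s one way
    aggregate = 2 * per_direction              # ~41.6 GB/s if reads and writes balance out
    print(per_direction, aggregate)
    print(2 * ht_bandwidth_gbs(16, 1.0))       # today's 16-bit, 1 GHz Athlon 64 link: ~8 GB/s aggregate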


Math performance
By ajfink on 11/17/2006 11:35:57 AM , Rating: 4
People seem to forget that GPUs are capable of putting out incredible parallel processing power (e.g., the ability to use X19xx Radeons for folding). I would assume that with a discrete card in place, the integrated GPU core would become essentially a dedicated parallel processing core. It would eliminate the need for other FPUs and boost system performance considerably in some applications. This will reach every corner of the market, not just mobile systems, and it will have advantages for all of them.




for vista
By shamgar03 on 11/17/2006 10:29:43 AM , Rating: 1
I hope that these processors will allow Vista to run with nice desktop effects without adverse effects on game performance. Even though it's not technically rendering graphics while you're in a full screen game, there is probably still some 3D rendering overhead associated with the extra effects. This could mean no more need for chipset-based video solutions. First the memory controller, now this. Eventually we won't even need motherboards...




RE: for vista
By Lazarus Dark on 11/17/2006 11:27:30 AM , Rating: 2
Exactly! C'mon people, this is for Vista! This is great for non-gamers: Vista can run with full spiffy graphics for cheaper. It will also help make HTPCs and other small form factor and portable solutions like UMPCs and laptops smaller, cheaper, cooler, and less power-hungry.

But gamers: why do you need to turn it off? Just buy a second monitor for your discrete graphics and use the Fusion core for the desktop on the other monitor. Everyone's different I suppose, but one monitor never seems like enough to me. (Personally I prefer 3 or 4 monitors, maybe moving to 5 or 6 in a couple years, but I'm probably an extreme case.)


Cool
By Mazzer on 11/17/2006 10:20:44 AM , Rating: 2
Sounds really cool. I am no major tech geek, so I don't know the technical side of stuff like this. I was thinking, though: since Sony went the whole Cell processor route trying to implement what I guess is a similar solution, maybe Microsoft can snatch up this tech for the Xbox 3.




i dont get it
By thejez on 11/17/06, Rating: 0
RE: i dont get it
By Myrandex on 11/17/2006 11:05:28 AM , Rating: 2
It is only integrated video on the CPU die; discrete is still supported and recommended. This basically moves an integrated GPU from the chipset to the CPU. It should lead to greater performance for integrated video, but again, discrete video is still supported.


Good but ..
By i2mfan on 11/17/2006 11:01:57 AM , Rating: 2
It's good for the casual gamer/HTPC crowd. But they must make it possible to deactivate the graphics part so you can upgrade the graphics or CPU. This will keep cost down and make it more flexible for upgraders if they want to. A very basic version with no upgrade path would lower the cost even more (one less slot).




It would be cool if you could...
By jaybuffet on 11/17/2006 11:31:59 AM , Rating: 2
use the system normally, but if you want to upgrade you could throw in a discrete card and use the integrated GPU as the physics processor




Advantages for Laptops
By casket on 11/17/2006 1:32:51 PM , Rating: 2
* 1 chip instead of 2.
* Smaller design. Less labor.
* The combined CPU/GPU might be a 45 nm part; since the GPU would then be smaller, it requires less power.
* One memory controller instead of 2.
* Low-power-mode technology shared with the GPU.
* You could choose the GPU memory controller and stick in GDDR4. This might help the CPU. It also lowers the power.
* The GPU can share cache.




AMD getting clever every day.
By DallasTexas on 11/17/2006 6:28:14 PM , Rating: 1
I give the new AMD marketeers credit on two counts

"..ATI designed chipsets designed for AMD platforms will be branded under AMD as previously reported..."

Is that so they can hope to continue to sell graphics on Intel-based systems? I doubt Intel will help AMD get graphics business just because they will not use the AMD logo. Nice try, though.

"...AMD shows Fusion details.."

So they are showing details on a product more than two years away. Is it because their current CPU products are uncompetitive with Core 2 and it's best to deflect media attention to graphics two years away? Methinks so.







fusion=trash
By slickr on 11/17/06, Rating: -1
RE: fusion=trash
By Woodchuck2000 on 11/17/2006 1:13:34 PM , Rating: 2
Firstly, it will require minimal changes to motherboards. What changes exactly are you referring to? All you'll need is a TMDS chip on board, which could feasibly be wired into a 1x PCIe path from the northbridge to carry the signal.

Secondly, it means that for a given price and level of performance, the power requirements will be lower. Or for a given price and power requirement the performance will improve. Or for a given power requirement and level of performance, the price will drop. Pick one of the three.

This is not aimed at hardcore users. It allows low-end users to buy cheap or low-power PCs. It allows midrange users to buy cheap PCs and add discrete graphics later as necessary, possibly using the onboard graphics core for physics. High-end users can still spend $1000 every year for a new CPU and Graphics card with no problems.


RE: fusion=trash
By UNCjigga on 11/17/2006 1:32:14 PM , Rating: 2
Way to make a whole bunch of ASSumptions without qualifying any of them!

Anyways, I expect Fusion will do nicely for midrange graphics and stream computing (i.e. DC, physics etc.)--offering more speed and memory bandwidth than integrated graphics and a lower TDP than high-end. Hopefully by 2009 AMD will offer a Fusion platform with DDR3 or QDR memory (or whatever the next big memory architecture is.)

As far as upgrades go, I expect most people who buy a Fusion-powered (hehe) PC will upgrade their video card before the CPU (since video product lifecycles are shorter) and take advantage of Crossfire. When it's time to upgrade the CPU, they should have a choice between picking a Fusion CPU or a CPU-only core.


RE: fusion=trash
By ZmaxDP on 11/17/2006 1:14:41 PM , Rating: 1
"The fusion project will not work at all."
- All knowing are we?

"First of all the motherboards will have to change drasticly."
- Why? They have this thing called "integrated graphics" where the graphics core is on the MB and shares system memory. All that is different is you're placing the graphics core in the CPU rather than in its own socket.

"Second that doesent mean faster performance or even less power requirments! Why? - beacose people can buy Intel CPU which lets say consumes same power as AMD fusion CPU but may buy graphic card from Nvidia that requires less power than the fusion GPU!"
- So you're attempting to claim that a graphics core on a CPU that shares system memory and a lot of its processing functions with a typical CPU core (or several; by the time this comes out both AMD and Intel will be using a lot of quad-cores) is going to somehow have HIGHER power consumption than a graphics card that has no shared functionality with the CPU and its own memory to power? If you look at the power consumption of an integrated graphics chip you'll find they all consume a lot less power than any same-generation graphics card.

"Thirdly that means upgrading will be too expensive as you practically need to pay for better Fusion cpu/gpu, meaning paying for 2 products, while the conventional way will let you upgrade what you need, either the GPU or CPU not forcing you to pay both!"
- Once again, this is not targeted at gamers or the high end graphics market. This is targeted towards the other 95% of computer users out there, where AMD has the least exposure and market share. The Fusion CPU is slated to provide all the functionality needed for things like web browsing, Aero Glass, and probably video decode. If you want to play Quake 4 on your 30 inch LCD then you can still get a discrete solution, and the Fusion core can function as something else: pre-processor for discrete graphics, physics calculations, or other parallel processing tasks like folding.

"Last but not least AMD will either have to offer 20 different Fusion products or make them really cost effective which will mean little money for AMD. the first method will be more suitable for AMD but it doesn't guarantee them anything. For example i may want to buy the best graphic card and mid-range processor, while with Fusion you won't have that choice you can only get a CPU that comes with pre-determined GPU"
- Please don't make me say this again. AMD is not trying to replace discrete graphics solutions with this product. They are trying to create a better and more efficient alternative to integrated graphics and MAYBE the low-end discrete graphics segment. They won't be selling a new FX processor with an FX-level graphics core, for all of the reasons you mentioned (except power consumption; you're just plain wrong there).

The people at AMD aren't any dumber than you are, and they're probably a little better at this whole business than you are, since they've been doing it a while and doing pretty well considering the competition. If you think to yourself, "Wow, that's dumb because...", they've probably already realized that and adjusted their plans accordingly a long, long time ago. Seriously, if you look at all the different threads about Fusion you'll notice (shockingly) that everyone is making the same negative comments about the idea, without really knowing what the idea is. If hundreds of people like us can come up with these criticisms, don't you think it is vaguely possible that someone at AMD did too? Maybe it's possible that AMD realized that there are markets where a product like this makes sense and other markets where it doesn't?

I guess not. I mean, they only designed one of the most successful performance processors that had their main competitor beat for 4 years, which is a long time in this industry. I mean, what about that accomplishment would even suggest that they have the barest of cognitive powers. And yes, this was rhetorical.

By the by, your spelling is atrocious. Firefox 2.0 has a built-in spell checker, and having correct spelling and grammar goes a long way toward making even foolish arguments look better.


"Game reviewers fought each other to write the most glowing coverage possible for the powerhouse Sony, MS systems. Reviewers flipped coins to see who would review the Nintendo Wii. The losers got stuck with the job." -- Andy Marken

Related Articles













botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki