


CPU and GPU all in one to deliver the best performance-per-watt-per-dollar

AMD today during its analyst day conference call unveiled more details of its next-generation Fusion CPU and GPU hybrid. Mentions of Fusion first appeared shortly after AMD’s acquisition of ATI Technologies was completed a few months ago. AMD is expected to debut its first Fusion processor in the late 2008 to early 2009 timeframe.

AMD claims: “Fusion-based processors will be designed to provide step-function increases in performance-per-watt-per-dollar over today’s CPU-only architectures, and provide the best customer experience in a world increasingly reliant upon 3D graphics, digital media and high performance computing.”

The GPU and CPU appear to be separate cores on a single die according to early diagrams of AMD’s Fusion architecture. CPU functionality will have access to its own cache while GPU functionality will have access to its own buffers. Joining together the CPU and GPU is a crossbar and integrated memory controller. Everything is connected via HyperTransport links. From there the Fusion processor will have direct access to system memory that appears to be shared between the CPU and GPU. It doesn’t appear the graphics functionality will have its own frame buffer.

While Fusion is a hybrid CPU and GPU architecture, AMD will continue to produce discrete graphics solutions. AMD still believes there is a need for discrete graphics cards for high-end users and physics processing.

Also mentioned during the conference call was AMD’s new branding scheme for ATI products. Under the new scheme, chipsets for Intel processors and graphics cards will continue on under the ATI brand name. ATI-designed chipsets for AMD platforms will be branded under AMD, as previously reported.






Highend on fusion?
By NullSubroutine on 11/17/2006 10:31:27 AM , Rating: 2
While there is limited memory bandwidth on mainboards compared to GPU PCBs, why has there never been a socket on a mainboard that could use GDDRx? Even if it couldn't be used as system memory, wouldn't it be helpful for Fusion-type capabilities? Could you theoretically place some GDDRx memory module or other in the Torrenza slot?




RE: Highend on fusion?
By SexyK on 11/17/2006 11:26:19 AM , Rating: 3
I believe there are technical barriers to using any kind of DIMM for high-speed GDDR at this point. The signaling isn't clean enough to sustain the speeds GDDR runs at, which is why modern graphics cards don't have upgradeable RAM and, vice versa, why mainboards don't have 70+ GB/s of memory bandwidth.


RE: Highend on fusion?
By Spoelie on 11/17/2006 11:48:20 AM , Rating: 3
To support GDDR speeds, the chips need to be in very close proximity to the controller and the wiring needs to be very clean. There's no way to reach those speeds using DIMMs or any other removable means; the chips have to be soldered right onto the PCB. Besides, the volumes of main system memory are still a lot higher than GDDR's.


RE: Highend on fusion?
By Khato on 11/17/2006 1:44:52 PM , Rating: 3
Theoretically you could. It'd be the same idea as FB-DIMM, but using HyperTransport as the interconnect instead. You'd still be limited by HyperTransport bandwidth, either 20 or 40 GB/s depending on how well it's done and how evenly graphics memory traffic is split between reads and writes. But since normal system memory bandwidth will probably only be around 12.8 GB/s at the time, it would help.

Oh, and there are two reasons why graphics cards have so much higher bandwidth than main memory. One (and the reason why having it be a separate card makes sense) is that the width of the data path can be -far- larger, since the graphics card PCB can be designed around that, and it's a smaller PCB (hence making it more layers isn't -quite- as expensive). Second, graphics memory is based on GDDR3 while main memory is currently DDR2, which is the primary reason for the frequency differential. (Yes, shorter trace length makes a difference, but it's minimal.)
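The bandwidth figures in this thread all follow from the same arithmetic: bus width (in bytes) times transfer rate. A minimal sketch of that calculation — the 256-bit / 1600 MT/s graphics configuration below is an illustrative era-appropriate assumption, not a quoted spec:

```python
# Peak-bandwidth arithmetic behind the numbers in this thread.
# Configurations are illustrative assumptions, not product specs.

def bandwidth_gb_s(bus_width_bits: int, transfer_rate_mt_s: float) -> float:
    """Peak bandwidth = bus width in bytes * transfers per second."""
    return (bus_width_bits / 8) * transfer_rate_mt_s / 1000

# Dual-channel DDR2-800 system memory: 128-bit bus, 800 MT/s
system_mem = bandwidth_gb_s(128, 800)     # -> 12.8 GB/s

# Hypothetical GDDR3 card: 256-bit bus at 1600 MT/s effective
graphics_mem = bandwidth_gb_s(256, 1600)  # -> 51.2 GB/s

print(f"system: {system_mem} GB/s, graphics: {graphics_mem} GB/s")
```

The wider bus contributes a 2x factor and the higher transfer rate another 2x, which is how a graphics card ends up roughly 4x ahead of dual-channel system memory despite using the same basic DRAM technology.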



Copyright 2014 DailyTech LLC.