
AMD's Giuseppe Amato dispels rumors and misinterpreted statements about "Fusion," GPGPU and the company

In an interview with Italian media, AMD Executive Giuseppe Amato, Technical Director of Sales & Marketing EMEA, discussed AMD's current market position and future products.

In the interview, Amato shed more light on the structure of AMD's upcoming Fusion processors. One misconception Amato corrected is the belief that Fusion processors will only be available in single-chip flavors; they will also come in multi-chip formats. Two Fusion processors linked together would allow for parallel GPUs. He said that AMD has not yet solidified its future plans for Fusion, but indicated it would be very likely to see a Fusion processor with a GPU and CPU connected through a CrossFire-like interface -- with a total TDP of less than 120 Watts.

Amato also praised the flexibility of the Fusion processor in the interview and told Hardware Upgrade that it will allow AMD to "integrate a specific number of GPU and CPU cores depending on the customer and the uses for which they will use the chip." 

"AMD isn't just a microprocessor company anymore", he stated. After the acquisition of ATI, "AMD changed from a processor company to a platform company." This is where Fusion ties in. Its high grade of flexibility will combine GPUs and CPUs into one product. Amato believes that Fusion platforms will be able to specifically match the needs of its customers.

AMD's Fusion processors will also be closely tied to GPGPU. Using a GPGPU platform based on Fusion, AMD will be able to offer HPC systems that handle mixed workloads. Code that is better suited to CPUs will be executed on the CPU portion of the Fusion processor, while code that runs more efficiently as GPGPU work will be executed on the GPU portion. In short, AMD's Fusion processors will be able to handle a variety of workloads, allowing them to better meet the needs of AMD's customers.
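The CPU/GPU split described above can be sketched as a simple dispatcher that routes each task to the side its workload shape suits best. This is a hypothetical illustration, not AMD's API; the task format, the "data_parallel" flag and the two device functions are all made up for the example.

```python
# Hypothetical sketch of heterogeneous dispatch on a Fusion-style part:
# tasks flagged as data-parallel go to the GPU portion, the rest to the
# CPU cores. The "devices" here are plain Python functions standing in
# for real execution units.

def run_on_cpu(task):
    # Branchy, serial work stays on the CPU cores.
    return ("cpu", task["fn"](*task["args"]))

def run_on_gpu(task):
    # Wide, regular data-parallel work maps to the stream processors.
    return ("gpu", task["fn"](*task["args"]))

def dispatch(task):
    """Route a task to the device its workload shape suits best."""
    if task.get("data_parallel"):
        return run_on_gpu(task)
    return run_on_cpu(task)

if __name__ == "__main__":
    # A serial task: a reduction with a data-dependent accumulator.
    serial = {"fn": lambda xs: sum(xs), "args": ([1, 2, 3],),
              "data_parallel": False}
    # A stream task: the same operation applied to every element.
    stream = {"fn": lambda xs: [x * 2 for x in xs], "args": ([1, 2, 3],),
              "data_parallel": True}

    print(dispatch(serial))   # routed to the CPU side
    print(dispatch(stream))   # routed to the GPU side
```

The point of the sketch is that the routing decision is a property of the code's shape, not of the box it runs in, which is what makes a single-package CPU+GPU attractive for mixed HPC workloads.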

Amato also dispelled rumors that AMD will go completely fabless. He attributes the rumor to a misinterpretation of a speech given by Hector Ruiz. AMD does, however, plan to stick to a fabless manufacturing model for GPU and chipset products.

The full interview can be viewed at Hardware Upgrade.


RE: sounds good
By mmarq on 7/17/2007 7:00:49 PM , Rating: 3
but i wonder, will the chip be integrated into the board, or will it be a slot keeping it upgradeable

In the first implementation it will be like today's CPU + PCIe GPU, plus a (hardware circuit + thin software layer) called CTM, for application-specific stream task acceleration.

In the interview Amato said they were able to accelerate a virus scanner to performance not possible with the CPU alone. It surely 'can' be done with * the majority * of applications, if software developers program them that way, because almost all of them have, or could have, stream-specific tasks, except the purely integer ones.

An example that we have shown that uses our previous video architectures is Tarari. By recompiling its antivirus scanner and using an AMD GPU, Tarari was able to reach significantly higher performance compared to what would have been obtainable using only a CPU
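Why is a virus scanner stream-friendly at all? Because its inner loop applies the same signature test independently to every block of data, so the blocks can be processed in parallel lanes. The sketch below only shows that structure; Tarari's actual scanner is not public, and the signatures and block sizes here are invented for the example.

```python
# Why an antivirus scan maps to stream hardware: the same signature test
# runs independently on every block of a file, so blocks are natural
# parallel lanes. (Illustrative only; Tarari's real scanner is not public.)

SIGNATURES = [b"\xde\xad\xbe\xef", b"EVIL"]

def scan_block(block):
    """One 'lane' of work: does this block contain any signature?"""
    return any(sig in block for sig in SIGNATURES)

def scan(data, block_size=16, overlap=8):
    # Overlapping blocks so a signature straddling a boundary is caught.
    # Each scan_block call is independent of the others, which is exactly
    # what a GPU's stream processors exploit.
    hits = []
    for off in range(0, len(data), block_size - overlap):
        if scan_block(data[off:off + block_size]):
            hits.append(off)
    return hits

if __name__ == "__main__":
    clean = b"just some harmless bytes, nothing here"
    infected = b"padding" + b"EVIL" + b"more padding"
    print(scan(clean))     # []
    print(scan(infected))  # offsets of blocks containing a signature
```

On a CPU the loop over blocks runs serially; on stream hardware every block would be a lane, which is where the speedup Amato describes comes from.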

One honest comment I have is that, thank god, we won't have to climb all the way up to SSE50 anymore, because it would be useless and wouldn't help software developers, who right now are only just moving past SSE2.

It would be a revolution if getting the best performance no longer required having the best CPU. At least for streaming and FP.

Meaning: the more applications go for CTM, the more irrelevant it becomes which CPU, Intel's or AMD's, is a little more performant at general-purpose work than the other.

I think a core/GPU configuration that's upgradeable would be great.. although gone would be the days of upgrading part by part haha.

No. You would still be able to upgrade part by part.

A large error that has been made regarding Fusion is that people are thinking that this type of architecture will only be a single-chip-package architecture, meaning both the CPU and GPU are to be integrated on the same die.

A) - General streaming will come to the GPU or GPGPU via CTM (hardware circuit + thin software layer) and/or other schemes. Here we already have propositions from ATI and Nvidia (Tesla) of GPGPUs capable of more general computing tasks, like physics acceleration and others.

ATI is further along in this game, but Nvidia has implemented something similar, and it will be possible to use Intel CPUs with either the ATI or the Nvidia solution.

B) - Streaming will come to the CPU by means of:
__ B1) Making functional units inside the CPU capable of it, as is the case with today's SSE units in (more or less) all CPUs, requiring only additional logic and a software layer for load balancing with stream units outside a particular CPU die, and/or for general application stream task acceleration, as in CTM or the almost-purely-software Intel derivative of the Larrabee project.

Those functional units could possibly(?) also take on more GPU-centric tasks like vertex and shader processing, turning the CPU into something GPU-like. Only AMD at the moment, it seems to me, has any intention of doing this in the shorter term, because of the advantage of better 'clustering' in their CPU designs(?). Will it happen?... we'll have to see.

Thanks to the availability of a higher number of registers, general-purpose GPU computing will also be made much easier in 2009 when Microsoft releases the DirectX 11 API.
(In that case CPU registers too, IMO, because x86-64 could have 32 GPRs, double today's count, without breaking applications(?))

__ B2) Having inside the CPU die *units separated from the traditional cores*, like other cores connected through a crossbar or another link, or sharing an L2/L3, and capable of streaming, as in the IBM Cell processor implementation in the PS3, and the derivative in the Xbox 360.

But for load balancing with stream units outside a particular CPU die, and/or for general application stream task acceleration, something like CTM or the almost-purely-software Intel 'Larrabee' derivative would still be needed.

Again, those separate units could surely also take on more GPU-centric tasks like vertex and shader processing, turning the CPU into something GPU-like.

__ B3) Having inside the CPU package, in an MCM configuration like the Core 2 Quad's, a traditional CPU die and a GPU die. In this example a 'C2Quad'(?) would be a C2 die + a GPU die. Here the CPU die and the GPU die communicate through a link or by sharing an L2/L3.

Again, for load balancing with stream units outside a particular CPU MCM package, and/or for general application stream task acceleration, something like CTM or the almost-purely-software Intel 'Larrabee' derivative would still be needed.

AMD now possesses all of the technologies it needs to develop Fusion architectures. Whether it is a native solution with several cores integrated in the same die, similar to what we are using for Barcelona, or a multi-die package (author's note: the same architecture used by Intel for its Core 2 Quad chips) composed of two separate silicon dies installed on the same package, AMD is open to all technological evolutions that the market requires.


This hardware circuit + thin software layer is imperative, be it CTM, which seems to me to have become the central part of CrossFire 2 for ATI, or Larrabee, which relies much more on software.

All those GPGPUs and CPU/GPUs can be connected together very effectively through a cache-coherent protocol, as used in massively parallel mainframes and clusters, of which IBM, AMD and others have very solid implementations, and which Intel will have when CSI arrives (if it has cache coherency, and in what form). That would make the CPU's importance even more irrelevant for the large group of stream-capable applications.

Closer to 'us' in the enthusiast market, that is precisely where AMD, with HT3, DC 2, and HTX slots like those in the upcoming RD790 and its successors, will absolutely rock!!...

I'm not paid to post, nor a fanboy, nor defending anyone, but it seems to me that AMD has a clear advantage, and if CSI doesn't come out relatively soon, Intel will be in trouble if it stays stuck with the same old shared FSB.

"And boy have we patented it!" -- Steve Jobs, Macworld 2007

Most Popular ArticlesSmartphone Screen Protectors – What To Look For
September 21, 2016, 9:33 AM
UN Meeting to Tackle Antimicrobial Resistance
September 21, 2016, 9:52 AM
Walmart may get "Robot Shopping Carts?"
September 17, 2016, 6:01 AM
5 Cases for iPhone 7 and 7 iPhone Plus
September 18, 2016, 10:08 AM
Update: Problem-Free Galaxy Note7s CPSC Approved
September 22, 2016, 5:30 AM

Copyright 2016 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki