

AMD and ATI are already planning scalable designs for 2008
"Torrenza" platforms and unified GPU/CPU processors

AMD announced the $5.4B USD takeover of ATI earlier today, but the new company is already making large plans for the future.  Dave Orton, soon-to-be Executive Vice President of AMD's ATI Division, claimed that AMD and ATI would begin leveraging the sales of both companies by 2007.  However, a slide from the AMD/ATI merger documentation has already shown some interesting development plans for 2008.

Specifically, it appears as though AMD and ATI are planning unified, scalable platforms using a mixture of AMD CPUs, ATI chipsets and ATI GPUs.  This sort of multi-GPU, multi-CPU architecture is extremely reminiscent of AMD's Torrenza technology announced this past June, which allows low-latency communication between the chipset, CPU and main memory. The premise of Torrenza is to open the channel for embedded chipset development from third-party companies. AMD said the technology is an open architecture, allowing what it called "accelerators" to be plugged into the system to perform special duties, similar to the way we have a dedicated GPU for graphics.

AMD President Dirk Meyer also confirmed that the company is looking beyond multi-processor platforms, stating, "As we look towards ever finer manufacturing geometries we see the opportunity to integrate CPU and GPU cores together onto the same die to better serve the needs of some segments."  A clever DailyTech reader recently pointed out that AMD filed its first graphics-oriented patent just a few weeks ago.  The patent, titled "CPU and graphics unit with shared cache," suggests that these pet projects at AMD are more than just pipe dreams.
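As a rough illustration of why an on-die shared cache is attractive, the sketch below models the time to hand a small batch of data from the CPU to the GPU over two paths: a discrete card behind an expansion bus versus an on-die shared cache. Every latency and bandwidth figure here is an illustrative assumption, not a measured value from AMD or ATI.

```python
# Minimal sketch of CPU-to-GPU hand-off cost under two assumed paths.
# Every latency and bandwidth figure is an illustrative assumption.

def transfer_time_us(size_bytes, latency_us, bandwidth_gb_s):
    """Simple cost model: fixed latency plus size divided by bandwidth."""
    return latency_us + size_bytes / (bandwidth_gb_s * 1e9) * 1e6

batch = 64 * 1024  # 64 KB of per-frame state handed from CPU to GPU

# Assumed discrete path: expansion-bus round trip, modest bandwidth.
discrete = transfer_time_us(batch, latency_us=5.0, bandwidth_gb_s=4.0)

# Assumed on-die path: shared cache, far lower latency, higher bandwidth.
shared = transfer_time_us(batch, latency_us=0.05, bandwidth_gb_s=50.0)

print(f"discrete bus : {discrete:.2f} us per hand-off")
print(f"shared cache : {shared:.2f} us per hand-off")
```

For small, frequent hand-offs the fixed bus latency dominates the cost; for large streaming transfers the bandwidth term dominates instead, which is exactly the counterpoint several readers raise in the comments below.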

During the AMD/ATI merger conference call, Meyer added that not too long ago, floating-point processing was done on a separate piece of silicon.  He suggested that the FPU's integration into the CPU may not be too different from the eventual evolution of the GPU into the CPU.

Bob Rivet, AMD's Chief Financial Officer, claims the combined company will save nearly $75M USD in licensing and development overlap in 2007 alone, and another $125M in 2008.  Clearly the combined development between the two companies has a few cogs in motion already.


Comments



By rrsurfer1 on 7/24/2006 9:38:01 AM , Rating: 4
*May* be???

We're talking shared, extremely fast cache. There's no better way to keep a GPU fed. The CPU will be able to closely work with the GPU on-die. There's no doubt in my mind this type of solution will not only be faster - but also much more efficient. If done correctly this could yield huge increases in performance, and decreases in overall power use. With ATI and AMD working together, this is more than possible. I can't wait.


By DallasTexas on 7/24/2006 9:55:18 AM , Rating: 2
I agree but I'll avoid saying "definitely" because discounting the discrete path is a bit premature.

My guess is that once physics acceleration takes root, the discrete graphics option will yield better results than integrated 3D graphics. Of course, physics will some day ALSO be integrated but we're talking about 2D/3D graphics in this thread - at least I was.

regards


By Merry on 7/24/2006 9:55:57 AM , Rating: 2
But then surely if you wanted to upgrade your graphics you'd need a new processor and/or motherboard?

I don't think many would be happy with that.


By rrsurfer1 on 7/24/2006 10:00:27 AM , Rating: 2
True, that is a downside. But if they can use the on-die nature of the GPU to destroy the discrete competition, not many people would have a problem with having it on-die. Especially if it takes discrete GPUs many generations to catch up. Conceivably, with low-latency, high-bandwidth access to shared cache and specialized CPU-GPU interaction, you could make a CPU/GPU that would be unmatched by anything that has to go through a bus, with its associated latency.


By Spoonbender on 7/24/2006 10:00:18 AM , Rating: 3
Except for one thing. The GPU doesn't work on ~4MB data sets. It rushes through 250+ MB of data very quickly. So sharing cache with a CPU isn't an obvious improvement. But like the article said, it'll be great for specific customers. It could make for some nice low-power laptops with decent performance.


And about the FPUs disappearing, try rereading the article, especially the bits about the Torrenza platform. Looks like the FPU might be back with a vengeance... Full circle indeed. :)

I think the same might happen with CPUs. For everyday tasks, an integrated GPU might be a great solution. Lower costs, lower power consumption, low latency on CPU/GPU traffic.
But for "serious graphics", you'll still want to plop down a dedicated chip.


By rrsurfer1 on 7/24/2006 10:06:08 AM , Rating: 2
Good point. However, it races through *high-latency*, relatively low-bandwidth memory. Cache is much faster and higher bandwidth. There are optimizations you could use there that are impossible to implement with discrete solutions. But like you, I agree this would probably be most applicable in the beginning to low-power laptops.


By SexyK on 7/24/2006 10:17:09 AM , Rating: 3
I don't know why everyone is saying the latency will be lower with this dual socket setup. You're still going to need 256-512MB+ frame buffers, and last time I checked, the memory integrated onto discrete graphics cards was WAY faster than the main system memory. In fact, that's one of the benefits of discrete graphics: they can keep the memory near the chip and not use sockets etc., which makes routing easier and keeps the clock speeds up.... maybe they'll have a solution for this problem with this dual socket system, but I'm not holding my breath.


By rrsurfer1 on 7/24/2006 10:28:39 AM , Rating: 2
With a good integrated memory controller on-die this would cease to be a problem. If you look it up you'll find DDR2 and DDR3 have roughly comparable bandwidth. The reason is NOT because it's faster than system memory, it's because it's faster than going off the discrete GPU board, and through the memory controllers and system bus. With an ON-DIE (not dual socket as you stated) GPU, the memory could be shared with the system without the additional latency that discrete boards using system memory have to deal with.


By SexyK on 7/25/2006 12:26:08 AM , Rating: 2
quote:
by rrsurfer1 on July 24, 2006 at 10:28 AM

With a good integrated memory controller on-die this would cease to be a problem. If you look it up you'll find DDR2 and DDR3 have roughly comparable bandwidth. The reason is NOT because it's faster than system memory, it's because it's faster than going off the discrete GPU board, and through the memory controllers and system bus. With an ON-DIE (not dual socket as you stated) GPU, the memory could be shared with the system without the additional latency that discrete boards using system memory have to deal with.


Huh? I think you're confused. A 7900GTX has over 50GB/s of bandwidth between the memory and the GPU. An AM2 system even maxed out with DDR2-800 only has a theoretical max of ~12.8 GB/s of bandwidth. That is a LOT of ground to make up.
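Those two figures can be reconstructed from bus width and data rate. The sketch below assumes a 256-bit bus at an effective 1.6 GT/s for the 7900 GTX's GDDR3 and dual-channel 64-bit DDR2-800 for the AM2 system; these are period specs taken as assumptions, not numbers quoted in the thread.

```python
# Peak theoretical bandwidth = bus width (in bytes) * effective data rate.
# The bus widths and data rates below are assumed period specs.

def peak_gb_s(bus_bits, data_rate_mt_s, channels=1):
    return bus_bits / 8 * data_rate_mt_s * channels / 1000

gddr3_7900gtx = peak_gb_s(bus_bits=256, data_rate_mt_s=1600)            # ~51.2 GB/s
ddr2_800_dual = peak_gb_s(bus_bits=64, data_rate_mt_s=800, channels=2)  # ~12.8 GB/s

print(f"7900 GTX GDDR3        : {gddr3_7900gtx:.1f} GB/s")
print(f"dual-channel DDR2-800 : {ddr2_800_dual:.1f} GB/s")
```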


By wingless on 7/24/2006 10:44:20 AM , Rating: 2
This is a good point and I'm worried about this too, but we should all know that DDR3 is on its way to the desktop in 2007 and 2008. Also, having a CPU and GPU damn near plugged together like LEGO on this HyperTransport bus may make things very fast. They may show us the coolest tech we've ever seen in 2008 and 2009.


By Clauzii on 7/25/2006 5:10:39 PM , Rating: 2
:O

That was a BIG framebuffer :O

I want that 11K x 11K resolution NOW :)


By Clauzii on 7/25/2006 5:13:21 PM , Rating: 2
... as a reply to this: "You're still going to need 256-512MB+ frame buffers..."


By SexyK on 7/25/2006 9:22:08 PM , Rating: 2
quote:
:O

That was a BIG framebuffer :O

I want that 11K x 11K resolution NOW :)


With AA and AF you can fill a 256-512MB frame buffer at much lower resolutions than that.
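A rough calculation shows why. The resolution, sample count, and buffer layout below are assumptions for illustration; with multisample AA the color and depth buffers are stored per sample, so memory use climbs quickly even at ordinary resolutions.

```python
# Rough framebuffer footprint with multisample AA.
# Resolution, sample count, and buffer layout are assumptions for illustration.

width, height = 1920, 1200
aa_samples = 4             # 4x multisample AA
bytes_color = 4            # 32-bit color per sample
bytes_depth = 4            # 24-bit depth + 8-bit stencil per sample

pixels = width * height
multisampled_mb = pixels * aa_samples * (bytes_color + bytes_depth) / 2**20
resolved_mb = pixels * bytes_color * 2 / 2**20   # resolved front + back buffers

print(f"multisampled color+depth : {multisampled_mb:.0f} MB")
print(f"resolved buffers         : {resolved_mb:.0f} MB")
# Add textures, vertex data, and extra render targets and a 256-512 MB card
# fills up well below an 11K x 11K display resolution.
```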


By Clauzii on 7/26/2006 9:44:50 PM , Rating: 2
My fault :)

I was thinking 2D :(


By pnyffeler on 7/24/2006 10:06:38 AM , Rating: 3
With the advent of Windows Vista, lumping the CPU & GPU into the same memory pool will not only be feasible but also the next logical move. Before Vista, GPUs were more or less beyond the control of the OS, so in order for them to work, they needed their own supply of memory that they controlled themselves. That was either in the form of on-card memory or shared memory for built-in GPUs. As everyone knows, shared memory sucks because the bandwidth is too small.

Now enter Vista. The OS can now manage the GPU as it does for the CPU. That also means that it can regulate the memory allocated to the GPU, and having separate memory supplies for the CPU and GPU becomes wasteful. Currently, if the GPU isn't active, the CPU can't use the GPU's unused memory space, and vice versa. By giving the two processors access to the same memory, you can allocate memory use as needed to either, or, even cooler, you can point the GPU to directly read information that the CPU has just written.

Finally, with 64-bit Vista, you've eliminated the 4 GB memory limit, making it possible to stuff your rig with RAM. With 8 GB of RAM, you could have 3-4 GB allocated to your game of choice, 2 GB allocated to the GPU to make it look really pretty, and still have enough RAM left over to keep all of your other programs happy.

Better start saving your allowances now....


By rrsurfer1 on 7/24/2006 10:11:07 AM , Rating: 2
Real good point.


By piraxha on 7/24/2006 12:54:13 PM , Rating: 2
The merging of CPUs and GPUs has already started, at VIA:

http://www.viaarena.com/default.aspx?PageID=5&Arti...

"To achieve this, VIA’s hardware strategy involves the explicit design of more performance per watt at the silicon level and more features per square inch at the platform level. To demonstrate this, Wenchi showed the fourth generation VIA processor named John. John features the CPU, chipset and graphics processor in the one package."

It should make for some interesting competition.


By Knish on 7/24/2006 6:41:29 PM , Rating: 2
quote:
The merging of CPUs and GPUs has already started, at VIA:

Sorry, I like my processors good.


By Targon on 7/24/2006 9:16:10 PM , Rating: 1
The bandwidth issue could easily be solved by having the graphics card use an HTX (HyperTransport) slot instead of PCI Express. With dedicated memory slots connected directly to the HTX slot, the video card could talk directly to this special bank of memory and the latency issue becomes almost non-existent.


By Tyler 86 on 7/26/2006 11:49:30 PM , Rating: 2
I believe Targon hit the most obvious solution.

AMD has recently opened up their HTX specs to allow for drop-in coprocessors in their Multi-CPU boards.

Now they might be pushing 2 sockets, or even 4 sockets, to the desktop segment.

Perhaps when you go for your next upgrade, you'll have a choice of "Do I want more CPUs, or more GPUs?"


By jonobp1 on 7/24/2006 11:05:42 AM , Rating: 2
Remember months back when AMD licensed Z-RAM technology to research for use in its processor cache? At least 5 times the cache density we have now. So if Intel is cramming 24MB+ of cache onto 65nm parts, couldn't we assume that in perhaps three or so years, when AMD/ATI start really putting things on the core, we'll have 100MB+ caches on 45nm parts? Besides the fact that there may be almost no latency with an on-die approach, this would perform even better than something going through HyperTransport, which in turn would work better than PCIe. I can see ATI focusing on one on-die solution whose performance would be determined by the amount of cache on the chip. So instead of 50 different variants of an R520 core, we'd have cheaper and more expensive CPUs determining your potential graphics workload.
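The arithmetic behind that guess is straightforward. The sketch below assumes ideal area scaling from 65 nm to 45 nm and takes the roughly 5x density claim for Z-RAM at face value; both factors are assumptions, not announced specs.

```python
# Rough cache-size scaling estimate.  Both factors are assumptions: ideal
# area scaling from 65 nm to 45 nm, and the ~5x density claimed for Z-RAM.

sram_65nm_mb = 24                  # large SRAM cache on a 65 nm part
area_scaling = (65 / 45) ** 2      # ~2.1x more cells per unit area at 45 nm
zram_density = 5                   # Z-RAM claimed ~5x denser than SRAM

zram_45nm_mb = sram_65nm_mb * area_scaling * zram_density
print(f"hypothetical 45 nm Z-RAM cache: ~{zram_45nm_mb:.0f} MB in the same area")
```

Even with far less optimistic scaling than this ideal case, the 100MB+ figure looks plausible.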


By Randalllind on 7/24/2006 5:19:03 PM , Rating: 2
So what do they want from us? Buy a motherboard with an integrated GPU and 4GB of RAM on board, then allocate half of it to video?

On-board video will never overtake a single video card. But who knows; if they make it so you can put 512MB to 1GB toward the video card and leave another GB or so of memory for the PC, it may work great.


By Eris23007 on 7/24/2006 8:33:51 PM , Rating: 1

Don't be so sure you can predict future trends. There are a number of factors that could lead to CPU-GPU integration enjoying a huge advantage. For example, a second memory controller with a separate physical path to separate physical sticks of memory, which might very well be GDDR4 or somesuch. Since AMTDI (ATMID?) makes the chipsets, GPUs, and CPUs, they could very well create such a product.

"Nobody will ever need more than 640K of RAM".

I rest my case.


By oTAL on 7/25/2006 10:48:47 AM , Rating: 2
That constant misquote is starting to get on my nerves...
If you wanna quote someone, please do it right...


By oTAL on 7/25/2006 11:13:50 AM , Rating: 2
Here's a nice quote by Bill Gates, for all those people who hammer on intellectual property theft:

"Stolen's a strong word. It's copyrighted content that the owner wasn't paid for."
Source: Bill Gates on ...the Competition, Wall Street Journal, 2006-06-19

He is a very intelligent man. Don't attribute stupid quotes to him without at least doing a Google search.


By blazeoptimus on 7/24/2006 11:51:04 AM , Rating: 3
I think we've only begun to explore the possibilities here. If we're going for small dedicated CPUs (which seems to be where this is headed), then only general-purpose business machines will have CPUs with integrated graphics. Higher-end equipment may end up with a heavily socketed board, with some sockets used for CPU-style tasks, some for GPU-style tasks, some for physics, etc. The idea is that with AMD's HyperTransport and CPUs, and ATI's GPU and chipset tech, the landscape is open for a completely new way of building the modern PC: something much more configurable and modular. Want more CPU power? Swap out a graphics chip and put in a CPU. Want more gaming power? Put the extra graphics chip back in, and so on.


"It seems as though my state-funded math degree has failed me. Let the lashings commence." -- DailyTech Editor-in-Chief Kristopher Kubicki

Related Articles













botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki