

AMD and ATI are already planning scalable designs for 2008
"Torrenza" platforms and unified GPU/CPU processors

AMD announced the $5.4B USD takeover of ATI earlier today, but the new company is already making large plans for the future.  Dave Orton, soon-to-be Executive Vice President of AMD's ATI Division, claimed that AMD and ATI would begin leveraging the sales of both companies by 2007.  However, a slide from the AMD/ATI merger documentation has already shown some interesting development plans for 2008.

Specifically, it appears as though AMD and ATI are planning unified, scalable platforms using a mixture of AMD CPUs, ATI chipsets and ATI GPUs.  This sort of multi-GPU, multi-CPU architecture is extremely reminiscent of AMD's Torrenza technology announced this past June, which allows low-latency communication between chipset, CPU and main memory. The premise of Torrenza is to open the channel for embedded chipset development by third-party companies. AMD said the technology is an open architecture, allowing what it called "accelerators" to be plugged into the system to perform special duties, similar to the way we have a dedicated GPU for graphics.

Furthermore, AMD President Dirk Meyer confirmed that the company is looking beyond multi-processor platforms, stating "As we look towards ever finer manufacturing geometries we see the opportunity to integrate CPU and GPU cores together onto the same die to better serve the needs of some segments."  A sharp-eyed DailyTech reader pointed out that AMD filed its first graphics-oriented patent just a few weeks ago.  The patent, titled "CPU and graphics unit with shared cache," seems to indicate that these pet projects at AMD are more than just pipe dreams.

During the AMD/ATI merger conference call, Meyer added that not too long ago, floating-point processing was done on a separate piece of silicon.  He suggested that the FPU's integration into the CPU may not be too different from the coming evolution of the GPU into the CPU.

Bob Rivet, AMD's Chief Financial Officer, claims the combined company will save nearly $75M USD in licensing and development overlap in 2007 alone, and another $125M in 2008.  Clearly, joint development between the two companies already has a few cogs in motion.


Comments

By rrsurfer1 on 7/24/2006 9:38:01 AM , Rating: 4
*May* be???

We're talking shared, extremely fast cache. There's no better way to keep a GPU fed. The CPU will be able to closely work with the GPU on-die. There's no doubt in my mind this type of solution will not only be faster - but also much more efficient. If done correctly this could yield huge increases in performance, and decreases in overall power use. With ATI and AMD working together, this is more than possible. I can't wait.


By DallasTexas on 7/24/2006 9:55:18 AM , Rating: 2
I agree but I'll avoid saying "definitely" because discounting the discrete path is a bit premature.

My guess is that once physics acceleration takes root, the discrete graphics option will yield better results than integrated 3D graphics. Of course, physics will some day ALSO be integrated but we're talking about 2D/3D graphics in this thread - at least I was.

regards


By Merry on 7/24/2006 9:55:57 AM , Rating: 2
But then surely if you wanted to upgrade your graphics you'd need a new processor and/or motherboard?

I don't think many would be happy with that.


By rrsurfer1 on 7/24/2006 10:00:27 AM , Rating: 2
True, that is a downside. But if they can use the on-die nature of the GPU to destroy the discrete competition, not many people would have a problem with having it on-die. Especially if it takes discrete GPUs many generations to catch up. Conceivably, with low-latency, high-bandwidth access to shared cache and specialized CPU-GPU interaction, you could make a CPU/GPU that would be unmatched by anything that has to go through a bus, with its associated latency.


By Spoonbender on 7/24/2006 10:00:18 AM , Rating: 3
Except for one thing. The GPU doesn't work on ~4MB data sets. It rushes through 250+ MB of data very quickly. So sharing cache with a CPU isn't an obvious improvement. But like the article said, it'll be great for specific customers. It could make for some nice low-power laptops with decent performance.


And about the FPU's disappearing, try rereading the article, especially the bits about the Torrenza platform. Looks like the FPU might be back with a vengeance... Full circle indeed. :)

I think the same might happen with CPU's. For everyday tasks, an integrated GPU might be a great solution. Lower costs, lower power consumption, low latency on CPU/GPU traffic.
But for "serious graphics", you'll still want to plop down a dedicated chip.


By rrsurfer1 on 7/24/2006 10:06:08 AM , Rating: 2
Good point. However, it races through *high-latency*, relatively low-bandwidth memory. Cache is much faster and higher bandwidth. There are optimizations you could use there that are impossible to implement with discrete solutions. But like you, I agree this would probably be most applicable, in the beginning, to low-power laptops.
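
A rough order-of-magnitude sketch of the latency argument above; the figures below are illustrative assumptions, not measurements of any specific hardware.

# Ballpark latencies for feeding a GPU by different paths (assumed figures,
# order-of-magnitude only, purely for illustration).

latency_ns = {
    "shared on-die cache hit":                 15,
    "system DRAM via on-die controller":       70,
    "round trip over PCIe to a discrete card": 1000,
}

base = latency_ns["shared on-die cache hit"]
for path, ns in latency_ns.items():
    print(f"{path:42s} ~{ns:5d} ns  ({ns / base:.0f}x a cache hit)")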


By SexyK on 7/24/2006 10:17:09 AM , Rating: 3
I don't know why everyone is saying the latency will be lower with this dual socket setup. You're still going to need 256-512MB+ frame buffers, and last time I checked, the memory integrated onto discrete graphics cards was WAY faster than the main system memory. In fact, that's one of the benefits of discrete graphics: they can keep the memory near the chip and not use sockets etc., which makes routing easier and keeps the clock speeds up... maybe they'll have a solution for this problem with this dual socket system, but I'm not holding my breath.


By rrsurfer1 on 7/24/2006 10:28:39 AM , Rating: 2
With a good integrated memory controller on-die this would cease to be a problem. If you look it up you'll find DDR2 and DDR3 have roughly comparable bandwidth. The reason is NOT because it's faster than system memory, it's because it's faster than going off the discrete GPU board, and through the memory controllers and system bus. With an ON-DIE (not dual socket as you stated) GPU, the memory could be shared with the system without the additional latency that discrete boards using system memory have to deal with.


By SexyK on 7/25/2006 12:26:08 AM , Rating: 2
quote:
by rrsurfer1 on July 24, 2006 at 10:28 AM

With a good integrated memory controller on-die this would cease to be a problem. If you look it up you'll find DDR2 and DDR3 have roughly comparable bandwidth. The reason is NOT because its faster than system memory, it's because its faster than going off the discrete GPU board, and through the memory controllers and system bus. With an ON-DIE (not dual socket as you stated) GPU, the memory could be shared with the system without the additional latency that discrete boards using system memory have to deal with.


Huh? I think you're confused. A 7900GTX has over 50GB/s of bandwidth between the memory and the GPU. An AM2 system even maxed out with DDR2-800 only has a theoretical max of ~12.8 GB/s of bandwidth. That is a LOT of ground to make up.
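
For reference, both figures quoted above follow from data rate times bus width. A quick sketch, using the commonly cited specs for these parts as assumptions:

# Peak bandwidth = transfer rate x bus width. The specs below are the commonly
# cited figures for these parts, used only to illustrate the gap.

def peak_gb_per_s(transfers_per_s, bus_width_bits):
    return transfers_per_s * (bus_width_bits / 8) / 1e9

gpu_bw = peak_gb_per_s(1600e6, 256)  # 7900 GTX: 256-bit GDDR3 at ~1600 MT/s -> ~51.2 GB/s
sys_bw = peak_gb_per_s(800e6, 128)   # AM2: dual-channel (2 x 64-bit) DDR2-800 -> ~12.8 GB/s

print(f"Discrete card local memory: {gpu_bw:.1f} GB/s")
print(f"Dual-channel DDR2-800:      {sys_bw:.1f} GB/s")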


By wingless on 7/24/2006 10:44:20 AM , Rating: 2
This is a good point and I'm worried about this too, but we all should know that DDR3 is on its way to the desktop in 2007 and 2008. Also, having a CPU and GPU damn near plugged together like LEGO on this HyperTransport bus may make things very fast. They may show us the coolest tech we've ever seen in 2008 and 2009.


By Clauzii on 7/25/2006 5:10:39 PM , Rating: 2
:O

That was a BIG framebuffer :O

I want that 11K x 11K resolution NOW :)


By Clauzii on 7/25/2006 5:13:21 PM , Rating: 2
... as a reply to this: "You're still going to need 256-512MB+ framer buffers..."


By SexyK on 7/25/2006 9:22:08 PM , Rating: 2
quote:
:O

That was a BIG framebuffer :O

I want that 11K x 11K resolution NOW :)


With AA and AF you can fill a 256-512MB frame buffer at much lower resolutions than that.
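
To put rough numbers on that, here is a small sketch of how multisample AA inflates render-target memory at ordinary resolutions; the buffer layout is a simplified assumption, and real drivers differ in the details.

# Approximate render-target memory for a double-buffered display with
# multisample AA (simplified layout: MSAA color + MSAA depth + resolved buffers).

def render_target_mb(width, height, aa_samples, bytes_per_sample=4):
    pixels = width * height
    color_ms = pixels * aa_samples * bytes_per_sample  # multisampled color
    depth_ms = pixels * aa_samples * bytes_per_sample  # multisampled Z/stencil
    resolved = pixels * bytes_per_sample * 2           # resolved back + front buffer
    return (color_ms + depth_ms + resolved) / 2**20

print(f"1600x1200, no AA: {render_target_mb(1600, 1200, 1):6.1f} MB")
print(f"1600x1200, 4x AA: {render_target_mb(1600, 1200, 4):6.1f} MB")
print(f"2048x1536, 8x AA: {render_target_mb(2048, 1536, 8):6.1f} MB")

Add textures and geometry on top of that, and a 256MB card fills up well below 11K x 11K.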


By Clauzii on 7/26/2006 9:44:50 PM , Rating: 2
My fault :)

I was thinking 2D :(


By pnyffeler on 7/24/2006 10:06:38 AM , Rating: 3
With the advent of Windows Vista, lumping the CPU & GPU into the same memory pool will not only be feasible but also the next logical move. Before Vista, GPUs were more or less beyond the control of the OS, so in order for them to work, they needed to have their own supply of memory that they controlled themselves. That was either in the form of on-card memory or shared memory for built-in GPUs. As everyone knows, shared memory sucks because the bandwidth is too small.

Now enter Vista. The OS can now manage the GPU as it does the CPU. That also means it can regulate the memory allocated to the GPU, and having separate memory supplies for the CPU and GPU becomes wasteful. Currently, if the GPU isn't active, the CPU can't use the GPU's unused memory space, and vice versa. By giving the two processors access to the same memory, you can allocate memory as needed to either, or, even cooler, you can point the GPU to directly read information that the CPU has just written.

Finally, with 64-bit Vista, you've eliminated the 4 GB memory limit, making it possible to stuff your rig with RAM. With 8 GB of RAM, you could have 3-4 GB allocated to your game of choice, 2 GB allocated to the GPU to make it look really pretty, and still have enough RAM left over to keep all of your other programs happy.

Better start saving your allowances now....
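
As a toy illustration of the "one pool, allocate as needed" idea in the post above; this is purely a sketch, and Vista's actual video memory manager is far more involved.

# Toy model of a unified memory pool shared between CPU and GPU consumers.

class UnifiedPool:
    def __init__(self, total_mb):
        self.total = total_mb
        self.used = {"cpu": 0, "gpu": 0}

    def allocate(self, client, mb):
        if sum(self.used.values()) + mb > self.total:
            raise MemoryError(f"pool exhausted ({self.total} MB)")
        self.used[client] += mb

    def release(self, client, mb):
        self.used[client] -= min(mb, self.used[client])

pool = UnifiedPool(8192)      # 8 GB of system RAM
pool.allocate("cpu", 3584)    # game working set
pool.allocate("gpu", 2048)    # textures and render targets
pool.release("gpu", 2048)     # game exits: the GPU's share returns to the pool
print(pool.used, "of", pool.total, "MB in use")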


By rrsurfer1 on 7/24/2006 10:11:07 AM , Rating: 2
Real good point.


By piraxha on 7/24/2006 12:54:13 PM , Rating: 2
The merging of CPUs and GPUs has already started, at VIA:

http://www.viaarena.com/default.aspx?PageID=5&Arti...

"To achieve this, VIA’s hardware strategy involves the explicit design of more performance per watt at the silicon level and more features per square inch at the platform level. To demonstrate this, Wenchi showed the fourth generation VIA processor named John. John features the CPU, chipset and graphics processor in the one package."

It should make for some interesting competition.


By Knish on 7/24/2006 6:41:29 PM , Rating: 2
quote:
The merging of CPUs and GPUs has already started, at VIA:

Sorry, I like my processors good.


By Targon on 7/24/2006 9:16:10 PM , Rating: 1
The bandwidth issue could easily be solved by having the graphics card be HTX (HyperTransport) slot based instead of PCI Express. With dedicated memory slots directly connected to the HTX slot, the video card could talk directly to this special bank of memory, and the latency issue becomes almost non-existent.


By Tyler 86 on 7/26/2006 11:49:30 PM , Rating: 2
I believe Targon hit the most obvious solution.

AMD has recently opened up their HTX specs to allow for drop-in coprocessors in their Multi-CPU boards.

Now they might be pushing 2 sockets, or even 4 sockets, to the desktop segment.

Perhaps when you go for your next upgrade, you'll have a choice of "Do I want more CPUs, or more GPUs?"


By jonobp1 on 7/24/2006 11:05:42 AM , Rating: 2
Remember months back when AMD licensed Z-RAM technology to research using it in their processor cache? At least 5 times the cache density we have now. So if Intel is cramming 24MB+ of cache onto 65nm parts, couldn't we assume that in perhaps 3 or so years, when AMD/ATI start really putting things on the core, we'll have 100MB+ caches on 45nm parts? Besides that, there may be almost no latency with an on-die approach; this would perform even better than something through HyperTransport, which certainly would work better than PCIe. I can see ATI focusing on one on-die solution whose performance would be determined by the amount of cache on the chip. So instead of 50 different forms of an R520 core, we'd have cheaper and more expensive CPUs determining your potential graphics workload.
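
The projection above can be sanity-checked with quick arithmetic. Every input below is an assumption taken from the comment (the claimed Z-RAM density advantage and the cited 65nm cache size) plus ideal area scaling, so treat the result only as a rough plausibility check.

# Back-of-the-envelope check of the 100MB+ cache projection.

sram_cache_mb = 24               # largest 65nm SRAM cache cited above
zram_density  = 5                # claimed Z-RAM density advantage over SRAM
node_shrink   = (65 / 45) ** 2   # ideal area scaling from 65nm to 45nm

projected_mb = sram_cache_mb * zram_density * node_shrink
print(f"Projected on-die cache: ~{projected_mb:.0f} MB")  # ~250 MB, so 100MB+ is plausible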


By Randalllind on 7/24/2006 5:19:03 PM , Rating: 2
So what do they want from us? Buy a motherboard with an integrated GPU, then put 4GB of RAM on board and allocate half of it to video?

Onboard video will never overtake a single video card. But who knows, if they make it so you can put 512MB to 1GB toward the video card and leave another GB or so of memory for the PC, it may work great.


By Eris23007 on 7/24/2006 8:33:51 PM , Rating: 1

Don't be so sure you can predict future trends. There are a number of factors that could lead to CPU-GPU integration enjoying a huge advantage. For example, a second memory controller with a separate physical path to separate physical sticks of memory, which might very well be GDDR4 or somesuch. Since AMTDI (ATMID?) makes the chipsets, GPUs, and CPUs, they could very well create such a product.

"Nobody will ever need more than 640K of RAM".

I rest my case.


By oTAL on 7/25/2006 10:48:47 AM , Rating: 2
That constant misquote is starting to get on my nerves....
If you wanna quote someone, please do it right...


By oTAL on 7/25/2006 11:13:50 AM , Rating: 2
Here's a nice quote by Bill Gates, for all those people who hammer on intellectual property theft:

"Stolen's a strong word. It's copyrighted content that the owner wasn't paid for."
Source: Bill Gates on ...the Competition, Wall Street Journal, 2006-06-19

He is a very intelligent man. Don't attribute stupid quotes to him without at least doing a google search.


By blazeoptimus on 7/24/2006 11:51:04 AM , Rating: 3
I think we've only begun to explore the possibilities here. If we're going for small dedicated CPUs (seems to be where this is headed), then only the general-purpose business machines will have CPUs with integrated graphics. The higher-end equipment may end up with a hugely socketed board, some sockets used for CPU-style tasks, some for GPU-style tasks, some for physics, etc... The idea is that with AMD's HyperTransport and CPUs, and ATI's GPU and chipset tech, it lays the landscape open for a completely new way of building the modern PC. Something much more configurable and modular. Want more CPU power? Swap out a graphics chip and put in a CPU. Want more gaming power? Put the extra graphics chip back in, and so on.


Makes room for New Memory Tech
By rupaniii on 7/24/2006 8:52:35 AM , Rating: 2
Well, if there is to be any SERIOUS integrated GPU technology, you might as well feed the whole platform very fast memory. Perhaps if GDDR4 could be made available as a large pool of unified memory, everything could be hyper fast. Who knows, as long as it doesn't all go awry. Does INTEL have the $$$ or Interest to buy NVidia?




RE: Makes room for New Memory Tech
By phatboye on 7/24/2006 8:59:56 AM , Rating: 2
If Intel were to even think about buying nVidia, the government would scream bloody murder.


RE: Makes room for New Memory Tech
By tuteja1986 on 7/24/2006 9:42:49 AM , Rating: 2
Intel has nothing to gain if they buy Nvidia, because Nvidia doesn't bring anything that Intel doesn't already have. Buying Nvidia would be more of a hassle, and expensive. They have already started development on a mini core, and AMD knows that if they don't start this development, they are screwed in 5 years' time. Now I understand why AMD bought ATI so quickly :( for survival.


RE: Makes room for New Memory Tech
By Griswold on 7/24/2006 12:25:13 PM , Rating: 2
Nvidia has something Intel doesnt: the know-how to make high performance GPUs. But that is by no means enough incentive to get in the ring with the FTC about buying the no. 2 on the market.

And about your other assumption as to why AMD took this step - thats not really plausible.


RE: Makes room for New Memory Tech
By akugami on 7/24/2006 12:43:01 PM , Rating: 3
You're somehow implying Intel wouldn't be able to hire the engineers needed to make a high-performance GPU. The reason Intel is unlikely to be interested in the higher-end GPU arena is the highly competitive nature of this niche market. Intel vastly outsells nVidia and ATI in overall GPU sales, granted it's all integrated chipsets. The majority of GPU sales are in the low and mid range. It would be completely naive to think that Intel can't ramp up their current GPUs from being just low end to being low to mid end.


RE: Makes room for New Memory Tech
By Samus on 7/24/2006 2:22:09 PM , Rating: 2
Precisely. nVidia's revenues are pennies on the dollar compared to Intel's processor, chipset and flash divisions.

There is no real profit in GPU development for Intel... besides, the GMA 950 is actually a decent onboard processor, about X550 class, and runs Windows Vista's Aero Glass.


RE: Makes room for New Memory Tech
By Tyler 86 on 7/26/2006 11:40:51 PM , Rating: 2
'x550' class?
You must be huffing the rapidly incinerating substrate off of a SERIOUSLY overclocked GMA 950.

You're comparing it to ATi's X550.. ?

http://www.anandtech.com/video/showdoc.aspx?i=2427...

It's rivaled by ATi's Xpress IGP & X300 IGP low end video, which is also much cheaper, I might add.
800x600 HQ noAA/AF @ 14fps in Doom 3...
ATi's X550 pulls about 15 fps at 1280x1024 4xAA noAF
~25fps @ 1024x768 4xAA ...


There's no way it can hold a candle to it.


RE: Makes room for New Memory Tech
By hstewarth on 7/24/2006 3:39:45 PM , Rating: 2
Actually this makes case for Intel monopoly a less of a case. Because AMD and Intel on same company, it means both Intel and nVidia are could likely be cut out ATI and AMD products respectfully.

It is better off the CPU companies are seperated from GPU because it gives move user choices.

I think in the long term this is going lead to doom for AMD/ATI. Or complete seperation of markets between the two.


By hstewarth on 7/24/2006 3:41:07 PM , Rating: 2
Oops, the first line should be "AMD and ATI in the same company" - it would be a real Intel monopoly if Intel and AMD were in the same company. :)


RE: Makes room for New Memory Tech
By OrSin on 7/24/2006 9:00:12 AM , Rating: 4
Intel doesn't want NV. Why would they? Intel chipsets and graphics chips outsell them 4:1. AMD on the other hand has no chipset or graphics division and must rely on third-party vendors like NV, ATI and SiS. I'm still wondering why AMD bought ATI over NV. In truth I thought AMD should have bought out ULi (I think that's the name) before NV did.
They were dirt cheap and had some very good chipsets.

One thing that will be nice is that ATI should end up with a much better south bridge out of the deal. With this merger I see a slotted GPU in the new 4x4 platform. That could be nice for the integrated market.


RE: Makes room for New Memory Tech
By DallasTexas on 7/24/06, Rating: 0
RE: Makes room for New Memory Tech
By shadowzz on 7/24/2006 9:11:56 AM , Rating: 1
I think Intel should pick up nVidia for the excellent management staff, if nothing else!

http://dailytech.com/article.aspx?newsid=2424

LOL


RE: Makes room for New Memory Tech
By DallasTexas on 7/24/2006 9:13:44 AM , Rating: 2
Good point.

Intel needs to replace the 1,000 managers they laid off last week. Brilliant idea.


RE: Makes room for New Memory Tech
By shadowzz on 7/24/2006 9:17:42 AM , Rating: 2
I think I'd rather have no one work than have BDR work at my company.


RE: Makes room for New Memory Tech
By Griswold on 7/24/2006 12:21:07 PM , Rating: 2
quote:
Does INTEL have the $$$ or Interest to buy NVidia?


Three letters: FTC


RE: Makes room for New Memory Tech
By TomZ on 7/24/2006 2:35:59 PM , Rating: 2
Why?


RE: Makes room for New Memory Tech
By Griswold on 7/25/2006 4:45:40 AM , Rating: 1
You think they will just let the No. 1 buy the No. 2? Wake up tomz..


RE: Makes room for New Memory Tech
By bob661 on 7/25/2006 3:50:01 PM , Rating: 2
quote:
Does INTEL have the $$$ or Interest to buy NVidia?
Intel doesn't need to buy Nvidia. Intel already has their own fabs and makes their own chipsets and GPUs. Besides, Intel has the majority market share for both chipsets and GPUs. Why would they need Nvidia or ATI or anyone else, for that matter?


RE: Makes room for New Memory Tech
By Tyler 86 on 7/26/2006 11:43:52 PM , Rating: 2
They don't need them, but if they want to offer a viable graphics solution, they probably can't do it without help.

I don't think Intel will buy nVidia.


By archcommus on 7/24/2006 10:23:55 AM , Rating: 2
This seems to parallel the move to multi-core CPUs. Have a CPU with, say, 4 cores, 2 dedicated to everyday processing and 2 dedicated to graphics functions, or even 8 cores with 2 for everyday processing, 2 for graphics, then physics, audio, and chipset functions. Basically one chip that does everything, plugged into one socket, with 8+ GB of RAM shared between everything, with only one or two expansion slots in case you want to add something small.

Does this sound feasible at all? This was just the first thing that came to mind for me.




By rrsurfer1 on 7/24/2006 10:35:32 AM , Rating: 2
Sorta. They are including more cores on CPUs now, but they are mostly the same; the GPU core would be much different. GPUs are by nature massively parallel. CPUs are going in the direction of being more parallel, but massive parallelization != better performance in the CPU arena like it does with a GPU. So you'll still see a very specialized GPU when it is on-die. But functionally, I guess you could say it's just another core.


By Acanthus on 7/25/2006 4:27:40 AM , Rating: 2
What you're describing is called "system on a chip" and the concept has been around for a very long time.

It will become feasible at 45nm.


By archcommus on 7/25/2006 9:08:02 AM , Rating: 2
Is this what the industry is moving towards, though? Multi-core processors that do everything? Is this different from "unified architectures"?


By Tyler 86 on 7/26/2006 11:56:18 PM , Rating: 2
The embedded market is (and has) always (been) moving towards system-on-a-chip - cellphones, iPods, etc...

Unified architecture just means more abstractly modular and upgradable... e.g. faster RAM for your CPU means faster RAM for your GPU, GPUs can perform 'general purpose' CPU operations (termed 'GPGPU'), CPUs can perform 'general purpose' GPU operations (probably not gonna happen, but it's in the same theme of 'unified architecture'), or CPUs that are GPUs... heh...
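
A minimal illustration of the 'GPGPU' idea mentioned above: general-purpose work expressed as one small kernel applied independently to every element, which is the shape of workload a GPU's parallel units handle well. Plain Python here; on a real GPU each element would map to its own shader thread.

# Data-parallel "kernel" applied to every element independently.

def kernel(state):
    # any per-element arithmetic; here, a cheap physics-style integration step
    position, velocity = state
    return (position + velocity * 0.016, velocity * 0.99)

particles = [(float(i), 1.0) for i in range(10000)]
particles = list(map(kernel, particles))  # no element depends on another
print(particles[:2])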


By Tyler 86 on 7/26/2006 11:58:56 PM , Rating: 2
So, "Does this go hand-in-hand at all with CPUs moving to multi-core?"

Yes, and no, not exactly.

It's just a development of the organic supply and demand of the marketplace. It just so happens that separate cores on the same silicon complement the development of integrating a GPU into a CPU.


FPU Comment WASN'T ORTON. It was Meyer
By geo1 on 7/24/2006 11:54:50 PM , Rating: 2

Kristopher--

Suggest you listen again from 1:06 to 1:08:30. The FPU heresy (well, I'm a high-end graphics guy!) did not come from Orton's mouth. It came from Meyer. Orton talked about having a graphics-specific PC platform as one of the choices. Any chance you'd correct?




By KristopherKubicki (blog) on 7/25/2006 5:48:32 AM , Rating: 2
Hi geo1:

Thanks, I apologize if that caused anyone any confusion. I also cleaned up Meyer's quote a little earlier too, we truncated part of his comment.

Thank you,

Kristopher


By geo1 on 7/25/2006 9:22:17 AM , Rating: 2

No, thank you, Kristopher for cleaning it up when you became aware of it. Best. geo


An Idea
By SilthDraeth on 7/25/2006 3:52:42 PM , Rating: 3
So what if, let us say in 5-10 years, AMD builds a quad-core processor where all four cores are physically identical? Each core, though, could function as either a general-purpose CPU, a video processor, or a physics processor by utilizing "Field Programmable Gate Array" technology similar to what is described at this link: http://www.progeniq.com/tech.html

"FPGAs are able to 'rewire' themselves on-the-fly, allowing for full hardware level reconfigurability. The processors are reconfigured on-the-fly, as and when a different stage in processing needs to be accelerated. This allows for maximum flexibility in adapting to the computational workflow requirements."

All four cores are aware of each other, and share cache and memory. This chip also has integrated RAM, sufficient to operate independently of any outside RAM.

Of course, having all that on a CPU would drive the CPU price up, but I am pretty sure that it would not even approach the price of a CPU and an independent video card.

This would be a system builder's dream. For low-end systems/workstations that are only used for word processing and PowerPoint presentations, the on-chip memory would be sufficient. For high-end systems, external RAM could be installed and perform as an extension of the on-die RAM.

Just as the little picture illustrates, General Purpose and Media Centric could essentially be the same system, with software utilizing the cores differently. Same for Data Centric and Graphics Centric, which I perceive to be a motherboard supporting dual quad-core chips, with software utilizing the cores.

The system builder would essentially only need to design one system, and based on the customer's needs, they would install one or two processors, any external memory requested, as well as the usual peripherals, such as DVD drives, hard drives, etc.

Just food for thought.
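
A toy sketch to make the reconfigurable-core scenario above concrete; the role names and the workload mixes are hypothetical.

# Identical cores take on whichever role the current workload mix calls for.

def assign_cores(cores, workload):
    """Hand out roles to identical cores in proportion to the workload mix."""
    roles = []
    for role, share in workload.items():
        roles += [role] * round(share * cores)
    return (roles + ["cpu"] * cores)[:cores]  # pad with general-purpose cores

print(assign_cores(4, {"cpu": 0.5, "gpu": 0.25, "physics": 0.25}))  # e.g. a game
print(assign_cores(4, {"cpu": 1.0}))                                # e.g. office work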




RE: An Idea
By david99 on 7/27/2006 2:08:04 PM , Rating: 3
quote:
So what if, let us say in 5-10 years, AMD builds a quad-core processor where all four cores are physically identical? Each core, though, could function as either a general-purpose CPU, a video processor, or a physics processor by utilizing "Field Programmable Gate Array" technology similar to what is described at this link: http://www.progeniq.com/tech.html

Far better FPGAs exist today, like the Kilocore™ with its 1,024 processing elements.

It's interesting that AMD has been licensing several IBM patents; I wonder if an evolution of these Kilocore chips might serve us well on future motherboards.

http://www.technewsworld.com/story/49772.html
" IBM, Rapport Unveil Energy-Wise Power Chip
IBM (NYSE: IBM) and Power.org member Rapport unveiled a new energy-efficient processor dubbed Kilocore that features more than 1,000 processing elements around a Power chip architecture.

Kilocore -- with parallel processing similar to that of the new Cell processor, Sun's Niagara, and Azul's multi-core chip -- has the ability to join hundreds or even thousands of parallel processing elements on a single chip that saves energy by cutting the distance for computing signals."


http://www.rapportincorporated.com/kilocore/kiloco...
"Kilocore™ Overview
Conventional technology is unable to meet the processing and power consumption requirements of many of today’s complex products such as audio, video and data processing “securely” on mobile devices. In addition, evolving standards and next generation functional requirements often move faster than today’s development design cycle creating product obsolescence before shipment.

Kilocore™ products can be upgraded in the field via software, enabling delivery of next generation features while creating potential downstream revenue models.

Dynamically Reconfigurable Computing

Kilocore™ processors use a powerful new parallel computing architecture that dramatically lowers power consumption for equivalent computational performance. Kilocore™ technology utilizes arrays of dynamically reconfigurable parallel processing elements optimized for performance. Each processing element in a Kilocore™ stripe can be dynamically reconfigured in one clock cycle. Kilocore™ tools support both dynamic reconfiguration of processing elements and formatting of all types of data. The unique Kilocore™ architecture provides the following benefits:

Flexibility: functions can be dynamically changed in a single clock cycle.
Performance: unprecedented performance via simultaneous computing of multiple functions.
Scalability: hundreds to thousands of processing elements on a single chip.
Efficiency: Extremely low power consumption.
Rapport's KC256 Chip utilizing Kilocore™ Architecture"





Socketed GPUs?
By AdamsJabbar on 7/24/2006 10:37:42 AM , Rating: 3
It wasn't that long ago that people were talking about socketed GPU solutions on motherboards. This socket would be dedicated to video, with its own video memory (replaceable and upgradeable, similar to system memory). With AMD providing the option of Socket 940 co-processors that sit directly on the HyperTransport bus (http://www.xtremedatainc.com/Products.html), this could well be the next step. This way, you don't have to re-buy the PCB, worry about expansion slots, etc. It is right there on the board already. However, this wouldn't allow for SLI setups unless they put multiple sockets onboard.




RE: Socketed GPUs?
By Tyler 86 on 7/27/2006 12:00:54 AM , Rating: 2
Yup.


Hmmm....
By IamKindaHungry on 7/25/2006 12:08:47 AM , Rating: 5
Somewhere in Santa Clara, at this moment, there is a meeting going on. I'm sure it's going something like this...


Executive #1: Did you hear AMD bought Ati today?

Executive #2: Yeah, I wonder how that is going to affect our business strategy in the coming years?

Executive #1: In the short term I don't think we have too much to worry about, but you might want to tell the boys in legal that by 2010 they are going to have more work than they ever dreamed.

Executive #2: What are you talking about?

Executive #1: Well, not only do we have to give huge discounts so that OEMs don't use their processors, but now we have to make sure nobody uses their chipsets as well!

<both laugh>

Executive #2: Considering we've been doing that for almost 10 years and they just figured it out, I'll let the boys in legal know they have nothing to worry about until 2015 at the earliest.




By azmodean on 7/24/2006 10:44:47 AM , Rating: 2
There would be a lot of trade-offs involved in moving a GPU on-die with the CPU:
PROS: shared cache, no latency between the processors, pooled system memory, easier integration for non-graphics tasks
CONS: Personally I think pooling system and graphics memory sounds like a bad idea; larger die = lower yields = higher cost; power supply and heat dissipation get cranked up about 10 notches in difficulty, as does bus management

But the biggest pro I can think of is that it means ATI gets to use AMD's more advanced fabrication tech. The last I heard, NVIDIA and ATI weren't even looking into moving from the 90nm fabrication process, but AMD sure as hell is.




By Lexic on 7/24/2006 10:48:09 AM , Rating: 2
Unfortunately, listening to the conference call, the fabrication situation isn't going to change all that quickly. ATi will remain fabless for now.

They are moving past 90nm as it is, though - to 80nm, not 65nm.


CPU+GPU
By Targon on 7/24/2006 11:41:47 AM , Rating: 2
There may be things that can be added to the CPU that are "standard functions", but still keep a separate video adapter for new additions. Think of it like a co-processor for the new stuff, while the old stuff which doesn't need to go faster could be on the CPU.

For example, TV tuner type stuff could be accelerated on the CPU die itself, with the render/decode phase on a separate card or chip. This would allow for an easy upgrade of certain components without the need to replace the entire CPU.

AMD already has the specs for a HyperTransport slot called HTX, as I recall, so it's also possible we will see graphics moving to this slot for a better interconnect between card, memory, and CPU. PCI Express may seem slow in comparison, and with a HyperTransport link between two HTX slots, you COULD do the equivalent of SLI/Crossfire without the need for a cable or special card.

When in doubt, consider that those who design GPUs and CPUs think differently from other people, and they may see better ways of doing things than you or I could. I see potential for both good and bad to come from this merger, but all things considered, I suspect more good will come to the GPU, CPU, and chipset divisions alike.




RE: CPU+GPU
By Tyler 86 on 7/27/2006 12:10:19 AM , Rating: 2
I could see a CPU with basic video features, and a direct trace to analog & digital video-out adapters...

Then off to the side, you have a 2nd CPU socket, filled by a GPU, with some graphics memory on the same package (to get a mental image, look up some of ATi's performance laptop integrated graphics, not the miniPCI/PCI-E/MXM or whatever cards)...
With 8 GDDR4 RAM chips stacked in 2s around the center core, maybe 512MB to 1GB -- projecting a ways into the future, mind you -- with some heavy cooling...
Maybe a CPU-socket GPU package could be widened to extend above and around the socket as well - there is certainly some breathing room in current socket designs...
.. but I can speculate all day on that and see no fruitful results...

I think it should be noted, though, that there is such a thing as an HTX slot, like Targon mentioned, that looks similar to a PCIe slot...


Good for nVidia
By BaronMatrix on 7/24/2006 12:50:05 PM , Rating: 2
This is actually good for nVidia in the sense that they won't have to pay for the initial research in doing a chipset and socketed GPU.

They can license the work AMD does and do their own socketed chip and their own chipset.

I was worried about what AMD would do with nVidia, but it seems like they are saving lots of companies' research costs for Torrenza devices. It seems like Torrenza is a major reason for this move.

Now they can create the reference mobos for Torrenza AND 4x4 without using a third party. Of course, this will not happen until Q1 '07 at the earliest, since they cannot combine business practices until the merger passes FTC scrutiny.


I would say that 4x4 will now have an ATi chipset when released, in addition to the nForce 5xx that was supposedly used for the initial benches.




RE: Good for nVidia
By pnyffeler on 7/24/2006 6:33:09 PM , Rating: 2
That's assuming that AMDTI is willing to sell them a license....


This has bad news written all over it
By biohazard420420 on 7/24/2006 1:01:29 PM , Rating: 2
OK, this I think is going to be bad. If they merge the GPU and CPU onto one chip (die), then if you want faster graphics you have to buy a whole new CPU/GPU combination instead of keeping the CPU and trading up for a new GPU. It's only going to lock you into ATI GPUs on AMD boards. So, no choice in graphics if you want AMD chips.




By Master Kenobi (blog) on 7/24/2006 1:46:41 PM , Rating: 2
Well, it's likely to divide the discrete industry into two camps: the Intel/nVidia camp and the AMD/ATI camp. Right now you can pick any 2 from the 4 and be on your way, but if you want bang for your buck, I suspect it's going to be one or the other. I have a feeling Intel's new chipset in Q1 '07 is likely to have integrated SLI support, and lack the Crossfire support that the 975X has.


No More Intel and ATI?
By lealwai on 7/24/2006 4:08:49 PM , Rating: 2
What I wanna know is, will ATI still be producing cards that are compatible w/ Intel? If not, wouldn't ATI be losing a lot of possible market share? The deal would make no sense to me. If they still do produce cards for both Intel and AMD, then I really couldn't care less if AMD owned them.




RE: No More Intel and ATI?
By Griswold on 7/25/2006 4:52:45 AM , Rating: 2
Why do you think a video card suddenly stops working on a certain mobo just because of this merger? We're not talking about CF here...

ATI's (and nVidia's) cards are compatible with PCIe (or AGP if you want, just for argument's sake) - and that is all there is to know about compatibility.


Uh oh...
By JWalk on 7/25/2006 2:07:48 AM , Rating: 2
I have this really bad feeling about this deal. I see it as good for AMD. But, from the perspective of the high-end GPU market, I see this as the beginning of the end for ATI. It will take a couple of years, but I see them eventually becoming the "chipset/integrated graphics division" for AMD, and that is all. I see them slowly turning away from the super-competitive high-end discrete graphics market. If this happens, I sure hope a new company steps up to challenge Nvidia, just to keep the market competitive. What I don't see happening is some totally new integrated super graphics that will push the discrete solutions out of the market. It doesn't make sense on many levels. I guess we will see in time.




RE: Uh oh...
By casket on 7/25/2006 9:01:30 AM , Rating: 2
Assuming no integration, ATI gives AMD... memory knowledge (GDDR3), motherboard knowledge, a large team of skilled engineers, and a management team capable of 6-month product cycles.

**************
Integration Possibilities:
Because of AMD's move to an integrated memory controller, and discussion of L3 cache and a co-processor...
Why not have ATI make the motherboard...
Stick 512 MB of GDDR3 on the motherboard (shared by CPU and GPU) (doesn't the Xbox 360 do something like this with ATI graphics?)
Put the ATI chip in the co-processor spot
Add L3 cache to be shared by both the CPU and the GPU.

You can still upgrade graphics by buying a new ATI chip (if it is a co-processor), but upgrading to faster memory would still present problems.

L3 cache for the graphics card would seem to help with graphics speed, though.


WOW
By phymon on 7/24/2006 10:07:51 AM , Rating: 2
This kind of processor will help AMD perform better in the future... low latency, GPU/CPU/chipset on the same die - this thing is going to kick. I hope that AMD will make good use of this technology.




Inevitable
By Lexic on 7/24/2006 10:28:07 AM , Rating: 2
I see this as AMD preparing for the inevitable: single processors handling traditional CPU & GPU responsibilities in the long run. GPUs are becoming more and more general purpose, and CPUs are in ways becoming more GPU-like (think Cell/Niagara, and have a look at Intel's roadmap). There is increasing desire to use GPUs more generally too. Eventually there will come a point where the benefits of specialisation are outweighed by the need to flexibly use processing power, and the specialist hardware that remains of significant benefit can be put on the same die as the main processing core anyway.

CPU/GPU convergence will happen (and credibly for high-end users too). Intel is working toward it, AMD knew it, and so did ATi (who no doubt think it's better to start on that road now than to be marginalised later as a discrete chip provider).




CPU+GPU
By Shadowed on 7/24/2006 12:07:23 PM , Rating: 2
I was under the impression that the 4x4 platform's goal would be to allow a 4-core CPU in one socket and a 4-core GPU in the other socket?




How about TBDR...
By Fox5 on 7/24/2006 3:00:04 PM , Rating: 2
If ATI and AMD make a tile-based deferred renderer, its memory requirements would go down enough that a cache may be large enough for it, and its bandwidth requirements would go down enough that system memory (say, dual-channel DDR3 delivering 42GB/s of bandwidth in 2008) would be enough for it.
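
A rough sketch of why tiling cuts external framebuffer traffic, which is the heart of the bandwidth argument; the overdraw factor, resolution and frame rate below are assumed for illustration, and texture traffic is ignored on both sides.

# Framebuffer traffic: immediate-mode renderer vs. tile-based deferred renderer.

width, height, bytes_px = 1600, 1200, 4
overdraw = 3   # assumed average times each pixel is shaded/written per frame
fps = 60

# Immediate-mode: every overdrawn color + Z write goes to external memory.
imr_bytes  = width * height * bytes_px * 2 * overdraw * fps

# Tile-based deferred: a tile's color/Z stay on chip; only final pixels go out.
tbdr_bytes = width * height * bytes_px * fps

print(f"IMR  framebuffer traffic: {imr_bytes / 1e9:.1f} GB/s")
print(f"TBDR framebuffer traffic: {tbdr_bytes / 1e9:.1f} GB/s")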




Timna
By code65536 on 7/24/2006 4:46:19 PM , Rating: 2
So does anyone remember Intel's last attempt at this sort of grand integration (Timna)?
http://www.pcworld.com/news/article/0,aid,18726,00...




either way...
By crazydrummer4562 on 7/29/2006 8:41:07 PM , Rating: 2
I think both companies will benefit from the additional engineers.




Scarey
By shamgar03 on 7/24/2006 9:55:46 AM , Rating: 1
Now if AMD fails (I am not saying it will), we will lose all competition in GPU and CPU development... that would suck. Here's to AMD and Intel each having exactly 50% market share in processors.




So um....
By Regs on 7/24/2006 12:01:25 PM , Rating: 1
Xbox 360x360?




"I want people to see my movies in the best formats possible. For [Paramount] to deny people who have Blu-ray sucks!" -- Movie Director Michael Bay

Related Articles













botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki