
ATI reference design for Radeon X1950
Ready or not, here comes GDDR4

This week ATI sent an advisory to its OEM partners announcing the details of the new Radeon X1950 and X1900 graphics cards.  Both of these new cards are based on the same R580 core, but with some fundamental differences.

R580, the 48 pixel-shader processor version of the R520 (Radeon X1800), was announced this past January. R580 features a robust memory controller capable of utilizing several different types of memory, including GDDR4, which was not even available when the Radeon X1900 was first announced.  Since then, Hynix and Samsung have both jumped on the GDDR4 train, with revenue shipments beginning several weeks ago.  The new GDDR4 variants of R580-based Radeons are now called Radeon X1950.  Radeon X1950 will retain all of the features of the Radeon X1900, with the added benefits of a new cooler, GDDR4 memory and different clock frequencies.

Radeon X1950 at launch will come in two flavors: a high clock "XTX" version, and a CrossFire version.  Both cards feature 512MB GDDR4, and the only major difference between the two is that the CrossFire X1950 houses the composite engine and input interfaces for CrossFire. Just yesterday, ATI issued an advisory to its partners claiming "Clock frequencies for RADEON X1950 family products are pending and will be provided at a later date."  However, in March of this year ATI released a new policy for AIB partners to overclock X1000 series cores with some discretion. While we can already confirm some partners are planning 650MHz core versions, there is still a distinct possibility that higher clocked cards are also in the works. Memory clock frequencies have not been announced either, though Samsung announced its GDDR4 is already capable of 3.2GHz in 8x512Mbit configurations.
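Since final clocks are unannounced, the bandwidth impact of GDDR4 can only be estimated from clock and bus width. A back-of-envelope sketch follows; the 256-bit bus is assumed carried over from the X1900, and the 2.0GHz GDDR4 figure is purely illustrative, not an announced ATI spec:

```python
def gddr_bandwidth_gbs(effective_clock_mhz, bus_width_bits):
    """Theoretical peak bandwidth in GB/s: effective (DDR) clock x bytes per transfer."""
    bytes_per_transfer = bus_width_bits / 8
    return effective_clock_mhz * 1e6 * bytes_per_transfer / 1e9

# Radeon X1900XT: 1.45GHz effective GDDR3 on a 256-bit bus
print(gddr_bandwidth_gbs(1450, 256))   # ~46.4 GB/s

# Hypothetical GDDR4 X1950 at 2.0GHz effective on the same bus
print(gddr_bandwidth_gbs(2000, 256))   # ~64.0 GB/s
```

Even well below Samsung's quoted 3.2GHz ceiling, the theoretical bandwidth gain over GDDR3 is substantial.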

The new Radeon X1900 is a low-cost version of the existing Radeon X1900 that uses only 256MB of GDDR3, bringing the card down to the $300 price point.  The Radeon X1900XT 256MB will use the same clock frequencies as other Radeon X1900XT cards: 625MHz core and 1.45GHz memory.

ATI's advisory documentation claims the Radeon X1950XTX will begin sample availability on August 7, with the CrossFire sampling beginning exactly one week later. Sampling of the Radeon X1900XT 256MB will begin immediately.

Radeon X1900 and X1950 will be replaced by another ASIC core, dubbed R600.  R600 is expected to be 80nm with new design features above and beyond the R520 and R580 series.

Is a question allowed?
By lemonadesoda on 7/21/2006 7:39:51 PM , Rating: 2
I don't know that I should be asking questions in a "comment on the news item" forum, but I'd be grateful for a quick answer:

Why are GPU cores clocked so much slower than CPUs? We have cool and efficient CPUs running at 1500MHz, which is more than double the top-end GPUs at 600-650MHz. Is this a HEAT issue due to multi-parallelization in these cores (and hence their large current draw and high temperatures), or is there some other fundamental problem with the silicon?

RE: Is a question allowed?
By NextGenGamer2005 on 7/21/2006 8:04:10 PM , Rating: 4
Two reasons: 1) GPUs are much more complicated (ATI's R580 has 384 million transistors, compared to half that for most CPUs). 2) When a GPU is being used, almost all of the transistors are being used. A 16-pipeline card has all 16 pipelines working constantly. This is in stark contrast to a CPU, where really only 25-50% is actually "on" and working at any one time.

CPUs are also typically made using more advanced techniques. For instance, Intel has been using 65nm since January, whereas both NVIDIA and ATI are still using 90nm, with 80nm right around the corner. By the time they switch to 65nm in 1H 2007, Intel will be sampling 45nm products for a 2H 2007 release.

BTW, if AMD does buy ATI, then ATI will have a huge advantage over NVIDIA in the manufacturing department, since I would assume that ATI would use AMD's fabs (which switch to more advanced processes faster than TSMC or UMC).

RE: Is a question allowed?
By Tyler 86 on 7/21/2006 8:10:47 PM , Rating: 2

Like CPUs, GPUs are 10-50% on during idling.
Like CPUs, GPUs are 100% on under load.

Construction, size, complexity, and activity are the limiting factors in clock speed.

Smaller manufacturing processes do not necessarily give more clock speed headroom.
Smaller manufacturing processes make the chips cheaper to manufacture.

By slashbinslashbash on 7/23/2006 2:20:47 AM , Rating: 3
I don't have exact numbers, but out of Conroe's 291M transistors, at least 2/3 of them are cache. Based on fuzzy memories of transistor counts for previous processors, the actual CPU cores (cache excluded) of Conroe should take up no more than 50M-75M transistors. Compare that to GPUs, which are over 300M transistors with relatively little of that being cache. That alone will tell you how much more complicated a GPU is compared to a CPU.
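The 2/3 estimate roughly checks out if you assume the standard 6-transistor SRAM cell for Conroe's 4MB L2 cache. This is a simplification that ignores tag arrays, L1 caches, and other overhead:

```python
# Back-of-envelope: transistors in Conroe's 4MB L2, assuming 6T SRAM cells
cache_bits = 4 * 1024 * 1024 * 8              # 4MB of data bits
cache_transistors = cache_bits * 6            # 6 transistors per SRAM cell
total_transistors = 291_000_000

print(cache_transistors / 1e6)                   # ~201.3 million
print(cache_transistors / total_transistors)     # ~0.69, i.e. about 2/3
```

That leaves something under 90M transistors for everything else, in the same ballpark as the commenter's 50M-75M core estimate.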

RE: Is a question allowed?
By Araemo on 7/26/2006 11:58:43 AM , Rating: 2
Another reason(related to # of transistors):
A GPU generally does more mathematical work per cycle than a CPU does. A CPU generally has 2-6 ALUs that perform one or two additions, multiplications, etc. per clock: a maximum of 12 ops per clock, and relatively simple ops at that.

A GPU has between 8 and 16 pipelines (do any have 32 yet? I think some NVIDIA ones have 24), each of which has multiple functional units, with each unit performing a different operation every single clock. Timing all that to run reliably takes more physical time, and the clock speed is limited by the time it takes every functional unit in the pipeline to do one cycle of work. CPUs have fewer functional units to tune, so it's easier to get the timing down low, especially in a reliable manner.
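The throughput gap described above can be put in rough numbers. The per-unit op counts here are illustrative assumptions for a generic mid-2006 part, not measured figures for any specific chip:

```python
def peak_ops_per_sec(units, ops_per_unit_per_clock, clock_mhz):
    """Peak throughput: functional units x ops per unit per clock x clock rate."""
    return units * ops_per_unit_per_clock * clock_mhz * 1e6

# Hypothetical CPU: 4 ALUs, 1 op each per clock, 3000MHz
cpu = peak_ops_per_sec(4, 1, 3000)
# Hypothetical GPU: 16 pipelines, ~8 ops each per clock, 650MHz
gpu = peak_ops_per_sec(16, 8, 650)

print(cpu / 1e9)   # 12.0 billion ops/s
print(gpu / 1e9)   # 83.2 billion ops/s, despite a ~5x lower clock
```

The GPU wins on width, not frequency, which is exactly why its clock can afford to be lower.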

RE: Is a question allowed?
By Araemo on 7/26/2006 11:59:50 AM , Rating: 2
Oh, and another thing: since the CPU is using simpler math, it takes more cycles to do the same complexity of work. That is why it's faster to run a game on your 500MHz GPU than on your 3000MHz CPU; the GPU is tuned for one specific kind of work, so it doesn't need as high a clock speed.

RE: Is a question allowed?
By Tyler 86 on 7/21/2006 8:04:12 PM , Rating: 2
You're in luck, I just happen to have a spare minute...

GPU cores are way more complex than CPUs.
They eat more power, produce more heat, do more work, and break more easily.
Recent cores have 'many clocks' (as termed by nVidia) to cope with the increasing power draw and heat.

I don't know if 'multi-parallelization' is a real... term?
Parallelization is a factor they're taking into consideration with their lower clock speeds.

RE: Is a question allowed?
By Tyler 86 on 7/21/2006 8:13:30 PM , Rating: 2
There are also limitations on the silicon, metal substrate, and structures (like 'gates').

RE: Is a question allowed?
By Sharky974 on 7/21/2006 8:19:12 PM , Rating: 2
I don't think any of these answers have cut to the core of the problem. For that we need a Beyond3D member.

I believe one factor I remember being mentioned is that CPUs are much more custom and "hand designed" for higher speed, whereas GPUs have parts that are more machine designed.

In other words, you've got some incredibly complex internal pipeline; you can spend tons of man-hours going in and hand-optimizing it to get higher clock speeds, or you can automate the process somewhat, have a program lay it out, and save tons of time while being less efficient, which is what the GPU guys do.

Process can't have much to do with it. CPUs at 90nm were still 3.2GHz and up.

RE: Is a question allowed?
By Sharky974 on 7/21/2006 8:28:57 PM , Rating: 2
If nobody here posts the correct answer, go to the Beyond3D forums, register, and post your question.

You will get the correct answer. There are guys there who know AMAZING stuff. There are guys there who even specialize in just the manufacturing side, and can go on and on about gates, masks, silicon layers, etc.

I'm sure it's been answered before over there, but I can't figure out an effective way to search for the question.

RE: Is a question allowed?
By smitty3268 on 7/21/2006 11:06:11 PM , Rating: 2
1. The operations that GPUs perform are much more complex than the simple arithmetic that a CPU mostly does, and they are also much more parallelized. This means the designs are much more complex, which makes it much more difficult to ramp up clock speed. It isn't a linear relationship; a little more complexity makes clock ramping much more difficult.

2. Hand optimization is used somewhat in CPU designs. This is possible because their designs are simpler and because those designs can last for years. GPUs are released on a much faster schedule, which doesn't provide the time for as many custom optimizations.

3. GPUs are usually a bit behind on the manufacturing process as well. I'm not sure if this is just an issue of money or if it is to keep yields up, since their designs are more complex than CPU designs.

RE: Is a question allowed?
By Tyler 86 on 7/24/2006 12:31:20 AM , Rating: 2
1; We covered that here.

2; Baloney. Hand optimizations are used where hand optimizations are needed: GPU or CPU, microchip or PCB, hardware or software.

3; The manufacturing process is behind due only to a tech gap between Intel, AMD, and TSMC (which nVidia and ATi use to produce their chips).

Sometimes smaller processes will have lower yields than their larger processes, or they will create hot spots, or have other electrical issues. Sometimes the chip's architecture has to be redesigned to cope.

The larger processes are more refined and typically have stable (still not exactly high) yields, which is better for the bottom line.

TSMC tiers its manufacturing processes' pricing by quality and/or size. Sometimes it's cheaper for ATi or nVidia to stay with a larger manufacturing process.

Intel is simply its own manufacturer, unlike the two major graphics players.
AMD is also its own manufacturer, yet it's lagging behind Intel in the manufacturing process area.

RE: Is a question allowed?
By Jkm3141 on 7/26/2006 1:21:19 AM , Rating: 2
Yes, it's The Inquirer, but it has good info about how a GPU is more custom-made logic than the standard logic found on a CPU. From what I got from the article, custom logic is much smarter and more efficient, yet can't run quite as fast.

RE: Is a question allowed?
By Jkm3141 on 7/26/2006 1:23:17 AM , Rating: 2
My memory served me badly; I meant that the custom logic on the CPU is more refined and can run faster than the GPU's. But read the article anyway; it might be easier than Beyond3D, though Beyond3D is the most accurate.

RE: Is a question allowed?
By Targon on 7/31/2006 10:46:16 AM , Rating: 2
If you look at the design of a GPU, you have the number of pixel pipelines, and the pixel and vertex shaders. As a result, the newer GPUs suck up more and more power to provide the greater number of pixel pipes, and shader units. This power increases the heat build-up, and thus limits how high the clocks can go.

Now, going to a new and improved fabrication process takes time. CPUs don't advance in overall design as quickly as GPUs do, so the time it takes to transition from 90nm to 65nm isn't as critical (except for AMD at this point in time). But to boost the clock rate of a GPU while also increasing the number of pipelines and shaders requires an improvement in the process technology.

Keep in mind that since you can get a huge boost to graphics performance just by adding pixel pipelines without doing anything else, you don't really NEED to boost the clockspeed of a GPU to improve performance. With a CPU, you NEED to boost the clockrate to see a performance increase unless you do a major design improvement.

ATI and NVIDIA could come out with a card in six months that has four times the performance of current graphics chips just by adding a ton of pixel pipelines to current designs. This might require a much larger chip that requires more power, but they could do it quickly. CPU manufacturers couldn't do that.
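The scale-by-width argument above can be sketched numerically. Theoretical pixel fill rate is roughly pipelines times clock, so quadrupling the pipelines at a fixed clock quadruples peak throughput; the pipeline counts and clock here are illustrative, and real-world gains would be smaller once memory bandwidth becomes the bottleneck:

```python
def fill_rate_mpixels(pipelines, clock_mhz):
    """Theoretical peak fill rate, assuming one pixel per pipeline per clock."""
    return pipelines * clock_mhz

current = fill_rate_mpixels(16, 650)   # 10400 Mpixels/s
wider   = fill_rate_mpixels(64, 650)   # 41600 Mpixels/s, same clock

print(wider / current)  # 4.0x with no clock speed increase at all
```

A CPU has no equivalent lever: with a fixed core design, more serial performance means more clock.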

Related Articles
Samsung Shipping Production GDDR4
July 5, 2006, 10:00 AM
Hynix to Refocus on Graphics Memory
May 30, 2006, 2:46 PM
ATI's New Stance On Overclocking
March 30, 2006, 12:15 PM
