58 comment(s) - last by Marvin L.. on Dec 11 at 8:00 PM

ATI reference design for Radeon X1950
Ready or not, here comes GDDR4

This week ATI sent an advisory out to its OEM partners announcing the details of the new Radeon X1950 and X1900 graphics cards.  Both of these new cards are based on the same R580 core, but with some fundamental differences.

R580, the 48 pixel-shader processor version of the R520 (Radeon X1800), was announced this past January. R580 features a robust memory controller capable of utilizing several different types of memory, including GDDR4, which was not even available when the Radeon X1900 was first announced.  Since then Hynix and Samsung have both jumped on the GDDR4 train, with revenue shipments beginning several weeks ago.  The new GDDR4 variants of R580-based Radeons are now called Radeon X1950.  Radeon X1950 will retain all of the features of the Radeon X1900; its only additions are a new cooler, GDDR4 memory and different clock frequencies.

Radeon X1950 at launch will come in two flavors: a high clock "XTX" version, and a CrossFire version.  Both cards feature 512MB GDDR4, and the only major difference between the two is that the CrossFire X1950 houses the composite engine and input interfaces for CrossFire. Just yesterday, ATI issued an advisory to its partners claiming "Clock frequencies for RADEON X1950 family products are pending and will be provided at a later date."  However, in March of this year ATI released a new policy for AIB partners to overclock X1000 series cores with some discretion. While we can already confirm some partners are planning 650MHz core versions, there is still a distinct possibility that higher clocked cards are also in the works. Memory clock frequencies have not been announced either, though Samsung announced its GDDR4 is already capable of 3.2GHz in 8x512Mbit configurations.
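Samsung's quoted 3.2GHz figure lines up with just over 100GB/s of peak memory bandwidth. A quick back-of-the-envelope check, assuming the usual configuration of eight 32-bit chips on a 256-bit bus:

```python
# Peak memory bandwidth for a GDDR4 Radeon X1950 (assumed configuration):
# 8 chips x 512Mbit = 512MB on a 256-bit bus (8 chips x 32 bits each),
# with GDDR4 running at an effective 3.2Gbps per pin.
bus_width_bits = 256        # 8 chips x 32-bit interface each
rate_gbps_per_pin = 3.2     # effective data rate (DDR: 2x a 1.6GHz clock)

bandwidth_gb_s = bus_width_bits * rate_gbps_per_pin / 8  # bits -> bytes
print(f"{bandwidth_gb_s:.1f} GB/s")  # -> 102.4 GB/s
```

For comparison, the Radeon X1900XT's 1.45GHz GDDR3 on the same 256-bit bus works out to 46.4GB/s.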

The new Radeon X1900 is a low-cost version of the existing Radeon X1900 that only uses 256MB of GDDR3, enabling the card access to the $300 price point.  The Radeon X1900XT 256MB will use the same clock frequencies as other Radeon X1900XT cards: 625MHz core and 1.45GHz memory.

ATI's advisory documentation claims the Radeon X1950XTX will begin sample availability on August 7, with the CrossFire sampling beginning exactly one week later. Sampling of the Radeon X1900XT 256MB will begin immediately.

Radeon X1900 and X1950 will be replaced by another ASIC core, dubbed R600.  R600 is expected to be 80nm with new design features above and beyond the R520 and R580 series.


Is a question allowed?
By lemonadesoda on 7/21/2006 7:39:51 PM , Rating: 2
I don't know that I should be asking questions in a "comment on the news item" forum, but I'd be grateful for a quick answer:

Why are GPU cores clocked so much slower than CPUs? We have cool and efficient CPUs running at 1500MHz, which is more than double the top-end GPUs at 600-650MHz. Is this a HEAT issue due to multi-parallelization in these cores (and hence their large current draw and high temperatures), or is there some other fundamental problem with the silicon?

RE: Is a question allowed?
By NextGenGamer2005 on 7/21/2006 8:04:10 PM , Rating: 4
Two reasons: 1) GPUs are much more complicated (ATI's R580 has 384 million transistors, compared to half that for most CPUs). 2) When a GPU is being used, almost all of the transistors are being used. A 16-pipeline card has all 16 pipelines working constantly. This is in stark contrast to a CPU, where really only 25-50% is actually "on" and working at any one time.

CPUs are also typically made using more advanced techniques. For instance, Intel has been using 65-nm since January, whereas both NVIDIA and ATI are still using 90-nm, with 80-nm right around the corner. By the time they switch to 65-nm in 1H 2007, Intel will be sampling 45-nm products for a 2H 2007 release.

BTW, if AMD does buy ATI, then ATI will have a huge advantage over NVIDIA in the manufacturing department, since I would assume that ATI would use AMD's fabs (which switch to more advanced processes faster than TSMC or UMC).

RE: Is a question allowed?
By Tyler 86 on 7/21/2006 8:10:47 PM , Rating: 2

Like CPUs, GPUs are 10-50% on during idling.
Like CPUs, GPUs are 100% on under load.

Construction, size, complexity, and activity are the limiting factors in clock speed.

Smaller manufacturing processes do not necessarily give more clock speed headroom.
Smaller manufacturing processes make the chips cheaper to manufacture.

By slashbinslashbash on 7/23/2006 2:20:47 AM , Rating: 3
I don't have exact numbers, but out of Conroe's 291M transistors at least 2/3rds of them are cache... based on fuzzy memories of transistor counts for previous processors, the actual CPU cores (cache excluded) of Conroe should take up no more than 50M-75M transistors. Compare that to GPUs, which are over 300M transistors with relatively little of that being cache. That alone will tell you how much more complicated a GPU is compared to a CPU.
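The 2/3rds figure checks out roughly if you assume Conroe's 4MB shared L2 is built from classic 6-transistor SRAM cells. This is a sketch, not an exact die accounting (tag, ECC, and decode overhead are ignored):

```python
# Rough estimate of how much of Conroe's transistor budget is L2 cache.
# Assumes a 4MB L2 of 6-transistor SRAM cells; overhead ignored.
l2_bytes = 4 * 1024 * 1024
transistors_per_bit = 6          # classic 6T SRAM cell
total_transistors = 291e6        # published Conroe transistor count

cache_transistors = l2_bytes * 8 * transistors_per_bit
share = cache_transistors / total_transistors
print(f"cache: ~{cache_transistors / 1e6:.0f}M transistors ({share:.0%} of the total)")
# -> cache: ~201M transistors (69% of the total)
```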

RE: Is a question allowed?
By Araemo on 7/26/2006 11:58:43 AM , Rating: 2
Another reason(related to # of transistors):
A GPU generally does more mathematical work per cycle than a CPU does. A CPU generally has 2-6 ALUs that perform one or two additions, multiplications, etc. per clock: a maximum of 12 ops per clock, and they're relatively simple ops.

A GPU has between 8 and 16 (do any have 32 yet? I think some NVIDIA ones have 24?) pipelines, each of which has multiple functional units, each unit performing a different operation, every single clock. Timing all that to run reliably takes more physical time, and the clock speed is limited by the time it takes every functional unit in the pipeline to do one cycle of work. CPUs have fewer functional units to tune, so it's easier to get the timing down low, especially in a reliable manner.
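The wide-and-slow versus narrow-and-fast tradeoff can be sketched with rough numbers. The unit counts below are illustrative assumptions, not measured figures for any particular chip:

```python
# Peak operations per second: a narrow, fast CPU vs. a wide, slow GPU.
# All counts here are illustrative assumptions, not real chip specs.
cpu_alus, cpu_ops_per_alu, cpu_clock_ghz = 6, 2, 3.0
gpu_pipelines, gpu_ops_per_pipeline, gpu_clock_ghz = 16, 8, 0.65

cpu_gops = cpu_alus * cpu_ops_per_alu * cpu_clock_ghz            # 36 Gops/s
gpu_gops = gpu_pipelines * gpu_ops_per_pipeline * gpu_clock_ghz  # 83.2 Gops/s
print(f"CPU: {cpu_gops} Gops/s, GPU: {gpu_gops} Gops/s")
```

Even at less than a quarter of the clock, the wider design comes out well ahead on raw throughput, which is why clock speed alone is a poor comparison.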

RE: Is a question allowed?
By Araemo on 7/26/2006 11:59:50 AM , Rating: 2
Oh, and another thing... since the CPU is using simpler math, it takes more cycles to do the same complexity of work. That is why it's faster to run a game on your 500MHz GPU than on your 3000MHz CPU: the GPU is tuned for one specific kind of work, so it doesn't need to have as high a clock speed.

RE: Is a question allowed?
By Tyler 86 on 7/21/2006 8:04:12 PM , Rating: 2
You're in luck, I just happen to have a spare minute...

GPU cores are way more complex than CPUs.
They eat more power, produce more heat, do more work, and break more easily.
Recent cores have 'many clocks' (as termed by NVIDIA) to cope with the increasing power draw and heat.

I don't know if 'multi-parallelization' is a real... term?
Parallelization is a factor they're taking into consideration with their lower clock speeds.

RE: Is a question allowed?
By Tyler 86 on 7/21/2006 8:13:30 PM , Rating: 2
There are also limitations on the silicon, metal substrate, and structures (like 'gates').

RE: Is a question allowed?
By Sharky974 on 7/21/2006 8:19:12 PM , Rating: 2
I don't think any of these answers have cut to the core of the problem. For that we need a Beyond3D member.

I believe one factor I remember being mentioned is that CPUs are much more custom and "hand designed" for higher speed, whereas GPUs have parts that are more machine designed.

That is, you've got some incredibly complex internal pipeline: you can spend tons of man-hours going in and hand-optimizing it to get higher clock speeds, or you can automate the process somewhat, have a program lay it out, and save tons of time while being less efficient, which is what the GPU guys do.

Process can't have much to do with it. CPUs at 90nm were still 3.2GHz and up.

RE: Is a question allowed?
By Sharky974 on 7/21/2006 8:28:57 PM , Rating: 2
If nobody here posts the correct answer

Go here:

Register, post your question.

You will get the correct answer. There are guys there who know AMAZING stuff. There are guys there who even specialize in just the manufacturing side, and can go on and on about gates, masks, silicon layers, etc.

I'm sure it's been answered before over there, but I can't figure out an effective way to search for the question.

RE: Is a question allowed?
By smitty3268 on 7/21/2006 11:06:11 PM , Rating: 2
1. The operations that GPUs perform are much more complex than the simple arithmetic that a CPU mostly does. They are also much more parallelized. This means the designs are much more complex, and that makes it much more difficult to ramp up clock speed. It isn't a linear relationship: a little bit more complicated makes things much more difficult.

2. Hand optimization is used some in CPU designs. This is possible because their designs are simpler, and because designs can last for years. GPUs are released on a much faster schedule which doesn't provide the time for as many custom optimizations.

3. GPUs are usually a bit behind on the manufacturing process as well. I'm not sure if this is just an issue of money or if this is to keep yields up since their designs are more complex than CPU designs.

RE: Is a question allowed?
By Tyler 86 on 7/24/2006 12:31:20 AM , Rating: 2
1; We covered that here.

2; Baloney. Hand optimizations are used where hand optimizations are needed. GPU or CPU, microchip or PCB, hardware or software.

3; The manufacturing process is behind due only to a tech gap between Intel, AMD, and TSMC (which NVIDIA & ATI use to produce their chips).

Sometimes smaller processes will have lower yields than their larger processes, or they will create hot spots, or have other electrical issues. Sometimes the chip's architecture has to be redesigned to cope.

The larger processes are more refined, and typically have stable (still not exactly high) yields, which is better for the bottom line.

TSMC tiers its manufacturing processes' pricing by their quality and/or size. Sometimes it's cheaper for ATI or NVIDIA to stay with a larger manufacturing process.

Intel is simply its own manufacturer, unlike the 2 major graphics players.
AMD is also its own manufacturer, yet they're lagging behind Intel in process technology.

RE: Is a question allowed?
By Jkm3141 on 7/26/2006 1:21:19 AM , Rating: 2
Yes, it's the Inquirer, but it has good info about how a GPU is more custom-made logic than the standard logic found on a CPU. From what I got from the article, custom logic is much smarter and more efficient but can't run quite as fast.

RE: Is a question allowed?
By Jkm3141 on 7/26/2006 1:23:17 AM , Rating: 2
My memory served me badly; I meant that the custom logic on the CPU is more refined and can run faster than the GPU's. But read the article anyway; it might be easier than Beyond3D, though Beyond3D is the most accurate.

RE: Is a question allowed?
By Targon on 7/31/2006 10:46:16 AM , Rating: 2
If you look at the design of a GPU, you have the number of pixel pipelines, and the pixel and vertex shaders. As a result, the newer GPUs suck up more and more power to provide the greater number of pixel pipes, and shader units. This power increases the heat build-up, and thus limits how high the clocks can go.

Now, going to a new and improved fabrication process takes time. CPUs don't advance in overall design as quickly as GPUs do, so the time it takes to transition from 90nm to 65nm isn't as critical (except for AMD at this point in time). But to boost the clock rate of a GPU while also increasing the number of pipelines and shaders requires an improvement in the process technology.

Keep in mind that since you can get a huge boost to graphics performance just by adding pixel pipelines without doing anything else, you don't really NEED to boost the clock speed of a GPU to improve performance. With a CPU, you NEED to boost the clock rate to see a performance increase unless you do a major design improvement.

ATI and NVIDIA could come out with a card in six months that has four times the performance of current graphics chips just by adding a ton of pixel pipelines to current designs. This might require a much larger chip that requires more power, but they could do it quickly. CPU manufacturers couldn't do that.
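The "just add pipelines" point above follows directly from the fill-rate arithmetic: peak pixel fill rate is simply pipelines times clock, so doubling the pipelines doubles the theoretical rate without touching the clock. Illustrative numbers:

```python
# Peak pixel fill rate scales linearly with pipeline count at a fixed clock.
clock_mhz = 650  # e.g. the 650MHz core some X1950 partners are planning

for pipelines in (16, 32, 64):
    fill_rate_gpix = pipelines * clock_mhz / 1000  # Mpixels/s -> Gpixels/s
    print(f"{pipelines} pipes @ {clock_mhz}MHz -> {fill_rate_gpix:.1f} Gpix/s")
```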

Less breakneck pace
By segagenesis on 7/21/2006 6:38:42 PM , Rating: 3
I might be alone in saying this, but I think it's good that graphics manufacturers are making more of existing chip designs this year rather than pushing the next generation so quickly.

RE: Less breakneck pace
By Sharky974 on 7/21/2006 7:30:56 PM , Rating: 2
It's about the same as always.

They make a new chip (7800 GTX, X1800)

They refine and overclock that same basic design six months later (7900 GTX, X1900)

They make a brand new chip six months after that (G80, R600).

They refine and overclock the new chip six months later (this is the phase once termed as the "spring refresh").

So, either brand new chip or refresh every six months. Brand new chips about every twelve months. Timing may differ somewhat.

It hasn't changed much, considering G80 is supposed to be fairly close. I guess the 7800 actually came out in June '05, and G80 will be in September I think, though it's too early to say, so that's more than 12 months; but on the other hand ATI's last refresh (X1900) was only 3-4 months after the X1800, so as I say, the timing can vary slightly. R600 won't be any earlier than October or November, so again you're looking at about a year since X1800.

In other words no, it's not slowing down, as far as I can tell.

RE: Less breakneck pace
By ElFenix on 7/21/2006 9:24:36 PM , Rating: 3
You forget that the 7800 was essentially a respin of the 6800. So now ATI is on the refresh of a refresh, which is where NVIDIA currently is.

RE: Less breakneck pace
By mongoosesRawesome on 7/21/2006 10:55:15 PM , Rating: 2
The 7800 a respin of the 6800? Are you crazy?

RE: Less breakneck pace
By Bull Dog on 7/22/2006 3:19:33 AM , Rating: 2
No, he is not crazy. The 7x00 cards were evolutionary rather than revolutionary compared to the 6x00 series cards.

RE: Less breakneck pace
By Sharky974 on 7/22/2006 3:43:41 AM , Rating: 4
I think that's stretching the definition of "evolutionary", though I sort of see where you're coming from.

They added 8 more pipes and, on a more basic level, made major ALU changes to improve the 7800 GTX (added MADD capability to one of the ALUs that does not have it in a 6800 pipe). Overall it was a pretty massive redesign, not exactly a "respin".

The 7900 GTX, on the other hand, was nothing more than a basic upclock and refinement of the 7800 on a new manufacturing process.

RE: Less breakneck pace
By Goty on 7/23/2006 10:27:31 PM , Rating: 2
The X1800 and X1900 were in development at the same time. The reason R580 came out so soon after R520 was because R520 had some big IP no-nos (to use a technical term).

RE: Less breakneck pace
By AndreasM on 7/21/2006 8:39:49 PM , Rating: 2
"I might be alone in saying this but I think its good that graphics manufacturers are making more of existing chip designs this year than pushing the next generation so quickly."

I disagree. The longer it takes for them to "push the next generation", the longer it takes for the current generation to drop in price.

RE: Less breakneck pace
By elscorpio666 on 8/1/2006 1:32:07 AM , Rating: 2
Graphics card manufacturers should rather consider making more of their cards silent.

Passive cooling is the code word for this.

Let's have a round of applause...
By Engine of End on 7/21/2006 9:47:59 PM , Rating: 2
For robust power requirements! I am getting rather tired of the ever-increasing power requirements of these cards. It's only going to get worse when the DirectX 10 cards come to town. Over 300W under load? No thanks.

ATI/NVIDIA should be following Intel's/AMD's example: focusing on lowering power requirements.

RE: Let's have a round of applause...
By irsyz on 7/21/2006 10:17:03 PM , Rating: 2
They need to have their own Prescott scenario before they hit that plateau in design change.

RE: Let's have a round of applause...
By Goty on 7/26/2006 5:38:03 PM , Rating: 2

And on a side note, both companies have already stated that they've committed themselves to reducing the thermal envelope of their chips after the first generation of DX10 cards.

By shabby on 7/21/2006 10:57:43 PM , Rating: 3
I could have sworn NVIDIA focused on efficiency with the 7900; it has fewer transistors than the previous gen, runs cooler, and uses less power.

RE: Let's have a round of applause...
By syne24 on 7/22/2006 1:38:00 AM , Rating: 3
Totally agreed 100%

ATI/NVIDIA need to start getting more efficient. That's one of the reasons I'm NOT going Quad-SLI. There is simply no need for a 1KW power supply for a computer to run daily, gaming desktop or not. That is a ridiculous amount of power. They even have the nerve to suggest a personal power supply just for the graphics cards. Bottom line: instead of chasing clock speed, they need to add some fine tuning. I'd like to see ATI/NVIDIA taking power consumption into serious consideration down the road. Double-stacking GPUs and dual SLI is NOT the solution.

RE: Let's have a round of applause...
By Jkm3141 on 7/26/2006 1:28:01 AM , Rating: 2
Who started dual-stacking GPUs?

It's better to have one smart GPU than 2 stupid ones, based on power consumption and in some cases performance. Granted, single GPUs will of course require lots of power in the future, but if you buy one of those beasts like the R600 you *GASP* might not need SLI to run decent frame rates. Hell, even with an R580 you don't need dual GPUs to do it. My friend with an X1900XT can run anything he wants at any resolution he can, with as much AA as he wants (max res is 1600x1200). This directly contrasts an NVIDIA marketing slide I saw saying you HAVE to have SLI to run a game at 1280x1024 with 4xAA, or even some at 1024x768 with 4xAA. I am glad they didn't say you need Quad-SLI for 1600x1200 with 4xAA. The picture I'm talking about I uploaded to ImageShack here:

By Mojo the Monkey on 8/3/2006 4:40:11 PM , Rating: 2
That's marketing, not reality.

By Sharky974 on 7/21/2006 7:35:49 PM , Rating: 1
According to the INQ a few months ago this was supposed to be the new performance leader and clocked at 700 mhz+ core, possibly 750.

It's disappointing that we now hear stock is 650MHz, same as before. Though we may get some vendor overclocks, those haven't worked very well for ATI in the past.

Overall it's disappointing.

But the 256MB X1900XT makes tons of sense. ATI has been really stupid and trying hard to lose in a lot of ways lately, such as purposefully crippling their cards (including the high end) that could easily be much faster with TMU limitations. But another way is that they've been consistently throwing out 512MB cards to compete against cheaper 256MB NVIDIA cards. So yeah, the 256MB X1900XT is a great idea.

RE: Meh
By shadowzz on 7/21/2006 7:46:48 PM , Rating: 2
According to the INQ a few months ago this was supposed to be the new performance leader and clocked at 700 mhz+ core, possibly 750.

Did it come with photoshopped images?


Yeah nobody likes the Inq
By Sharky974 on 7/21/2006 7:51:51 PM , Rating: 2
Unless they post good Nvidia rumors, I've noticed. Then nobody says anything.

By Sharky974 on 7/21/2006 7:54:21 PM , Rating: 2

*Inq posts Nvidia's next GPU has taped out and will be very powerful*


That's basically the case repeated across the internet ad infinitum.

RE: Meh
By Fenixgoon on 7/21/2006 8:43:42 PM , Rating: 2
ATI has a totally different architecture, though. It's not like ATI has the exact same cards as NVIDIA but somehow needs 512MB of RAM to compete with them.

RE: Meh
By jmke on 7/27/2006 9:28:09 AM , Rating: 2
but somehow that statement is completely untrue regarding onboard memory.

GDDR4 !!!
By S3anister on 7/21/06, Rating: 0
RE: GDDR4 !!!
By sandytheguy on 7/21/2006 7:20:52 PM , Rating: 2
I wish CPUs would have skipped DDR2 altogether and gone straight to DDR3. I'm not a big fan of having many different types of memory laying around.

RE: GDDR4 !!!
By ChronoReverse on 7/21/2006 7:52:47 PM , Rating: 4

GDDR generations aren't the same as DDR.

GDDR3 is pretty much DDR2.

RE: GDDR4 !!!
By Targon on 7/31/2006 10:34:21 AM , Rating: 2
I was under the impression that GDDR3 has a lower latency than DDR2. It's not just the clock frequencies that need to be looked at.

anyone bet they licensed HIS' cooler?
By ElFenix on 7/21/2006 9:29:35 PM , Rating: 2
Makes sense. ATI already has an AIB vendor that's done the work in making a cooler that is quiet and works well. No need to reinvent the wheel, and it adds to HIS' bottom line.

RE: anyone bet they licensed HIS' cooler?
By theprodigalrebel on 7/21/2006 9:51:52 PM , Rating: 2
I thought HIS's IceQ3 cooler was a reworked Arctic Cooling Silencer?

By Hare on 7/24/2006 2:29:04 AM , Rating: 2
AC silencer with a different sticker :P

Regarding Crossfire
By plimogs on 7/24/2006 8:58:01 AM , Rating: 2
Does anyone think that there's an off-chance for crossfire compatibility between the x1950 and x1900? As to whether or not this would be a good idea is another question...

RE: Regarding Crossfire
By Master Kenobi on 7/24/2006 11:27:32 AM , Rating: 2
Since it's still in the X19 family, probably. A good idea? Maybe. But if the clock speeds are radically different I'd say no; if they are pretty similar (and they most likely will be), it won't matter much.

RE: Regarding Crossfire
By Jkm3141 on 7/26/2006 1:30:10 AM , Rating: 2
I seriously doubt it. I seem to recall in the first talks of CrossFire that the only thing you needed was the same number of pipelines and RAM amount (and I just assume the same RAM type, if the amount is required). It wouldn't make sense to have GDDR3 and GDDR4 work together like that; they are quite different. Clock speed doesn't matter so much at all: an X800XL works fine with an X800XT PE in CrossFire even though the clocks are quite different (though if you say the X8** generation worked fine you are a fool; any CrossFire or SLI is a waste of money IMHO).

By theteamaqua on 7/21/2006 6:50:04 PM , Rating: 2
So this is like the X1900, but with cheaper and better VRAM? If yes then I'm in; RD600 + this + Conroe should be fun!!

But then again the G80 is coming in 2 months, so... I'll wait and see.

By theteamaqua on 7/21/2006 6:50:17 PM , Rating: 2
I mean the X1900XTX.

By yacoub on 7/21/2006 6:56:00 PM , Rating: 2
"The new Radeon X1900 is a low-cost version of the existing Radeon X1900 that only uses 256MB of GDDR3, enabling the card access to the $300 price point. The Radeon X1900XT 256MB will use the same clock frequencies as other Radeon X1900XT cards: 625MHz core and 1.45GHz memory."

This needs clarification. How exactly does the new X1900 compare to the existing X1900XT? Half the RAM and still GDDR3, just simply a lower price point of under $300 instead of $360 for an X1900XT? Uhm, yeah, I'd pay the extra $50-60 and get an XT with double the memory, thanks.

RE: huh?
By KristopherKubicki on 7/21/2006 7:14:15 PM , Rating: 2
It's exactly the same, with half the memory.

Ringbus Finally Useful?
By Slaimus on 7/25/2006 1:37:29 PM , Rating: 2
Looks like now that the memory runs significantly faster than the core, the ringbus can finally be utilized fully. Maybe 3.2GHz 256-bit can travel as 1.6GHz 512-bit on the ringbus?
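The trade suggested above is just the usual bandwidth identity, bandwidth = width x rate / 8: halving the data rate while doubling the width leaves total bandwidth unchanged. A small sketch:

```python
# Doubling bus width while halving the data rate preserves total bandwidth.
def bandwidth_gb_s(width_bits: int, rate_gbps: float) -> float:
    return width_bits * rate_gbps / 8  # bits -> bytes

external = bandwidth_gb_s(256, 3.2)  # GDDR4 pins at 3.2Gbps effective
internal = bandwidth_gb_s(512, 1.6)  # twice as wide, half the rate
print(external, internal)  # both 102.4 GB/s
```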

RE: Ringbus Finally Useful?
By sircuit on 7/26/2006 11:37:36 PM , Rating: 2
What's more interesting is when they are going to offer CrossFire on a single card. With GDDR4 at full speed there's over 102GB/s of bandwidth available, plenty to feed 2 GPUs at once. I'm sure AMD can muster up some kind of HyperTransport for the GPUs as well.

By theprodigalrebel on 7/21/2006 8:04:00 PM , Rating: 3
I must say...that is one beautiful looking video card.

By MemberSince97 on 7/21/2006 8:01:21 PM , Rating: 2
Thanks for the first pic Kris...

Catalyst reveals more
By ImmortalZ on 7/29/2006 2:54:32 AM , Rating: 2
From the Catalyst 6.7 release notes :

Hot Plugging an HDMI TV to the DVI port of an ATI Radeon X19x0 series may result in the HDMI detection message box failing to appear under the Windows Media Center Edition operating system. Further details can be found in topic number 737-22809


x1950 Crossfire configurations?
By Marvin L on 12/11/2006 8:00:16 PM , Rating: 2
I bought 2 ATI X1950 CrossFire Edition video cards this week, and now I'm not sure whether the 2 CrossFire Editions will work together or if I need one card to be an X1950 XTX.



Copyright 2016 DailyTech LLC.