
Six weeks from now, the world will get the first retail Radeon X2900 XTX

Late yesterday DailyTech was briefed on the final details for the upcoming R600 retail specifications, just in time for everyone to go on vacation for Chinese New Year.  AMD has briefed its board partners on the specifications that will appear on the marketing material for the card launches.

AMD's guidance claims R600 will feature 700 million transistors.  By comparison, the Radeon X1900 series R580 GPU incorporated 384 million transistors into its design; the half-generation before that, R520, only featured 320 million.  

As disclosed by DailyTech earlier this year, the GPU features a full 512-bit memory interface with support for GDDR3 and GDDR4.  The R580 was similar in this regard, as it also supported both GDDR3 and GDDR4.
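For a sense of what the wider bus buys, peak memory bandwidth scales linearly with bus width.  A rough sketch, using purely hypothetical clock figures since final frequencies have not been announced:

```python
def peak_bandwidth_gbs(bus_width_bits: int, effective_rate_mhz: float) -> float:
    """Peak theoretical memory bandwidth in GB/s.

    effective_rate_mhz is the effective (double data rate) transfer rate,
    not the base memory clock.
    """
    return bus_width_bits / 8 * effective_rate_mhz * 1e6 / 1e9

# Hypothetical numbers for illustration only:
print(peak_bandwidth_gbs(512, 2000))  # 512-bit @ 2.0 GHz effective -> 128.0 GB/s
print(peak_bandwidth_gbs(256, 2000))  # same memory on a 256-bit bus ->  64.0 GB/s
```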

The R600 boasts 320 stream processors.  ATI does not clearly define what a stream processor is, though insiders claim the count reflects 64 unified shaders that are each 5-way (64 shaders, 5 ALUs each, for 320 stream processors).
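A quick sanity check on that arithmetic, as a minimal sketch (the 5-way organization is the insiders' claim, not an ATI-confirmed figure):

```python
# Sanity-checking the rumored stream processor math.  The 5-way figure is
# the insiders' assumption; ATI has not defined the term "stream processor".
unified_shaders = 64
alus_per_shader = 5  # 5-way units, per the claim above

stream_processors = unified_shaders * alus_per_shader
print(stream_processors)  # 320, matching AMD's marketing figure
```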

According to company guidance, on March 30, 2007, AMD will initially debut the R600 as the ATI Radeon X2900 XTX in two separate configurations: one for OEMs and another for retail.  The OEM version is the full-length 12" card that will appear in high-end systems.

ATI guidance claims the X2900 XTX retail card comes as a two-slot, 9.5" design with a vapor chamber cooler.  Vapor chambers are already found on high-end CPU coolers, so it is no surprise to see such cooling on a high-end GPU.  The OEM version of the card is a 12" layout and features a quiet fan cooler.

1GB of GDDR4 memory is the reference configuration for Radeon X2900 XTX.  Memory on the reference X2900 XTX cards was supplied by Samsung.

Approximately one month later, the company will launch the GDDR3 version of the card.  This card, dubbed the Radeon X2900 XT, features 512MB of GDDR3 and lower clock frequencies than the X2900 XTX.  The X2900 XT is also one of the first Radeons to feature heatpipes on the reference design. 

AMD anticipates the target driver for X2900 XT to be Catalyst 8.36.  A WHQL release of the X2900 XTX driver will appear around the Ides of March.

Radeon X2900 will feature native CrossFire support via an internal bridge interface -- there is no longer a need for the external cable found on the Radeon X1000 series CrossFire.  There is also no dedicated Master card, as was the case with other high-end CrossFire setups; any Radeon X2900 can act as the Master card.

A much anticipated feature, native HDMI, will appear on all three versions of Radeon X2900.

Radeon X2900 features one 6-pin and one 8-pin (2x4) VGA power connector; the 8-pin connector is also backwards compatible with 6-pin power supply cables.

AMD claims the R600 target schedule will be a hard launch -- availability is expected to be immediate.  Board partners will be able to demonstrate R600 at CeBIT 2007 (March 15 - 21), but the only available cards will be reference designs. 

Why was there such discrepancy in the board layouts and designs up until now?  An ATI insider, who wished to remain nameless, states: "The original Quad-Stealth design is what we built the R600 on: GDDR4, full-length and dual-slot cooling.  As the silicon was further revised, [ATI] took up several alternative designs, which eventually brought GDDR3 and heatpipes into the specification.  The release cards demonstrate the versatility of R600 in each of these unique setups."

Final clock frequencies will likely remain estimates until later this month.


Comments



By EastCoast on 2/16/2007 11:56:58 PM , Rating: 0
What is the point of salivating over this card if the power consumption is greater than its overall performance? I mean, for the amount of power required to run these cards, what will they equate to in performance? This is the most important aspect of R600, IMO. If the R600 only performs 10 FPS (for example) faster than the 8800 GTX, then it's worth it to get the 8800 instead, since the 8800 uses less power and costs less, and I am not interested in DX10 at this time.

I can understand people's excitement over the R600. However, I do not understand the willingness to buy this without knowing what kind of performance to expect for the increased power consumption. Another cause for concern is the minimum 12V rail requirement for this card. Will it be better to get a single-rail PSU, or will you still be safe with your 4- or 6-rail PSU?




By paydirt on 2/19/2007 9:38:53 AM , Rating: 2
Either buy or don't buy, your card will likely be obsolete in a few years. Is the obsolescence planned? Hmmm.


By timmiser on 2/20/2007 9:42:48 PM , Rating: 2
I understand what you are saying, but there is one thing to keep in mind. This new round of high-powered video cards means you will most likely have to invest in a multi-rail power supply that runs upwards of $300-$500! In addition to the need for increased wattage, you won't be able to boot up with a single-rail power supply regardless of how many watts it is.

I have an 8800 GTX, which requires two PCI-E direct feeds from the PS. My high-quality 500W single-rail PS with power adapters could not feed the video card what it needed, and I had to upgrade to a triple-rail PS, and I'm not even planning on going SLI. If you did want to SLI your 8800 GTXs, you'd need a PS with 4 PCI-E plugs to run optimally. Those power supplies are pricey.


By Whedonic on 2/17/2007 12:23:58 AM , Rating: 5
Agreed. While Intel and AMD have been making CPUs cooler and more efficient, it looks like the current GPU tactic is to pack as many transistors on the board as possible...use brute force to boost performance. I seriously hope that there's a new architecture in the works for the next cycle that won't require 12 inches and 700 million transistors.


By saratoga on 2/17/2007 1:19:17 AM , Rating: 2
Not likely. For embarrassingly parallel problems like graphics, there's a fairly hard limit on performance per watt for a given process node. You can give up clock speed, or give up shaders, but either drops your performance almost proportionally.
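A toy model of that proportionality, assuming only the standard dynamic-power relation (power scales with active units x frequency x voltage squared); all numbers are arbitrary illustrative units:

```python
# Toy model: for an embarrassingly parallel workload, throughput and dynamic
# power both scale ~linearly with (shaders x clock) at a fixed voltage, so
# performance per watt is roughly pinned by the process node.

def perf(shaders: int, clock_ghz: float) -> float:
    return shaders * clock_ghz

def power(shaders: int, clock_ghz: float, volts: float = 1.0) -> float:
    return shaders * clock_ghz * volts ** 2  # classic dynamic-power relation

for shaders, clock in [(64, 1.0), (32, 1.0), (64, 0.5)]:
    p, w = perf(shaders, clock), power(shaders, clock)
    print(f"{shaders} shaders @ {clock} GHz: perf={p}, power={w}, perf/watt={p / w:.2f}")
# Halving shaders or halving clock halves both columns; perf/watt stays 1.00.
# Efficiency only really improves by also lowering voltage or moving to a
# smaller process -- hence the "fairly hard limit" per process generation.
```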


By Trogdor on 2/19/2007 11:42:45 AM , Rating: 2
The retail R600 will not be 12" long; that's only for the OEM/SI card. Retail cards will be 9.5" dual-slot -- around the same size as, or possibly slightly shorter than, the 8800.


By Targon on 3/14/2007 8:44:17 AM , Rating: 2
With a CPU, there is only so much of a performance increase you can get by adding more cores. With a GPU, every pixel pipeline that is added contributes that much more to the performance of the video card. That's why graphics processors have continued to increase in transistor count: the number of shaders and pixel pipelines keeps growing.

Now, an interesting side effect of the AMD/ATI merger is that we may see Radeon GPUs move to 65nm and 45nm a LOT faster than in previous transitions. That should help in the size/power department. It won't help with the widening data path between GPU and video memory, though. Remember, the R600 is supposed to use a 512-bit connection to memory, which is only part of the reason for the size of the cards.

When you need to fit a gig of memory on a video card, that will also increase the physical size of the cards until higher density memory chips are released.
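Some rough chip-count arithmetic behind that point (the per-chip densities are assumed, typical-of-the-era values):

```python
# Rough board-layout math: how many memory devices does 1GB take?
def chips_needed(total_mb: int, chip_density_mbit: int) -> int:
    return total_mb * 8 // chip_density_mbit

for density in (512, 1024):  # Mbit per chip (assumed typical densities)
    print(f"1GB from {density} Mbit chips: {chips_needed(1024, density)} devices")
# 512 Mbit parts -> 16 devices.  With 32-bit-wide chips that is exactly what
# a 512-bit bus requires (16 x 32 = 512), so the wide interface largely fixes
# the chip count -- and therefore board area -- regardless of density.
```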


By saratoga on 2/17/2007 1:16:58 AM , Rating: 2
quote:
What is the point of salivating over this card if the power consumption is greater than its overall performance? I mean, for the amount of power required to run these cards, what will they equate to in performance?


How exactly do you compare watts to FPS? They're not the same thing. For example:

"10 watts faster at Quake" doesn't make any sense.


By Axbattler on 2/17/2007 5:56:55 PM , Rating: 2
Card A consumes 200W to deliver 150 FPS in a certain application, under a certain setting.
-> Each watt gets you 0.75 FPS in that scenario.

Card B consumes 300W to deliver 180 FPS in the same application using the same setting.
-> Each watt gets you 0.6 FPS in this scenario.

Assuming those two cards were, at the time of the test, running at (or near) 100% load, you can say that Card A is more efficient than Card B, at least under the test circumstances. And under those circumstances, Card A may be more attractive to some, especially if Card A runs much cooler (as it should). Of course, if one is only interested in more FPS, then Card B is the better choice. And in situations where the game does not reach triple-digit frame rates, people may be willing to pay the premium in power consumption over the more efficient choice if it means going from an unplayable frame rate to a playable one.

[I do not know anything about the X2900's performance or power consumption, so I am not agreeing or disagreeing with the quoted statement]
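The arithmetic above as a minimal snippet -- the FPS and wattage figures are the hypothetical values from the example, not measurements:

```python
# Perf-per-watt comparison using the hypothetical figures above.
cards = {"Card A": (150, 200), "Card B": (180, 300)}  # name: (fps, watts)

for name, (fps, watts) in cards.items():
    print(f"{name}: {fps / watts:.2f} FPS per watt")
# Card A: 0.75 FPS per watt
# Card B: 0.60 FPS per watt -- faster outright, but ~20% less efficient.
```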


By brute1248 on 2/17/2007 7:23:43 AM , Rating: 4
That "review" has come under extreme criticism for many errors in the article that casts a shadow on its validity.

-------
Chillin


By Tyler 86 on 2/21/2007 11:16:09 AM , Rating: 2
TSSAA: Transparency Super-Sample Anti-Aliasing
AAA: Adaptive Anti-Aliasing

Both techniques improve the image quality of textures with transparent components, making fencing and railing look more realistic (less blocky, with sharper edges but smoother contours, less moiré, and undistorted at a distance)...
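A minimal sketch of why supersampling the alpha test helps: with one alpha sample per pixel, an alpha-tested texel edge is either fully kept or fully discarded, while multiple samples recover fractional coverage. The alpha ramp, threshold, and sample pattern below are illustrative assumptions, not any particular GPU's implementation:

```python
# Minimal sketch of transparency anti-aliasing: supersampling the alpha test
# recovers fractional coverage at the edge of an alpha-tested texture (e.g.
# a chain-link fence).  Ramp, threshold, and sample positions are illustrative.

ALPHA_THRESHOLD = 0.5

def alpha_at(x: float) -> float:
    """Toy alpha ramp across a texel edge: opaque at x=0, transparent at x=1."""
    return max(0.0, min(1.0, 1.0 - x))

def pixel_coverage(px: float, samples: int) -> float:
    """Fraction of sub-pixel alpha-test samples that pass for the pixel at px."""
    passed = sum(
        alpha_at(px + (i + 0.5) / samples) >= ALPHA_THRESHOLD
        for i in range(samples)
    )
    return passed / samples

for px in (0.0, 0.25, 0.5):
    print(f"pixel {px}: 1 sample -> {pixel_coverage(px, 1)}, "
          f"4 samples -> {pixel_coverage(px, 4)}")
# With 1 sample each pixel snaps between 1.0 and 0.0 (the blocky edge);
# with 4 samples intermediate coverage appears, giving smoother contours.
```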


By Enoch2001 on 2/17/2007 11:07:10 AM , Rating: 2
quote:
http://level505.com/2006/12/30/the-full-ati-r600-t...

I don't mind waiting a bit longer for this card.


Laughable -- even at 26 fps, ALL of those cards in that questionable "benchmark" perform like arse. When I can do 60 fps or higher at 1600 x 1200 with maxed-out effects, then and only then will any video card be worth it.


"Let's face it, we're not changing the world. We're building a product that helps people buy more crap - and watch porn." -- Seagate CEO Bill Watkins











botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki