
My personal thoughts on next generation Radeon

Well, unless you've been living under a rock since Monday, you've probably noticed we have access to a few R600 cards.

One thing that concerns me is that many of our readers may think the 750 MHz core clock on the Radeon HD 2900 XTX, if it even comes to market, will be the final clock.  I can almost guarantee you that if anyone decides to bring this card to the retail channel, the core clock will get a bump.  Keep in mind that Sven managed to overclock the core to 845 MHz on the XT card.

In addition, I was quite pleased with the HD 2900 XT tests.  I do believe when overclocked -- and this card seems to have a bit of headroom -- the card does well against the 8800 GTS and starts to encroach on 8800 GTX territory.  In our testing, it doesn't beat a GTX, but then again it won't be a $500 graphics card either.  That task was reserved for the XTX, and I think it's quite obvious that increasing the memory size and frequencies on the R600 GPU won't be enough to unseat the 8800 GTX.

I won't pretend that the tests we did were exhaustive.  Since DailyTech does not sign embargos, we sort of get what we can take when it comes to publishing early tests.  Scott Wasson, Ryszard Sommefeldt and Anand Shimpi are true experts when it comes to GPU performance, and if you have any reservations about our preliminary examinations, certainly wait for their benchmarks. 

In conclusion, what did the HD 2900 XTX benchmarks really show?  I think the most obvious answer is that the jump from 512MB of GDDR3 to 1GB of GDDR4 is certainly not enough to make the XTX a viable option for R600.  As we mentioned before, this card is at least partially scrapped at this point, save for a few OEMs who will be putting it into workstations.



By EastCoast on 4/26/2007 10:24:16 PM , Rating: 2
I understand you guys like to be thorough, but are you sure that is the final retail version? I just can't get out of my head how those results look like an OC'd XT and not an XTX. Come on, spill the beans...

By KristopherKubicki on 4/26/2007 10:40:42 PM , Rating: 2
The XTX and the XT are identical except for the footprint and memory type. It's likely the core clock will change as well, but the core is physically the same.

By Snowy on 4/26/2007 10:49:31 PM , Rating: 2
This is kinda off the "retail" part, but do you have to use the 8-pin power connector? I've heard you can just use the 6-pin, and then if you want to overclock, you need to plug in another 6- or 8-pin. Can you confirm this?

By Armorize on 4/26/2007 10:52:14 PM , Rating: 2
I posted this in the other article you mentioned, but I'd like to get something out there.

No one has mentioned CrossFire vs. SLI yet. Do you only have one of each ATI card? I'm just curious whether their performance will be different with CrossFire or not. Also, these cards may have other little packages on them that are bogging things down; I read on the INQ that it has HDMI sound capabilities. It could be that these aren't the real cards that will be released to the public, only reference cards. Just a thought.

By KristopherKubicki on 4/27/2007 12:35:32 AM , Rating: 6
I was curious about the lack of HDMI as well, though one partner told me on the XT the only HDMI capabilities would be via a dongle (so no sound obviously).

Someone "reached out and touched us" today and gave us 2 Radeon XT cards for use in Crossfire and a different driver (350MB ?!?!). Expect benchmarks this week.

By CyborgTMT on 4/27/2007 1:56:04 AM , Rating: 2
If you can get your hands on that XTX again, I'd be interested in seeing another round of benches with the 'new' driver.

By nkumar2 on 4/27/2007 3:06:25 AM , Rating: 2
As a reader, I'd really like to know the best performance for future reference. Is there no way you guys could actually get some numbers for these cards under Crysis? I'd love to know how it holds up there. I'd assume someone should have the darn demo to play with.

By CyborgTMT on 4/27/2007 5:15:36 AM , Rating: 2
The Crysis demo won't be released until at least June, with a limited number of beta keys given out (20,000). There are claims of a 'leaked' version floating around the net, but since I have not looked for it, I don't know if that is true or a rumor. Either way, it would be an early build not suitable for testing, and very unethical to use.

By jazkat on 4/30/2007 8:57:56 AM , Rating: 1
The R600 is built for future games, not DX9. In DX10, the 8800 GTX will get a trouncing because the XTX will be 40% faster :) I'm only upgrading for DX10; if I want to play DX9 games, I'll buy a DX9 card now. Those who say they'll get the 8800 GTX for both will come unstuck, because in DX10 the 8800 GTX won't perform as you all think. Here are some numbers:

The Radeon HD 2000 series uses a superscalar architecture: with its Vec5 units, ATI can process 5 scalar instructions per clock. It has 64 unified shaders, and multiplying these two numbers gives the impressive figure of 320 stream processors.

The Nvidia G80 GTX can only handle Vec4 instructions in a scalable way, or four independent instructions, so it looks like ATI might have an advantage here. Nvidia can process 128 stream processor instructions per clock, but at the much higher clock speed of 1.35 GHz, while the R600 can do 320 stream instructions per clock at close to half the speed, 750 MHz.

In raw numbers, the G80 at GTX clocks can handle 172,800 million instructions (shaders, if you like) per second, while the R600 can handle 240,000 million per second. If this turns out to be right, ATI could run shader instructions, especially the unified ones, up to forty per cent faster.

This will only be noticeable in next-generation games and benchmarks.

Hahaha, it won't be long till we see the DX10 3DMark, so we can all witness the GTX getting a total hammering.
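The throughput arithmetic in the comment above can be sanity-checked with a quick sketch. The shader counts and clock speeds are the figures quoted in this thread, not verified specifications:

```python
# Peak scalar shader throughput, using the numbers quoted in the comment above.
def shader_ops_per_sec(stream_processors, clock_hz):
    # Best case: one scalar instruction per stream processor per clock.
    return stream_processors * clock_hz

# G80 GTX: 128 scalar stream processors at a 1.35 GHz shader clock.
g80 = shader_ops_per_sec(128, 1.35e9)
# R600: 64 Vec5 units * 5 = 320 scalar ALUs at 750 MHz.
r600 = shader_ops_per_sec(64 * 5, 750e6)

print(g80 / 1e9)   # 172.8 billion ops/s, i.e. the "172,800 million" figure
print(r600 / 1e9)  # 240.0 billion ops/s, i.e. the "240,000 million" figure
print(r600 / g80)  # ~1.39, the source of the "up to 40% faster" claim
```

Note this is the best case for the R600: a Vec5 unit only delivers 5 ops per clock when the compiler can pack five independent instructions together, which is exactly the VLIW/driver caveat raised further down this thread.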

By Proteusza on 5/14/2007 9:05:18 AM , Rating: 2
Supporting lots of shaders is all well and good, but let's face it: current titles do use pixel shaders in fairly large amounts, and can require lots of pixel-pushing power and high memory bandwidth. With its 512-bit bus, the card has the highest memory bandwidth ever seen, the most shading power ever seen, and not a bad fillrate, yet it still isn't great. I think it's the fillrate that kills it, that and drivers that don't fully utilize its VLIW architecture yet.

Any future games will still rely on fillrate to some degree. Yes, shaders will get more intensive, and maybe then the card will do better. But it seems that it just doesn't have the fillrate to compete with the 8800 cards.

Someone mentioned somewhere that what Nvidia did right is the balancing of the 8800 series. Not too heavy in any one direction: good shader performance for today's and tomorrow's games, good fillrate, good memory bandwidth. ATI got the balance wrong, and it looks like they are suffering for it.

Why buy a card built for tomorrow's games when you can buy one (or could have, even 6 months ago) that performs well in today's and tomorrow's games?

By Zoomer on 4/27/2007 8:03:10 PM , Rating: 2
That sounds suspiciously like a large, reputable company that also manufactures top-end motherboards, laptops, etc. It was the same for the x1950 series.

Don't post the driver set size!

By AlabamaMan on 4/28/2007 10:18:00 PM , Rating: 2
So Kristopher, are you still on for those tests? The week is almost over.

why i think dt didnt have the latest revision
By nkumar2 on 4/27/2007 12:11:29 AM , Rating: 2
Why I think they didn't have the final revision is this: the card is clocked at 745 with memory at 2040 or something, but the final specs are an 800 core and 2200 for the memory. So DailyTech doesn't have the final retail card, which is why I think their card revision isn't final either. Shoot, call me a fanboy, but I flip sides every 6 months to whoever is faster.

By KristopherKubicki on 4/27/2007 12:32:49 AM , Rating: 4
That is certainly a good point that we brought up in another post - the card can overclock to 845 fairly stably. It would be ludicrous to think ATI would ever release the XTX at the clocks it's at right now.

That being said, is a 50 MHz bump going to do it? I don't think so, and neither do the majority of OEM partners who have turned down this card.

RE: why i think dt didnt have the latest revision
By johnsonx on 4/27/2007 3:11:35 AM , Rating: 4
.....the majority of OEM partners that have turned down this card.

I think that's the real issue here. If many OEMs are saying that the XTX isn't worth their bother, then it's quite clear no amount of tweaking, OC'ing, final revisions, or driver optimizations is going to bring the XTX close to competing with the 8800GTX.

It appears to me that the ONLY way the XTX could be marketable now is if they cut the memory size down to 512MB (to lower the cost) and boost the clocks as high as possible. If they can split the price difference between the 8800GTS 640 and the 8800GTX ($450 or so), then they might have something. Sure, they won't have the top-performing card this time around, but at $450 they'll outperform all the GTS's and nicely undercut the GTX's they can't match. Then the regular XT could split the prices between the 320MB and 640MB GTS's, and outperform both. The perfect final piece would be an HD 2900 Pro with slightly lower clocks and 256MB of RAM, priced between the 8600GTS and the 320MB 8800GTS.

In the above scenario then, NVidia would get the fastest video card claim, while AMD would enjoy victory at every price point below $500.

AMD could coin a new variation on the old phrase: If you can't beat 'em, then go between 'em!

RE: why i think dt didnt have the latest revision
By FITCamaro on 4/27/2007 11:43:13 AM , Rating: 2
Correct me if I'm wrong, but with a 512-bit memory bus, doesn't the card need at least 512MB of RAM on it? I've never seen a card whose memory bus width in bits exceeded its memory size in MB. I assumed that was because you need that much memory, or the bus would be able to provide far more data than there is room for.

By johnsonx on 4/27/2007 1:19:08 PM , Rating: 2
No, I don't think that has to be true. It depends on the density and layout of the memory chips.

That said though, I'd imagine the mythical 2900Pro I proposed would have a 256-bit memory bus. Just lowering the clocks a little and reducing the amount of memory wouldn't be enough to make it a proper $200 mid-range part, either from a performance or a manufacturing cost standpoint.

RE: why i think dt didnt have the latest revision
By Martimus on 4/27/2007 2:12:27 PM , Rating: 2
I had a 128-bit video card from Tseng Labs (the ET6000 VPU, PCI interface, 128-bit, 4MB MDRAM) and it only had 4MB of RAM. Having 1MB per bit of memory bus isn't a requirement; it's really just an arbitrary ratio.

By johnsonx on 4/27/2007 2:58:32 PM , Rating: 3
It's been a long while since I had my hands on an ET6000, but I doubt the 128-bit spec referred to the external memory bus width. There was a time when various vendors claimed all kinds of large 'bitness' that had little basis in fact. I could be wrong of course, but I don't think so.

But I do agree there's no need to have as many megabytes as there are bits in the interface; you just need the right number of chips with the right bit width.

By svenkesd on 4/29/2007 6:57:34 PM , Rating: 3
A 512-bit memory bus means the card can read 512 bits of memory at one time. 512MB of RAM is the total amount of memory (which can be accessed 512 bits at a time).
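As a sketch of the distinction this subthread is making: bus width is set by how many memory-chip data pins run in parallel, capacity is chip count times chip density, and peak bandwidth scales with bus width. The chip configurations and the ~1650 MT/s data rate below are illustrative assumptions, not confirmed R600 board layouts:

```python
# Bus width, capacity and bandwidth are related but independent knobs.
def bus_bits(chips, bits_per_chip):
    # Total bus width: each chip contributes its data-pin width.
    return chips * bits_per_chip

def capacity_mb(chips, mbit_per_chip):
    # Total capacity in MB: chip count times density, converted from Mbit.
    return chips * mbit_per_chip // 8

def peak_bandwidth_gbs(bus_width_bits, transfers_per_sec):
    # Bytes moved per transfer times transfers per second, in GB/s.
    return bus_width_bits / 8 * transfers_per_sec / 1e9

# Sixteen 32-bit, 256Mbit chips: a 512-bit bus and 512MB of memory.
print(bus_bits(16, 32), capacity_mb(16, 256))  # 512 512
# Eight 32-bit, 512Mbit chips: the same 512MB on only a 256-bit bus.
print(bus_bits(8, 32), capacity_mb(8, 512))    # 256 512
# A 512-bit bus at ~1650 MT/s (double-data-rate GDDR3) gives:
print(peak_bandwidth_gbs(512, 1650e6))         # ~105.6 GB/s
```

So capacity and bus width vary independently, which is the point being made above: there is no rule tying 512 bits to 512MB.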

something different in tomshardware forums
By nkumar2 on 4/27/2007 1:06:34 AM , Rating: 2
Someone at the Tom's Hardware forums has very conflicting hints for readers; they say the long card shouldn't have been the one that got reviewed, so I don't know what they mean by that. Well, here is the link; read the whole page and find out for yourself.

By KristopherKubicki on 4/27/2007 2:26:21 AM , Rating: 2
Well, someone bitter there seems to think the long card was shipped to Derek and then Derek gave it to DailyTech. I know they are our competitors, and it would be beyond Omid to admit that DailyTech and AnandTech are separate entities, but the constant whine fest is getting a little tired. If they are upset that DailyTech gets products before Anand and Tom, then they should stop signing NDAs.

And yes, the long card shouldn't have been reviewed, because it doesn't look like it's coming to market. But now everybody knows why.

RE: something different in tomshardware forums
By CyborgTMT on 4/27/2007 5:43:05 AM , Rating: 5
Step 1: Grab an 8800 and spray paint it red
Step 2: Slip a few sleeping pills in Derek's coffee
Step 3: Wait for him to pass out and switch cards
Step 4: Run back across the hall and give us the retail benches
Step 5 (optional): Replace retail XTX before Derek wakes up, or keep it - he won't notice the difference anyway.

By KristopherKubicki on 4/27/2007 6:01:51 PM , Rating: 4
Well Derek works in Raleigh, and the DailyTech office is in Chicago. Those are going to have to be some brutal sleeping pills :-P

By crystal clear on 4/28/2007 6:31:23 AM , Rating: 2
BestofMedia Group and TG Publishing Merger Creates Tech Publishing Powerhouse

Combined company catapulted into top three slot in online tech media

PUTEAUX, France, April 16 /PRNewswire/ -- BestofMedia Group and TG Publishing today announced a merger to form one of the top-three online media publishing companies for technology in the world and one of the top-two in pan-European rankings. The company's competitors include CNET Networks, Inc. and IDG (International Data Group).

Conclusion: the bigger they get, the worse their quality of content becomes.
Small but efficient, like DT, is preferable!

How do ratings work?
By HurleyBird on 4/27/2007 3:30:17 PM , Rating: 1
How do ratings work? Are the DT people the ones who rate posts? If so it seems like there's going to be a huge conflict of interest rating posts that are critical of DT.

RE: How do ratings work?
By Le Québécois on 4/27/2007 5:36:41 PM , Rating: 2
Simple (at least the way I understand it): to rate a comment you have to be a registered user. If you have many comments or a high comment rating, your vote carries more weight than a normal user's... or is it that if you don't have enough comments, or too low a rating, your vote carries less? The minute you reply to a comment or post a new comment, you forfeit your "power to rate" on the news item you commented on.

So basically you get rated up or down by DT readers. Sure, Kristopher and the other editors can vote, but they probably don't carry much weight against all the DT readers.

RE: How do ratings work?
By KristopherKubicki on 4/27/2007 5:55:04 PM , Rating: 2
LQ is correct. It's all user-based. I (and only I) have the ability to moderate comments -- that is, remove a comment -- or "rate 6" a comment. When you see those green comments that you can't mod down, that's a "rate 6" comment.

RE: How do ratings work?
By HurleyBird on 4/27/2007 10:50:16 PM , Rating: 2
For some reason I can't find the controls to rate comments... strange.

RE: How do ratings work?
By Le Québécois on 4/27/2007 11:56:05 PM , Rating: 2
Like I said, if you have posted a comment on a news item, you can't rate anyone commenting on that item.

If you still can't rate comments, maybe it's because your rating is too low and you haven't posted enough comments to compensate.

RE: How do ratings work?
By johnsonx on 4/29/2007 2:37:24 PM , Rating: 2
You also have to have a minimum number of posts yourself before you can rate others' posts. I think you also have to have a decent post rating; in other words, if you post plenty but all your posts get rated down to 0 or -1, you won't be able to vote either.

The secret, then? Babble a lot, and don't piss everyone off every time.

it will own in dx10.
By nkumar2 on 4/27/2007 1:51:49 AM , Rating: 2
OK guys, I ain't a fanboy, I've said it about three times now, but this card has more complex shaders designed largely to perform under DX10, whereas the 8800 series has simple shaders. ATI took a totally different approach, and DailyTech failed to mention that. That is why all of us need to stop bashing each other and DailyTech; DailyTech has said this is not an extensive test. Someone please test this card with DX10 so we know which architecture is more efficient under DX10.

RE: it will own in dx10.
By oddity21 on 4/27/2007 7:29:29 AM , Rating: 2
Since there are no real DirectX 10 products on the horizon, it will probably be a bit tough to make this advantage - if it exists at all - a worthy marketing bullet point.

Remember, 2007 DX10 titles are DX9 games with some SM4.0 effects or DX10-specific performance optimizations thrown in.

RE: it will own in dx10.
By FITCamaro on 4/27/2007 11:50:23 AM , Rating: 2
Good point. ATI designed this card to be more of a DX10 card than a DX9 card. As long as it can play the DX9 games of today at a playable rate with all the details, it's fine. What will really matter is how well the 8x00 series and the 2x00 series handle DX10, as that's what they were really built for.

So if the X2900 series can't beat the 8800 series in DX9 but tromps it in DX10, I'd say that's the better buy. Unfortunately, we won't know until DX10 games are in full swing, and that won't be for at least another year.

RE: it will own in dx10.
By GlassHouse69 on 5/1/2007 12:17:16 PM , Rating: 1
Yeah, that is how I'm rolling with this wave. I am going to get the ATI R600 in the 65nm version (if there even are other versions), tell myself "it will get me more frames than my monitor can refresh," and just assume it will kick butt when it comes to more complex games like... Hellgate: London... moan...

Anyway, it won't be expensive, and Avivo is a nice thing to have. Using it now, it's clearly the best DX9 graphics card for DVD viewing.

RE: it will own in dx10.
By Martimus on 5/3/2007 4:02:04 PM , Rating: 2
The complex shaders that get great performance increases through high clock speeds remind me of the NetBurst architecture. Here we have a 5-stage shader versus a single-stage shader (I think) for Nvidia. Intel had a 20-stage pipeline in early P4s, compared to a 6-stage pipeline for the Athlon. The extra stages allowed the chip to be clocked much higher, but it was far less efficient. I wonder if this architecture is similar, in that it can be clocked much higher but is far less efficient than Nvidia's design. I am just guessing here, because the things I am reading about the differences between these GPUs look an awful lot like the things I read about the differences between the NetBurst and Athlon designs.

By Snowy on 4/26/2007 9:55:33 PM , Rating: 2
Perhaps ATI's "refresh" will be the HD 2900 XTX?
This seems odd, though... 1GB of GDDR4 clocked at 1.1 GHz isn't making any improvement over the 512MB of GDDR3, unlike the 5-10% performance increase we saw when the X1950 XTX came out.

RE: Hmm...
By EastCoast on 4/26/2007 10:22:26 PM , Rating: 2
I thought that was core clock difference and not GDDR3 to GDDR4 difference.

RE: Hmm...
By FITCamaro on 4/27/2007 11:45:11 AM , Rating: 2
X1950XTX and X1900XTX have the same core clock.

PSU used.?
By KTB on 4/27/2007 3:30:45 AM , Rating: 2
Hi Kristopher,

Can you please let us know which PSU was being used with the R600s (w/8-pin PCIe)?

I was guessing a Thermaltake Toughpower, but another look at your shot tells me it's not a PCP&C, Corsair, Hiper, CoolerMaster, SeaSonic, SilverStone, FSP or OCZ...

Most likely a Topower based PSU I'm thinking... was it a Tagan or a Mushkin XP-650AP?

BTW, AFAIK the final-release retail card drivers will not be disclosed until after May 10th, with possibly another one to improve things further toward the 26th. The one before that is an RC right now, and the clocks I heard offline a month back were 845/2200 (core/memory) for the final retail R600 XTX.

I was also hearing back then that at higher detail settings it hammers the XFX 8800GTX XXX, but levels out at lower resolutions and detail.

That's as far as I know from authoritative sources. ;-)

RE: PSU used.?
By oopyseohs on 4/27/2007 10:23:50 AM , Rating: 2
To clarify:

There are two power connectors on the top of the R600. One is a normal 6-pin PCIe connector, and the other is an 8-pin. Not all 8 pins are required for normal operation; the extra 2 pins are there to add more stability and power in "Overdrive" mode. The 1000W Galaxy from Enermax also has this 8-pin connector. Indeed, the card works just fine, and still overclocks quite well, with just two normal PCIe connectors attached.

The unit in use in the picture is a 1100W unit from Tagan.

MSDN Blog Ptaylor on FSX and DX10 interesting
By stance on 5/1/2007 4:05:36 AM , Rating: 2
However, as FSX Launch occurred Oct 17 and we began to get feedback from the community we realized we needed a service pack to address performance issues and a few other issues. So we started a dialog with the community and made a commitment to delivering a service release. The work on SP1 and DX10 is being performed by the same team of people (the Graphics and Terrain team) and thus delivering SP1 has delayed DX10.

Given the state of the NV drivers for the G80 and that ATI hasn’t released their hw yet; it’s hard to see how this is really a bad plan. We really want to see final ATI hw and production quality NV and ATI drivers before we ship our DX10 support. Early tests on ATI hw show their geometry shader unit is much more performant than the GS unit on the NV hw . That could influence our feature plan.

By mars777 on 5/8/2007 11:04:49 AM , Rating: 2
The 2900 XT has Vec5 operations, so its unified shaders, when used as geometry shaders, should be faster than the simple ones from Nvidia, just because physics/collisions/deformations make heavy use of vector code that can run in a single pass there.
Sorry for my bad English, hope you get it.

By Captain Orgazmo on 4/26/2007 11:48:22 PM , Rating: 3
I thought the whole point of switching to new GDDR4 memory was to increase clocks and bandwidth, but the clock speeds are no different compared to GDDR3. I guess, however, that the core limits whatever advantages the GDDR4 delivers. I don't know this for sure, but typically newer memory types have higher latencies at first (like the switch from DDR to DDR2, though now I have SuperTalent 800 MHz DDR2 running at CL3).

Regardless of the performance of the XTX, I am hoping that the release of the 2900XT drives down prices, because it's been several months and card prices haven't moved noticeably. Not everyone can justify spending over $500 on a video card when consoles sell for less, and that's a whole system. Also, if the XT can outperform the GTS for the same money, it is a good buy (assuming ATI drivers work in Vista).

too much optimization for 3dmark and not games?
By nkumar2 on 4/27/2007 12:02:23 AM , Rating: 2
Honestly, they posted a news column a few weeks ago where they said the X2900 XT was beating the 8800 GTX, and Fudzilla stated earlier this week that the newest revision of this card is indeed faster and is clocked at 2200 for the memory. I believe that if it holds up in 3DMark against the Ultra, then it's only a few months before we have drivers that optimize these cards further for DX9 games. I am not a fanboy; I just keep up with what's best. I don't think they had the latest revision. And yeah, some of you are right: with that much memory and speed it should be at least 10% faster no matter what, so this revision did have some kinks in it.

I think ATI has been optimizing too much for 3DMark06 when they need to get off their butts and optimize for real-world gaming. The OC'd 2900 XT scores 14,000 in 3DMark06, which is insane. If it can hold its own against the 8800 in 3DMark, then it should do the same in games. That's what ATI has been doing: trying to beat the competition in 3DMark, not real games.

By GlassHouse69 on 5/1/2007 12:19:28 PM , Rating: 1
The Ultra will be $300 more than this card according to MSRP release info.

The Ultra is a GTX pushed even higher, aka a toaster. Eh, no point really until Nvidia goes 65nm with that. Then that will probably be the killer.

ATI better off without AMD?
By Eurasianman on 4/26/07, Rating: 0
RE: ATI better off without AMD?
By noirsoft on 4/26/2007 8:56:41 PM , Rating: 2

C'mon 7900GS!!! Keep a chuggin'!... in Vista??? ROFLMFAO!!!

I'm not sure what you are getting at. Vista handles DX9 cards just fine. There may be a slight performance decrease due to driver writers needing time to get up to the new architecture, but it's hardly laugh-worthy.

By strider1984 on 4/27/2007 3:38:53 AM , Rating: 2
Is it possible to run some benches at a resolution of 2560x1600 with 16xAA on both the 8800GTX and the HD 2900 XT? It would at least be very interesting ;)

it has to be
By AntDX316 on 4/27/2007 4:33:53 AM , Rating: 2
The ROPs have to be the problem: the 8800GTS has 20, the 8800GTX has 24, and the R600 has only 16. The 2900 XTX and XT both have only 16, and they show very similar results in the benchmarks.

Does Higher Clocks Win the Day?
By EastCoast on 4/27/2007 9:53:41 AM , Rating: 2
If ATI is increasing core/memory clocks to beat Nvidia's 8800 series, then I see a problem. The 8800 series gets great performance at lower core/memory clocks, while the HD series needs higher clocks (judging from the benches produced so far). What this means (if the retail versions bear this out) is that the 8800 is more efficient at producing performance than the HD series: less raw power needed for more performance.

Was the test done right?
By Mr Anderson on 4/27/2007 10:39:19 AM , Rating: 2
I heard the 2900s are capable of 24xAA or AF (I'm not sure which), while the 8800s are only capable of 16xAA or AF (again, not sure which). Could it be that the cards were tested with these settings? It would look like this:

2900XTX - max detail - 24xAA 8xAF
8800GTX - max detail - 16xAA 8xAF

If this is the case, I could see why there's such a performance difference.

Native crossfire?
By 457R4LDR34DKN07 on 4/28/2007 8:44:09 PM , Rating: 2
My question is: if you no longer need a dongle for CrossFire, would it be possible to run CrossFire on a non-CrossFire platform, such as an SLI board?

Does it really matter?
By stangman on 5/5/2007 7:47:43 AM , Rating: 2
Does any of this really matter? It is not smart to draw conclusions from any single source (especially a "preview") about the performance of an unreleased product. The simple fact is some of us will go out and buy an HD 2900 XT/XTX and not think twice about it, regardless of whether we own an 8800 or not. Those among us who really care which product is superior are actually going to take the time to do real-world comparisons of their own and draw conclusions based on their own usage.
Anyone who is just curious about which is faster, so they can post their little piece in the "what video card should I buy" threads in various forums, should at least wait until there is a large pool of reviews to draw conclusions from. Most results will vary due to variables such as game settings, driver settings, complete system specs, drivers, operating system, and background apps, so trusting a single source (regardless of how much you support the site) is just stupid anyway. A smart consumer draws information from as many sources as possible before making a decision.

The card will sell once it gets to market. If it's a little bit faster, a little bit slower, better for some games and not others, it really doesn't matter. You put the choice out there and people will take it.
And again, why bother bringing up DX10 performance? Do you REALLY think a DX10-exclusive game is just around the corner, considering how few compliant systems are available? When you release a game you need to target the broadest audience possible, and right now that is not the DX10 market. By the time we have more than a handful of DX10 games, the 8800 and HD 2900 will have been replaced with newer series anyway. If you're looking to buy an HD 2900 or 8800 today, the sole reason should be that they perform better at high resolutions and high detail settings than the X1950 XTX or 7900GTX/7950GX2, and that's all there is to it. Buying a $500 card to play at 1280x1024 is a waste of money, and with a bleak DX10 future for these specific cards, the only logical thing to do is slap one in with your 1920x1200 display, or slap a pair in for 2560x1600; otherwise you're not going to get your money's worth or use the card to its full potential.

And about comparing a "stock" card to an overclocked card: it's really a non-issue. If you can afford $500 for an 8800GTX, you may as well toss in another $20 for an overclocked version. Most of the 8800 gamers I know have their cards clocked at 620+ core and usually 1800+ memory, and since most companies offer both reference and overclocked versions of their products, the overclocked versions sell just as well. Consider the possibility that most 8800GTX owners own a factory-overclocked model or overclock it themselves: would they be interested in seeing how a new product fares against a reference card slower than their own? Or against an overclocked model similar to their own? Not that I take that as the intention of these previews, but it's not hard to think for a moment before hitting the "post comment" button.

If it outperforms the X1950 XTX, then it's a good step for AMD. If it is in the same league as the 8800 series, then it will be an attractive product at the right price. If it's faster than the 8800 series, then it'll probably sell even better. Don't think for a second that the money invested in product design, testing, production, and marketing will go to waste...

By scrapsma54 on 4/27/2007 10:28:36 AM , Rating: 1

Too little too late
By HurleyBird on 4/27/07, Rating: -1
RE: Too little too late
By KristopherKubicki on 4/27/2007 4:16:58 AM , Rating: 2
Where did you read it was a watercooled card? It wasn't.

RE: Too little too late
By HurleyBird on 4/27/07, Rating: 0
RE: Too little too late
By HurleyBird on 4/27/07, Rating: 0
RE: Too little too late
By HurleyBird on 4/27/07, Rating: 0
RE: Too little too late
By Axbattler on 4/27/2007 8:37:27 AM , Rating: 4
The criticism is valid, but it doesn't really change the outcome. If the XTX were about 10% faster than the stock 8800GTX, and we saw this 'near-Ultra card' nudge past it thanks to the overclock, then I would shout deception too. As it is, the numbers give me enough for a first look. I can wait for AT to do a full review. Plus, they could very well have not mentioned the OC'd card at all.

RE: Too little too late
By HurleyBird on 4/27/07, Rating: 0
RE: Too little too late
By smut on 4/27/2007 1:12:04 PM , Rating: 2
I clock a GTS at 650 on air cooling. I know the GTX can reach 650 because a poster said he has two GTX cards running in SLI at 650+ on air cooling.

Regardless, can you post these links showing "ATI guys" and "NDA guys" saying these numbers are crap? I'd like to see the info, as it sounds interesting, and it's always nice to see proof when people claim things.

And mistakes in a PREVIEW (it wasn't a review) have no bearing on whether it's true or false, so why even bring that up?

RE: Too little too late
By rbuszka on 4/27/2007 1:46:10 PM , Rating: 1
It's important because DailyTech is running news items about the demise of the R600 as a serious performance contender to the 8800. If the numbers aren't valid due to the methodology employed in testing (or some other reason), then the performance claims DT is making are faulty.

And if the claims are false, then it isn't news.

(DT: don't kiss your credibility goodbye.)

RE: Too little too late
By HurleyBird on 4/27/07, Rating: -1
RE: Too little too late
By johnsonx on 4/27/2007 1:38:16 PM , Rating: 3
But if the XTX really does perform well, then why are the OEMs passing on it? Or is KK lying about that too?

It may well be that some of DT's numbers are a little low for various reasons. However, it still seems pretty clear that the XTX won't be close enough to the 8800 GTX to be marketable at the required price point (the OEMs know EXACTLY how these cards will perform). The OEMs would likely rather offer regular and OC versions of the XT at GTS price points, where they can make money, than fight a losing battle with the XTX at the GTX price point.

RE: Too little too late
By HurleyBird on 4/27/07, Rating: 0
RE: Too little too late
By IntelGirl on 4/27/2007 9:47:50 PM , Rating: 1
OMG you pathetic ATI fanboys. Can't you just admit that the 8800 crushed the 2900 and let it be?

Nothing you say or do will change that fact. You can argue and blame this and that, but in the end, the 8800 will still be superior to the 2900.

Just let it go. There are better things in life to do, go enjoy it.

RE: Too little too late
By HurleyBird on 4/27/2007 10:53:56 PM , Rating: 2
Someone with the screen name "IntelGirl" calling people fanboys, too funny.

In any case, I think the sanest thing to do, given the widespread criticism of the DT previews, is to wait until the NDAs lift.

RE: Too little too late
By 457R4LDR34DKN07 on 4/27/07, Rating: -1
RE: Too little too late
By nkumar2 on 4/28/2007 1:35:49 AM , Rating: 2
OK, I am not a fanboy, but I do hate it when people say the R600 lineup is trash just because it couldn't beat the 8800 in today's games. I wouldn't be surprised if it catches up in a few months with tweaked drivers. If you look at the architecture of the chip, you would know it is made for DX10. ATI is not dumb; this is their second-generation unified shader chip, and they are not just going to f*ck it up. I do have an 8800-series card and I'm not a fanboy. I love Intel, AMD and Nvidia because I love competition.

Go search the internet about this review. Everyone out there, even the people at PC Perspective, said in the forum on their website something like this: "the numbers at DailyTech don't show what the hardware is all about." So there you have it, and he was at an R600 briefing from AMD.

This card is the future, and I don't know why people say DX10 games won't be coming for two years. Crysis will be out this year, plus Call of Juarez, the new Company of Heroes in DX10 (that's a whole new game), and Lost Planet for PC, which is DX10. We cannot judge this card on DX9 games, that's all I have to say. I think ATI wanted to release this card when a DX10 game was out, but Crysis has been delayed and delayed, and now they are saying third quarter for sure. But who knows.
I want to see the DX10 battle; we are talking about next gen, people. Who wants to see 120 frames in FEAR, something our eyes can't even perceive? Even the X1950 XTX in my other computer plays almost everything at 1920x1200 at 50 frames in every game. I want a future-proof card. It doesn't matter who comes out on top; whether it's Nvidia or ATI, I will get either one. Just watch the reviews, there is more to the card than just DX9 numbers.

RE: Too little too late
By nkumar2 on 4/28/2007 1:59:42 AM , Rating: 2
Oh yeah, forgot to mention Alan Wake, UT3 and more; you can search online. These games are going to be out in 6-8 months.

RE: Too little too late
By togaman5000 on 4/29/2007 8:26:33 PM , Rating: 2
By the time the DX10 games that truly show us what DX10 can do come out, both companies will probably have new high-, mid-, and low-end parts out, so the only thing that really matters for these particular cards is their DX9 performance.

If you could find a DX10 game that a lot of people play, you could go ahead and test it. Of course, nothing like that exists, so what does it matter that these were meant to be DX10 cards? Cards that come out at a time when only DX9 games exist, and when it will remain that way for at least half a year, are DX9 cards.

RE: Too little too late
By jazkat on 5/2/2007 12:23:07 PM , Rating: 2
That's the thing: if I want a DX9 card I'll get a DX9 card, but AMD was working with Microsoft on R600, so what's the point of DX9 benchmarks with DX10 cards? DailyTech may have lost ATI money now because of their stupid benchmarks. It's b***s**t, as AMD partners were only given 12.5" GDDR3 XTX samples, so what makes you think DT had a card that wasn't available??

Also, the R600 will smash the GTX in DX10, simple as that.

RE: Too little too late
By HurleyBird on 4/27/2007 10:52:04 PM , Rating: 2
I'm still waiting to hear what card you used as the only card with those specs is water cooled. Do you have an unreleased 8800?

RE: Too little too late
By KristopherKubicki on 4/28/2007 6:48:01 AM , Rating: 2
I'm still waiting to hear what card you used as the only card with those specs is water cooled. Do you have an unreleased 8800?

As I said in the post that you are replying to, it's not watercooled.

RE: Too little too late
By HurleyBird on 4/29/2007 3:56:00 AM , Rating: 1
That's not the question. I believe you that the card isn't water cooled. However, since *no other* card has those clocks, I'm wondering if you used an unreleased card to test against the 8800, or if there is just a mistake in the clocks you reported.

RE: Too little too late
By Chillin1248 on 4/29/2007 4:50:30 AM , Rating: 3
No other card that you know of; however, there are many obscure brand names you probably don't recognize that are Nvidia OEMs.

Point of View

Just to name a few. So while I don't claim to know the exact vendor whose overclocked card they used, I also don't doubt it without knowing more about the card.

