


2.0 GHz memory frequencies? No problem.

Anh beat me to the R600 benchmarks by a mere few hours -- when you snooze, you lose at DailyTech. Needless to say, I feel compelled to add my benchmarks to the mix as well.

The system I'm using is an Intel Core 2 Extreme QX6800 on an ASUS P5N32SLI-E with 2x2GB of DDR2-800. My tests this morning used the Catalyst 8.361 RC4 driver. The card used was a Radeon HD 2900 XT 512MB.

Core Clock | Memory Clock | 3DMark06 (1280x1024) | 3DMark06 (1600x1200)
745 MHz    | 800 MHz      | 13547                | 10978
800 MHz    | 900 MHz      | 13855                | 11164
845 MHz    | 950 MHz      | 13908                | 11219
845 MHz    | 995 MHz      | 14005                | 11266

Like Anh, I was able to get pretty close to a 2.0GHz memory clock while still keeping the system stable.  For reference, my GeForce 8800 GTX (core clock at 650 MHz, memory at 2.0 GHz) scores 14128 (1280x1024) and 11867 (1600x1200) on the same system with ForceWare 158.19. 
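For anyone curious about how the scores scale with the clocks, here is a quick Python sketch that works out the percentage gains from the table above (the clocks and scores are copied straight from it; the script itself is just illustrative arithmetic):

def pct(new, old):
    return 100.0 * (new - old) / old

# (core MHz, memory MHz, 3DMark06 at 1280x1024, at 1600x1200), copied from the table above
runs = [
    (745, 800, 13547, 10978),  # reference clocks
    (800, 900, 13855, 11164),
    (845, 950, 13908, 11219),
    (845, 995, 14005, 11266),
]

base = runs[0]
for core, mem, s1280, s1600 in runs[1:]:
    print(f"core +{pct(core, base[0]):.1f}%, memory +{pct(mem, base[1]):.1f}% -> "
          f"3DMark06 +{pct(s1280, base[2]):.1f}% (1280x1024), +{pct(s1600, base[3]):.1f}% (1600x1200)")
# the last row works out to roughly +13% core / +24% memory for only about a +3% score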

I'm currently benchmarking the Radeon HD 2900 XTX, though I'll revisit the XT if anyone has any particular requests.


Comments

so...
By yacoub on 4/24/2007 11:38:27 PM , Rating: 1
So if an 8800 GTX with its much lower clock speed can outperform even the most highly clocked overclock of the 2900 XT, I'd much rather take the quieter, cooler-running 8800, especially after the R600's competition helps drive prices down a bit.




RE: so...
By CyborgTMT on 4/25/2007 12:12:39 AM , Rating: 2
So an overclocked 8800 GTX can outperform a reference-design, overclocked 2900 XT that will cost $100-150 less, by a mere 123 and 601 points respectively, on beta drivers; will anyone act like a fanboy about it... guess so.

BTW, please inform us of how you know that the 2900s are louder and hotter?

The only thing I agree with is the hope this drops prices so I can upgrade my SLI setup.


RE: so...
By yacoub on 4/25/2007 7:24:50 AM , Rating: 1
I'm sorry your fanboyism got in the way of your understanding what I wrote. I'll try again:

Quote:
"overclocked 2900XT:
845 MHz
995 MHz
14005
11266

overclocked GeForce 8800 GTX (core clock at 650 MHz, memory at 2.0 GHz) scores 14128 (1280x1024) and 11867 (1600x1200) on the same system with ForceWare 158.19. "

And I said in response to those numbers: so a GTX that is clocked lower (650 MHz is a lower core clock than 845 MHz, right?) will run cooler yet perform better. I'd rather have the card that is clocked lower, will last longer, is less likely to artifact, and uses a quieter cooler.

I'm not sure how any of that is being a fanboy on my part; it's simply a logical statement. But I like that in your fanboy rage you immediately try to label the other person a fanboy for simply posting a logical position. Grow up.


RE: so...
By CrystalBay on 4/25/2007 7:43:11 AM , Rating: 2
why don't you stay in nvidia Yankoff...


RE: so...
By DingieM on 4/25/2007 8:04:40 AM , Rating: 2
Wake up.

The shaders in the G80 generation run at almost double the main clock, at approximately 1.35 GHz.
That is where the G80 gets its speed.

All components inside the R6xx run at the main clock.

Conclusion: you cannot compare these two generations of cards on the main clock frequency alone.

Wake up.
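A tiny Python sketch of that point, using the commonly published stock clocks for each card (those figures are assumptions here, not numbers taken from this article):

# Rough sketch: compare shader-domain clocks, not just core clocks.
cards = {
    "GeForce 8800 GTX (G80)":   {"core_mhz": 575, "shader_mhz": 1350},  # separate shader domain
    "Radeon HD 2900 XT (R600)": {"core_mhz": 742, "shader_mhz": 742},   # shaders run at the core clock
}

for name, clocks in cards.items():
    ratio = clocks["shader_mhz"] / clocks["core_mhz"]
    print(f"{name}: core {clocks['core_mhz']} MHz, shaders {clocks['shader_mhz']} MHz "
          f"({ratio:.2f}x the core clock)")
# so a lower "core clock" on the G80 says little about how fast its shaders actually run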


RE: so...
By Sunday Ironfoot on 4/25/2007 9:04:36 AM , Rating: 1
quote:
And I said in response to those numbers: so a GTX that is clocked lower (650 MHz is a lower core clock than 845 MHz, right?) will run cooler yet perform better. I'd rather have the card that is clocked lower, will last longer, is less likely to artifact, and uses a quieter cooler.


The 8800 is manufactured at 90nm whereas the new 2900 XT/XTX cards are rumoured to be manufactured at 65nm, thus they will be much smaller. Smaller = less power consumption = less heat = less noise.

Assuming the rumours are true the 2900 series will run cooler than the 8800 series.

Also, the 2900 XT is supposed to be compared to the 8800 GTS, not the GTX, as that's the price point the 2900 XT is aiming for. Wait for the XTX benchmarks and see how they compare with the 8800 GTX.


RE: so...
By Lakku on 4/25/2007 11:44:16 AM , Rating: 1
Exactly, everything you just said is based on rumor. It's a fact that the 8800 GTX draws about 175 to 185 watts at load. With all the rampant speculation on almost every website about the R600 and its power requirements, how can every ATi fanboy ASSUME everything will change next month? This could be the reason for the delay, but no one, from ATi or otherwise, has disputed the documentation FROM ATi stating the R600 will use 240 to 270 watts (the XTX version as far as I know, because if the XT draws this much, you are kidding yourself if you think it will do well in the OEM sector). I don't care how you spin it: that means it WILL run hotter than an 8800 should it actually use that much power.

But since we are talking about rumor here, some places are also saying the XTX won't come out until Q3. It doesn't matter if that source is reliable or not, since you people seem to believe everything good you hear but require proof when you hear bad. So if that turns out to be true, that puts ATi about 9 months or more behind nVidia, and by then DX10 game(s) WILL be out. If it is delayed, you can bet the farm it was delayed due to power issues, or waiting to get it working on 65nm because it was a power hog.

I think everyone should just wait and see, because as it stands these numbers are NOT impressive, especially if the card is going to be drawing that much power and if it can't muster more out of a frikkin 512-bit bus with "320" stream shaders. The only saving grace is if the rumors turn out to be true that the XT version will be under $400 MSRP, because then it may just be worth it.


RE: so...
By Goty on 4/25/2007 1:28:13 PM , Rating: 2
First of all, consumed power is not directly related to heat output. You have to take efficiency into consideration first. A chip that consumed 1000W but was 100% efficient would have zero heat output, while a chip that consumed 1000W but was only 50% efficient would put out 500W of heat. I agree with your conclusion, but your logic is flawed.

Also, regarding heat output, who cares how hot the card runs? The heat is being exhausted outside the case for the most part, so the temperature of the card has little to no bearing on anything.

Regarding the comment about the XTX being delayed by three months, Sven has stated that he has an XTX in-hand, so that's just utter nonsense.


RE: so...
By Schmide on 4/25/2007 2:01:41 PM , Rating: 2
Conservation of energy!!!

Consumed power is directly related to heat output. The chip is basically a very complex variable resistor, and there is nowhere else for the energy to go. If it uses it, it must go somewhere. It goes to HEAT!!! Efficiency is directly related to power usage, which is directly related to heat output.


RE: so...
By Goty on 4/25/2007 6:04:44 PM , Rating: 1
Exactly: if a chip is 100% efficient, then it will use all the power it is supplied with, leaving none left over to be wasted as heat. Semiconductors are inherently resistive, as you stated, and so some is wasted as heat. The type of semiconductor used, as well as its dopants, affects its current characteristics. Some combinations are more efficient than others due to their relative band gaps, so they will waste less current as heat than other combinations.

Different manufacturing processes also play a part in efficiency because the dielectric used in the transistors gets thinner as the size of the individual transistor goes down. The thinner the dielectric, the more current is allowed to leak out as heat.


RE: so...
By dunno99 on 4/25/2007 8:30:39 PM , Rating: 3
Don't want to start a flame war or anything, but despite you being able to use a bunch of terms from an EE lecture, you're missing the point about where that energy goes. Remember back in physics, a circuit has to use up all the voltage difference between the source and sink (usually ground)?

Since we know that these graphics cards need some finite amount of current (I) at a certain voltage (V), we know that the power it uses is P = IV. And according to the laws of thermodynamics, energy (and power) is conserved.

Therefore, if P = IV watts is used by the graphics card, where does the energy go? Most likely, the card doesn't have a huge battery or capacitor storing up the energy just to mess with you. So everything turns into heat in the end (we're not talking about heat death here, just the usage of the video card). Therefore, if a card consumes 1000W of power, it will output 1000W of heat, no matter what. Efficiency just means how much of that power is used to do something "useful."

As a final note, I'll just use an example. Assume you have an adder circuit that absolutely requires 1V at 1A. The power it uses is 1W; its efficiency, we would say, is 100%. However, if you put a huge resistor rated at 1 Ohm in front of the adder (in series), then you would need to raise the voltage input to 2V for the adder to do its job. Now we know that of the 2W you put in, only 1W is doing the actual work, and the rest is wasted. In this case, the efficiency is 50% for the entire circuit.
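To make the arithmetic above concrete, here is a minimal Python sketch of the same adder-plus-series-resistor example (the voltages and currents are the hypothetical ones from the comment, not measurements):

# Minimal sketch: all the power a circuit draws ends up as heat; "efficiency" only
# says how much of that power did useful work on the way.
def power_watts(volts, amps):
    return volts * amps  # P = I * V

useful = power_watts(1.0, 1.0)  # the adder alone: 1 V at 1 A -> 1 W of useful work
total = power_watts(2.0, 1.0)   # with the 1-ohm series resistor: 2 V at 1 A -> 2 W drawn

efficiency = useful / total     # 1 W useful out of 2 W drawn = 50%
heat = total                    # conservation of energy: all 2 W are eventually dissipated as heat

print(f"efficiency: {efficiency:.0%}, total heat dissipated: {heat:.0f} W")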


RE: so...
By Schmide on 4/25/2007 8:59:32 PM , Rating: 2
I really suck at this efficiency crap, but wouldn't any semiconductor be considered 0% efficient, because it basically does no work? When you say a light bulb is, say, 90% efficient, that means it converts 90% of its energy to light and 10% to heat. A semiconductor produces no light or kinetic movement, so it is basically 100% inefficient and contributes to a circuit exactly the same as a resistor.


RE: so...
By sxr7171 on 4/26/2007 1:40:24 AM , Rating: 2
Yes. Everything becomes heat. There is no "work" in the physical sense.


RE: so...
By dunno99 on 4/26/2007 9:57:37 AM , Rating: 2
Just like sxr7171 said, work is really just how you define it. If you wanted a heater, then a video card is pretty much 100% efficient (discounting generated electromagnetic radiation and whatnot).

In the case of a graphics card, we would consider the ideal amount of power to perform computations and information exchange at the specs of the graphics card, and compare that figure to what it's actually using.


RE: so...
By KCvale on 4/26/2007 10:55:07 AM , Rating: 2
An incandescent light bulb is only about 5% efficient (the light); the other 95% of the energy is wasted as heat.
That's why the push for the world to change to the new coiled fluorescent bulbs.
They are over twice as efficient, putting about 12% of their power into light while using less power (a 20W fluorescent puts out as much light as a 75W bulb), and they last 5 times longer.

A semiconductor is an "active" device; the "work" it performs is the millions of logic gates inside a chip being switched on and off millions of times a second and held in a given state (on or off) the entire time.

Resistors are "static" devices that simply restrict the flow of DC current by dissipating it as heat (in series) or redirect the current flow (and voltage) when in parallel.

Nothing involving electricity is 100% efficient, not even wire.
By using better materials and lower voltages, semiconductors are becoming more efficient, but they always produce some waste in the form of heat.


RE: so...
By Howard on 4/25/2007 9:24:53 PM , Rating: 2
Jesus Christ. All the power used by a chip is turned into heat.


RE: so...
By johnsonx on 4/26/2007 5:23:37 PM , Rating: 1
Actually, I think the poster's name was 'Goty'. I haven't seen Jesus Christ here in a while.


RE: so...
By AntDX316 on 5/18/2007 4:10:33 AM , Rating: 2
The Ultra is probably secretly 80nm or 65nm.


RE: so...
By 91TTZ on 4/25/2007 1:07:42 PM , Rating: 1
You can't compare two different chips like that. How do you know it will run cooler? One chip at 1 GHz may run cooler than another chip at 700 MHz. You just can't tell without seeing the results.


RE: so...
By AnotherGuy on 4/25/2007 12:50:37 AM , Rating: 2
Dude, what are you talking about? The GTX at stock is 1800 MHz, not 2000. Some companies did 2 GHz versions of the GTX, but that's not the stock speed; other companies will do the same with the XT and maybe get it close to 2000.

I think you've got it wrong right now, yacoub... you're thinking of the XT as the XTX, maybe, because at $450 you're getting the performance of the $550 GTX. I think you should be bashing nVidia and their drivers instead of AMD. I know it's late and all that, but it doesn't matter. If you waited for something better than the GTX, then get the damn XTX for whatever it costs... and keep quiet. Damn whiners :P


RE: so...
By eman 7613 on 4/29/2007 7:29:07 PM , Rating: 2
Cooler? My 8800 GTX runs at 73C during gameplay while underclocked, and the damn thing makes an annoying whistle whenever it does anything, which drives me nuts.


Grrrrr
By caboosemoose on 4/24/07, Rating: 0
RE: Grrrrr
By KristopherKubicki (blog) on 4/24/2007 4:57:27 PM , Rating: 4
Well, you can see the 2900 XT benchmarks we did yesterday ( http://dailytech.com/ATI+Radeon+HD+2900+XT+Perform... ), though I admit the numbers would be more relevant with higher resolution displays. Sven is working on the XTX cards right now, and afterward I'll make sure he retests the XT cards as well.


RE: Grrrrr
By CyborgTMT on 4/24/2007 10:18:28 PM , Rating: 2
He couldn't even figure out that this was about overclocking, let alone see the link in the article. I doubt he'll figure out how to follow your re-posted link as well, so I'll do him and other people who don't understand internets a favor.

From Anh's article:

quote:
Call of Duty 2 - Anisotropic filtering, 4xAA (in game), V-Sync off, Shadows enabled, a high number of dynamic lights, soften all smoke edges and an insane amount of corpses.
75.3 FPS

quote:
Company of Heroes - High shader quality, High model quality, Anti-aliasing enabled (in game), Ultra texture quality, high quality shadows, high quality reflections, Post processing On, High building detail, High physics, high tree quality, High terrain detail, Ultra effects fidelity, Ultra effects density, Object scarring enabled and the model detail slider all the way to the right.
92.1 FPS

quote:
F.E.A.R. - 4x FSAA (in game), maximum light details, shadows enabled, maximum shadow details, soft shadows enabled, 16x anisotropic filtering, maximum texture resolution, maximum videos, maximum shader quality.
84.0 FPS

quote:
Half Life 2: Episode 1 - High model detail, high texture detail, high shader detail, reflect all water details, high shadow detail, 4x multi-sample AA (in-game), 16x anisotropic filtering, v-sync disabled, full high-dynamic range.
112.0 FPS

Anh didn't say what Oblivion settings were used but it clocked in at 47.9 FPS.

Personal wish - any chance of Crossfire scores?


RE: Grrrrr
By CrystalBay on 4/25/2007 3:54:48 AM , Rating: 2
Hi Sven,

can we see any kind of STALKER benches maxed out, with gameplay observations?

thanks...


RE: Grrrrr
By JoKeRr on 4/25/2007 5:13:07 AM , Rating: 2
Could you possibly comment on power consumption?


RE: Grrrrr
By caboosemoose on 4/25/07, Rating: 0
RE: Grrrrr
By GlassHouse69 on 4/25/2007 10:35:33 PM , Rating: 1
Yeah, synthetic benches are meaningless in a world of actual games.

There's no point in them. Not knocking the people at AnandTech at all! I just think they are a waste of time. People do synthetic tests to determine how good something is when they cannot test it directly. Games exist, so there's no need for synthetics. Plus, games are the end goal and have uniform settings.

That being said, I have one request:

Show this card on real systems. I don't think people will be buying the chip you used. I know it stops the CPU from binding up the scores, but really, most people on here have a 2.2-2.4 GHz Athlon 64, dual core or not even dual core yet. Give us a try on a regular Socket 939 or even AM2 setup, I guess. That chip could give any video card an extra 10-15 frames at those resolutions.

It is curious why the people testing here haven't gone the full route and tested all the games at various settings... kinda fishy. They could have the best spot on the web for tech geeks. Did someone from ATI ask you not to do high resolutions in normal games until a new driver is out? Hm.


RE: Grrrrr
By Meaker10 on 4/25/2007 11:43:30 PM , Rating: 2
Are you two blind? While the numbers themselves are not so important, the change IS. If a card scores 30% less when you up the resolution, then it's likely the card gets 30% less FPS in games when you up the resolution. It really is not hard, is it?


R600XT bench request
By w0mbat on 4/24/2007 4:30:33 PM , Rating: 2
Could you do a bench with GPU DIP? This would be very nice :-)

You can download it here: http://rapidshare.com/files/26304213/gpu_dip.rar.h...

Big thx!




RE: R600XT bench request
By xG on 4/25/2007 7:40:47 AM , Rating: 2
RE: R600XT bench request
By KristopherKubicki (blog) on 4/26/2007 3:44:18 PM , Rating: 6
RE: R600XT bench request
By w0mbat on 4/26/2007 6:16:45 PM , Rating: 2
Which OS and which clocks? Thx!


But can you display the frames?
By KCvale on 4/25/2007 12:58:32 PM , Rating: 1
I just have to laugh at some of you guys chomping at the bit to get super-fast video cards capable of high-res 100 FPS displays, overclocking for even more, etc. You can't display those frames.
Let me rephrase: your monitor can't display that many frames per second.

ALL monitors, CRT or LCD/plasma, have a maximum screen refresh rate in Hertz for each screen resolution.
60 Hz was the standard for years, and that means 60 frames per second, period.
If you are running your monitor at 60Hz, your video card could be pumping out 200 FPS and you're still only going to get 60 FPS actually displayed.

Granted, no self-respecting video gamer runs at 60Hz, but what is the most your monitor can put out?

My trusty old 21" Optiquest Q115 CRT can run 32-bit color at 1280x1024 @ 85Hz, and maxes out resolution-wise at 1600x1200 @ 72Hz.
That means my GeForce 7950GT 512MB video card already puts out more than my monitor can display in the games I play, like Flatout 2 (a driving game, kinda like Dukes of Hazzard on steroids).

For you flat-panel users, there is yet another issue to contend with besides refresh rate: response time.
Measured in milliseconds (ms), this is the time it takes the monitor to change the states of all screen elements.
1 second is 1000 ms. The max refresh rate you can run without ghosting is 1000 / (your monitor's response time in ms).

For example, if your monitor's response time is 16ms, it makes no difference if it can go to 72Hz; all you can display without ghosting, because the monitor can't change all the pixels in time, is about 60Hz, hence 60 FPS.
And don't think 2ms monitors are any better; in fact, they are a scam.
To actually benefit from a 2ms response time, your monitor would also have to be capable of a 500Hz refresh rate!

I think any flat screen with a response time faster than 8ms (about 120 FPS) is just a waste of money, and if the monitor's refresh rate for the resolution you want to run is something like 72Hz, that is overkill too; 10ms (100 FPS) would do.
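A quick Python sketch of the rule of thumb above, 1000 ms divided by the panel's response time (the response times below are just the example figures from this post):

# Fastest ghost-free refresh a panel can manage, from its response time.
def max_ghost_free_hz(response_time_ms):
    return 1000.0 / response_time_ms  # 1000 ms per second / time to change every pixel

for ms in (16, 10, 8, 2):
    hz = max_ghost_free_hz(ms)
    print(f"{ms} ms response time -> about {hz:.0f} Hz ({hz:.0f} FPS) before ghosting")
# 16 ms -> ~62 Hz, 10 ms -> 100 Hz, 8 ms -> 125 Hz, 2 ms -> 500 Hz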

My point is, before you start spending hundreds on a video card and overclocking already-fast ones, take a serious look at how many FPS your monitor can actually display at a given resolution.
I think you'll find that all these new video cards make for impressive benchmarks, but it's still like putting mag wheels on a '72 Pinto.
If you have a fast system, you aren't going to actually see any difference until monitors catch up.

KC
http://vales.com/elite




RE: But can you display the frames?
By therealnickdanger on 4/25/2007 1:10:37 PM , Rating: 3
Nice breakdown of what we already know. That's not the point, however. These new GPUs offer, or at least have the potential to offer, other enhancements such as physics acceleration, HDMI audio output, DX10 effects, as well as serious number-crunching for folks who want/need it.

The fact that a graphics card can reach 145 FPS in Quake 4 might be irrelevant compared to the output capabilities of a monitor, but people want consistent FPS without dips or drops. If that means maintaining an average or max of 145 FPS for the sake of achieving a 63 FPS minimum, so be it. In addition, not all games are created equal. Getting 145 FPS in Quake 4 means nothing if you play R6: Vegas or Oblivion.

The hardware is justified by personal preference and usage.


RE: But can you display the frames?
By KCvale on 4/26/2007 10:22:43 AM , Rating: 2
Oh, I agree; DX10 and HDTV support will be important eventually, just not yet.

My point is: why overclock a video card that already puts out more solid FPS in the games you play than you can display?

Besides "benchmark bragging rights" and the "because I can" argument, redlining an expensive component's tolerances for no usable gain is just foolish IMO.


RE: But can you display the frames?
By Goty on 4/25/2007 1:34:34 PM , Rating: 4
Being able to run at a framerate greater than your refresh rate is important to providing smooth gameplay. Enabling vsync when your card is outputting frames faster than your monitor can display them ensures smooth gameplay with no tearing.

I don't know about the rest of the gamers out there, but that's pretty important to me.


By togaman5000 on 4/29/2007 8:06:19 PM , Rating: 2
The only thing I have to say:

minimum frame rate.

I use vsync (in my opinion it makes gameplay smoother /shrug), so my 8800 GTS and my old X700 Pro could both do 60 FPS in World of Warcraft. However, my minimum frame rate on the X700 Pro was in the 20s; with the 8800 GTS, it's 55. It's not the max frame rate that my eye notices and that affects gameplay, it's the minimum, so I'd rather have a GPU that can keep the minimum FPS at a number my eye won't mind.
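A small Python sketch of why the minimum matters more than the average; the frame-time trace below is made up purely for illustration:

# Hypothetical one-second frame-time trace (ms): mostly smooth frames plus a few long ones.
frame_times_ms = [16.7] * 57 + [40.0, 45.0, 50.0]

avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)  # average over the whole trace
min_fps = 1000.0 / max(frame_times_ms)                        # the single slowest frame

print(f"average: {avg_fps:.0f} FPS, minimum: {min_fps:.0f} FPS")
# the average still looks like ~55 FPS, but the stutters drag the minimum down to 20 FPS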


512-bit memory interface & OC'ing...
By EastCoast on 4/24/2007 8:15:00 PM , Rating: 2
It appears to be a good combination. I am starting to wonder why I need to wait for the HD 2900 XTX at this point. There is no laundry list of DX10 games on the horizon except for Crysis and Halo 2 (I had enough of that with the Xbox), so if the XTX is faster, you are actually paying for it. I mean, right now my X1900 XTX gets 7168 overclocked:
747/850
E6700 OC'd to 3.0 GHz
Catalyst 6.3




RE: 512-bit memory interface & OC'ing...
By swtethan on 4/24/2007 9:10:31 PM , Rating: 2
The reason for the HUGE score is the quad core; that site used only a dual core. 3DMark06 uses quad cores very well.


By EastCoast on 4/24/2007 9:37:48 PM , Rating: 2
^^^
here you go (an example right here)
http://tinyurl.com/2az4vb
Intel Core 2 Quad 2667 MHz
3dmark06 11900
SM 2.0 4893
SM 3.0 4981
CPU 3956


By therealnickdanger on 4/24/2007 9:44:03 PM , Rating: 3
Halo 2 is not DX10, just Vista-exclusive. It's still only DX9. So yeah, Crysis is about it. Some current games will get updates though (Company of Heroes).


Hmm
By therealnickdanger on 4/24/2007 3:35:22 PM , Rating: 2
Part of me says wow: the 2900XT comes within spitting distance of the 8800GTX on the same platform. Most intriguing. However, you would think that with a 512-bit interface and a 200MHz core advantage, the 2900XT would surpass the 8800GTX. Doesn't the 2900XT surpass the 8800GTX in stream processors? I know the NVIDIA SPs are clocked higher though...

What resolution/settings were you testing? Anh scored an 11447 on his system @ 1280x1024.

I am stoked to see some 2900XTX numbers! Hurry up! :P




RE: Hmm
By KristopherKubicki (blog) on 4/24/2007 4:35:14 PM , Rating: 2
I asked Sven to add the higher resolutions, including the SM2.0 and SM3.0 tests.


RE: Hmm
By therealnickdanger on 4/24/2007 10:03:47 PM , Rating: 2
Wow, it's looking awesome now! Thanks for updating it, Sven.


RE: Hmm
By someguy123 on 4/25/2007 1:40:03 AM , Rating: 2
It could be driver related, as NVIDIA has had quite a bit of time to do specific tweaks for the 8800.

Specs aren't always everything, but I agree this card should be outperforming the 8800, even if only slightly.


Are these XT or XTX numbers?
By TOAOCyrus on 4/24/2007 5:13:16 PM , Rating: 2
Are these XT or XTX numbers?




RE: Are these XT or XTX numbers?
By PrezWeezy on 4/24/2007 6:32:08 PM , Rating: 2
He said XT, and he is working on testing the XTX right now.


I want one...
By PrezWeezy on 4/24/2007 6:33:33 PM , Rating: 2
Normally I'm not a big ATI fan, but these numbers are very impressive. Maybe I'm missing it, but I haven't seen any release dates for this. I think I read it in one of the articles, but I can't remember which one now. Anyone have an idea of when this will be available through OEMs?




RE: I want one...
By EastCoast on 4/24/2007 9:41:33 PM , Rating: 2
Supposed to be mid-May, from what I've read around the net. Not sure if it's true or not.


By Chocolate Pi on 4/24/2007 3:27:28 PM , Rating: 3
This is rather exciting; maybe we can get Kris to offer fabulous prizes to the first person to quell the angry mobs, lest they impale you guys on 1280x1024 monitors first.




impressive
By scrapsma54 on 4/24/2007 5:56:15 PM , Rating: 3
Seeing how much lower the XT sits relative to the GTX, and it can still put up those numbers. The GTX has 256MB more memory, and that's all the GTX has over the XT. The XTX has double the memory and memory speed of the XT and double the bandwidth of the GTX. Move aside, NVIDIA, I am not supporting you this series.




Any XTX results today?
By sol on 4/24/2007 5:02:15 PM , Rating: 2
It's 11 PM in euroland now; is it worth staying up a couple of hours and waiting for some results? :-D

Please please please!!




CPU
By Hare on 4/24/2007 5:21:34 PM , Rating: 2
Could you kindly also include more detailed CPU info? OK, it's a QX6800, but the MHz info would be nice since it has a huge impact on the 3DMark06 score. Looking forward to seeing more test results!




Temps?
By palindrome on 4/24/2007 8:08:54 PM , Rating: 2
I'm curious as to the temps when you were benching and what program you used to overclock. Thanks!




By TwistedA on 4/25/2007 3:48:00 AM , Rating: 2
I have a request for the benchmark SPECviewperf 9.03.

You can download it from here:

http://www.spec.org/gpc/downloadindex.html

I still don't believe that a gaming card outperforms a Quadro/FireGL.

Could you test the new AMD/ATI Radeon HD 2900 XT and Radeon HD 2900 XTX cards with that benchmark, please?
According to this article, http://dailytech.com/Article.aspx?newsid=7043 , Anh Huynh ran some of the tests in that benchmark but not all.
Could you run all of the tests in SPECviewperf 9.03?

For comparison, see this site:
http://www.spec.org/gpc/opc.data/vp9/summary.html

Thanks




Thank you very, very much!
By Kougar on 4/25/2007 11:20:32 AM , Rating: 2
This was exactly the info I was curious to see! Honestly, I was expecting some bigger gains with a 100 MHz core increase...

What core clock does the XTX use, and how high does that one go?

Did you get Folding@Home to work on either card, and if so what was the PPD? ;)




"We are going to continue to work with them to make sure they understand the reality of the Internet.  A lot of these people don't have Ph.Ds, and they don't have a degree in computer science." -- RIM co-CEO Michael Lazaridis

Related Articles
















botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki