


Details of AMD's next generation Radeon hit the web

Newly created site Level505 has leaked benchmarks and specifications of AMD’s upcoming ATI R600 graphics processor. The graphics processor is expected to launch in January 2007, with a revised edition arriving in March 2007. These early specifications and launch dates line up with what DailyTech has already published and are present on ATI internal roadmaps as of workweek 49.

Preliminary specifications from Level 505 of the ATI R600 are as follows:
  • 64 4-Way SIMD Unified Shaders, 128 Shader Operations/Cycle
  • 32 TMUs, 16 ROPs
  • 512 bit Memory Controller, full 32 bit per chip connection
  • GDDR3 at 900 MHz clock speed (January)
  • GDDR4 at 1.1 GHz clock speed (March, revised edition)
  • Total bandwidth 115 GB/s on GDDR3
  • Total bandwidth 140 GB/s on GDDR4
  • Consumer memory support 1024 MB
  • DX10 full compatibility with draft DX10.1 vendor-specific cap removal (unified programming)
  • 32FP [sic] internal processing
  • Hardware support for GPU clustering (any x^2 [sic] number, not limited to Dual or Quad-GPU)
  • Hardware DVI-HDCP support (High Definition Copy Protocol)
  • Hardware Quad-DVI output support (Limited to workstation editions)
  • 230W TDP PCI-SIG compliant
This time around it appears AMD is going for a different approach by equipping the ATI R600 with fewer unified shaders than NVIDIA’s recently launched GeForce 8800 GTX. However, the unified shaders found on the ATI R600 can complete more shader operations per clock cycle.

ATI's internal guidance states the R600 will have 320 stream processors at launch; 64 4-way unified shaders account for only 256 of these stream processors.

Level505 claims AMD will offer the ATI R600 with both GDDR3 and GDDR4 memory, with the GDDR3-equipped model launching in January. Memory clocks have been set at 900 MHz for GDDR3 models and 1.1 GHz for GDDR4 models. As recently as two weeks ago, ATI roadmaps indicated the GDDR3 launch had been canceled. Those same roadmaps put the R600 production date at February 2007, which would fall after a January 22nd launch.

Memory bandwidth of the ATI R600 is significantly higher than that of NVIDIA’s GeForce 8800 series. Total memory bandwidth varies from 115GB/s on GDDR3-equipped models to 140GB/s on GDDR4-equipped models.
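
For reference, the quoted totals follow directly from the leaked 512-bit bus and the memory clocks if the interface is treated as double data rate; a minimal sketch of that arithmetic (the doubling is our assumption, not something Level505 spells out):

```python
# Back-of-envelope bandwidth check for the rumored R600 memory configurations.
# Assumes the leaked 512-bit bus and a double-data-rate memory interface, so the
# effective transfer rate is twice the quoted memory clock.

def bandwidth_gb_s(bus_width_bits, mem_clock_mhz, data_rate=2):
    """Peak theoretical bandwidth in GB/s (1 GB = 10^9 bytes)."""
    bytes_per_transfer = bus_width_bits / 8
    transfers_per_second = mem_clock_mhz * 1e6 * data_rate
    return bytes_per_transfer * transfers_per_second / 1e9

print(bandwidth_gb_s(512, 900))   # GDDR3 at 900 MHz  -> ~115.2 GB/s
print(bandwidth_gb_s(512, 1100))  # GDDR4 at 1.1 GHz  -> ~140.8 GB/s
```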

Other notable hardware features include support for quad DVI outputs, though utilizing all four outputs is limited to FireGL workstation edition cards.

There’s also integrated support for multi-GPU clustering technologies such as CrossFire. The implementation on the ATI R600 allows any number of R600 GPUs to operate together in powers of two. Expect multi-GPU configurations with more than two GPUs to be available only for workstation markets, though.

The published results are very promising with AMD’s ATI R600 beating out NVIDIA’s GeForce 8800 GTX in most benchmarks. The performance delta varies from 8% up to 42% depending on the game benchmark.

When DailyTech contacted the site owner to get verification of the benchmarks, the owner replied that the benchmark screenshots could not be published due to origin-specific markers that would trace the card back to its source -- the author mentioned the card is part of the Microsoft Vista driver certification program.

If Level505's comments seem a little too pro-ATI, don't be too surprised.  When asked if the site was affiliated in any way to ATI or AMD, the owner replied to DailyTech with the statement that "two staff members of ours are directly affiliated with AMD's business [development] division."


Comments



By Comdrpopnfresh on 12/30/2006 11:41:45 PM , Rating: 2
Your average electrical outlet supplies 10 amps. An average 10 amps times an average 115 volts = 1150 watts. I know they just came out with 1000-watt PSUs, but you figure if you have a system with so many parts that can draw that much power, there's going to be a surge protector to plug it into, which has efficiency issues of its own, let alone the PSU itself, which cannot be 100% efficient. Figure a combined loss of 20% from both (which would be extremely good), and you're left with 920 watts. Then you take out a 65-watt processor (which will undoubtedly be overclocked, so figure at least 120 watts, as efficiency drastically goes down as clock speed and voltage increase). Now you have 800 watts. Take away another 60-150 for the overclocked motherboard and RAM (now you have 650). This isn't counting a high-end sound card with its own RAM, or the essential RAID array, fans or optical drives, so let's assume you're left with 400 watts. And where is there room for 2 of these behemoths (and I say this because with this type of video card, we might see the first triple-slot design)? Also, people tend to plug a monitor (23 watts with the most efficient of LCDs), a lamp, and anything else into the same outlet. Eventually the graphics industry is going to have to change its ways like the processor manufacturers did months back, or computers will have to be hardwired (or maybe plugged into) a 220 line.
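
(For the curious, the chain of subtractions above reduces to the sketch below, using the comment's own figures; the 10-amp outlet premise is disputed in the replies that follow.)

```python
# The comment's back-of-envelope power budget, using its own (disputed) numbers:
# a 10 A / 115 V outlet and a combined ~20% loss in surge protector + PSU.

outlet_watts = 10 * 115             # 1150 W available at the wall (as claimed above)
after_losses = outlet_watts * 0.80  # ~20% combined surge protector + PSU loss -> 920 W
after_cpu    = after_losses - 120   # overclocked CPU estimate -> 800 W
after_board  = after_cpu - 150      # motherboard + RAM upper estimate -> 650 W

print(after_losses, after_cpu, after_board)  # 920.0 800.0 650.0
```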




By ajfink on 12/30/2006 11:52:41 PM , Rating: 2
What is the TDP of the 8800? Does anyone know off-hand?

A 1kw PSU can handle two of these things fine in a modern system. Luckily most other computer parts have been DECREASING over the past two years in the amount of energy they require.


By JumpingJack on 12/31/2006 12:02:51 AM , Rating: 2
Careful, a PSU steps down the voltage. For example, take the amperage drawn by a 12-volt rail, say 60 amps: the power is 720 watts, but at the wall, 110 volts at the socket, that corresponds to about 6.5 amps.... care must be taken when thinking of power at the wall vs. power at the PSU when calculating amperage. If a computer really drew 60 amps, it would require a 3-phase, 220-volt line to the socket :) .... most residential circuits break at 15 amps for typical usage; some break at 30 amps if they feed a utility room or garage.

Jack
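
(The wall-versus-rail point reduces to one line of arithmetic, using the comment's own example numbers and ignoring PSU efficiency as the comment does.)

```python
# Same load expressed at the PSU's 12 V rail and at a 110 V wall socket.
rail_watts = 12 * 60          # 60 A on the 12 V rail -> 720 W
wall_amps = rail_watts / 110  # the same 720 W drawn at 110 V -> ~6.5 A
print(rail_watts, round(wall_amps, 1))
```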


By masher2 (blog) on 12/31/2006 12:09:52 AM , Rating: 2
> "Your average electrical outlet supplies 10 amps"

No, it supplies anywhere from 15 to 30 amps....though UL standards prohibit pulling more than 15 amps from any single plug (with the standard NEMA 5-15 socket). That puts a hard limit of 1500 watts on your draw.


By Spoelie on 12/31/2006 12:29:53 AM , Rating: 2
or 8 amps out of your general 220v socket
that's 1760w


By carl0ski on 1/1/2007 5:27:04 PM , Rating: 1
hmm you americans and the stupid single 220V in the laundry

Why? What is the point/benefit of 110V outlets

most countries are 220V
or 240V like here in Aus

PS do the 110 and 220v outlets have different plug formats?


By smackum on 1/1/2007 10:06:15 PM , Rating: 2
Because us Americans prefer not to fry our children when they stick a knife in the outlet ... which I did as a child and lived to tell about it, though my mother tells me the toaster on the same circuit did not:}

Much of the world was on 110V until after WWII, when Europe switched and others followed. Europe did it for efficiency of transmission, since higher voltages lose less energy in transmission (though 50Hz is less efficient than our 60Hz, so I'd love to know why they stuck with that - I do know they chose 50Hz originally because it worked better with the metric system, while 60Hz was chosen by Tesla or Sprague, great engineers for Edison in the early days of EE, though I don't know if there was a specific reason why). Since Europe was in ruins the cost of switching was low, but in the US it would have been very high, so we didn't. We did, however, switch to 220V from the utility pole transformer to the house, converting to 110V within, though major appliances can use the 220V (hence the 220V laundry and stove).

I've no idea how you Aussies ended up at 240v except perhaps you just like to be different and enjoy paying more for your appliances?

Yes, the plugs are different, even within the same voltage in different countries. I don't know, but I suspect that was a primitive form of protectionism for countries' domestic manufacturers, which was all the rage back then.


By Comdrpopnfresh on 1/4/2007 5:10:31 PM , Rating: 2
voltage doesn't kill, amps do. So 110v vs 220 wouldn't make the difference in killing infants. :)

at least, voltage doesn't kill unless we're talking about such a potential difference you might see in a lightning storm...


By mindless1 on 1/7/2007 8:05:58 AM , Rating: 3
Wrong in this context. Voltage kills, amps do not. Yes, it is the current doing the damage, but an unlimited current capability on its own means nothing, while any country's AC power has more than sufficient current capability to kill. The remaining question is how much of that current ends up flowing through the person - determined by the VOLTAGE. So it IS the voltage that kills; the current is just the weapon, per se.


By DocDraken on 1/5/2007 2:14:38 PM , Rating: 2
Actually that's a myth, probably created by american officials to avoid their population feeling bad about having an inferior electrical system. ;)
The real reason the US got stuck with 110V is, as you mentioned, the cost of converting the whole system and the early prevalence of electrical appliances in the US. A short sighted decision in my opinion, because it's been costing you ever since. Maybe also part of the reason for the much higher power use in the States compared to Europe.

Europe uses 230-240V like Australia.

I've gotten shocked by our mains (and so have lots of people I know) and yes it hurts, but it's not going to fry you. Both 110V and 240V are dangerous if you get shocked from arm to arm, and the difference in danger is negligible. Skin resistance and mains voltage determine the amount of amps you get, not the total amp limit on the mains. So you might get slightly more amps with a 240V hit than a 110V hit, but since both can be lethal if you behave stupidly, the difference has no importance. Another interesting phenomenon is that 110V has a higher tendency of locking your grip to the wire, whereas 240V will cause a powerful muscle spasm that throws your hand off the electrical source.

What is important though is the huge difference in efficiency between 110V and 240V. Even more with the utility lines we use in Europe (3 phase 380V) for stoves, dryers etc.

Also, ground fault interrupters that shut off power when they detect even tiny leaks to ground are mandatory in the EU (or at least here in Denmark).
This means that electrical fires (and some types of electrical shocks) caused by shorts and faulty insulation are much reduced - a cause of death and injury far, far higher than direct mains electrical shock.

To get back on subject, we enjoy having 2200-2400W available from each fuse group with only 10 amp fuses and 2860-3120W with 13 amp installations (which are the most common here now). :)



By Hawkido on 1/8/2007 10:48:54 AM , Rating: 2
The voltage has little to do with the muscle spasms. It is the frequency of the voltage; the higher the frequency, the more often the convulsions. The difference between 50 and 60 Hz is negligible. However, the lower the Hz, the greater the tendency of the voltage to cut across the center of the conductor to get to the other side; therefore, the greater the chance of frying the ventricles of your heart. We "Americans" should be thanked. AC was rejected by everyone else (Hell, Tesla tried to give it away for free). Not only did we naturalize (make him a citizen) its inventor and give him a chance, we pioneered AC and made it available to the world. Australia has only contributed the 'Roo burger and Crocodile Dundee.


By masher2 (blog) on 12/31/2006 11:54:44 AM , Rating: 1
Just curious who saw fit to downgrade this post, and the reasons why? Do you really believe the information is wrong...or are you just expressing outrage over my political beliefs, expressed in other threads?


By Comdrpopnfresh on 1/4/2007 5:07:25 PM , Rating: 2
my mistake, I forgot the 15 amp is the residential, and the 20 amp is the kind with the t-shaped vertical slot


By Pwnt Soup on 12/31/2006 8:19:03 AM , Rating: 2
Small correction: the average home outlet is 15 amp, at least in the USA that's the code, while some outlets are 20 amp. Those are usually dedicated for appliance use, such as window A/C units etc...


By nurbsenvi on 1/1/2007 11:57:52 AM , Rating: 2
Well, thank god for the 1150W limit,
because my computer, unlike hair dryers, typically needs to stay on for around 5~24 hours per day rendering 3D!! Imagine the electricity bill! No other electric appliance apart from aircon will match 1150 watts x 24 hours of power usage!!

Manufacturers will have to work their way around this limit somehow not us.


By Hare on 1/4/2007 2:33:28 PM , Rating: 2
Stop believing the PSU PR-machines. A high end gaming rig will eat around 250W (X6800 and 8800GTX). There is absolutely NO need for 1KW power supplies. It's just marketing BS.

Most gamers hardly break the 200W barrier with their overclocked E6600 and X1950XTX.


Different Strategy?
By THEiNTERNETS on 12/30/2006 11:05:55 PM , Rating: 1
"This time around it appears AMD is going for a different approach by equipping the ATI R600 with less unified shaders than NVIDIA’s recently launched GeForce 8800 GTX. However, the unified shaders found on the ATI R600 can complete more shader operations per clock cycle."

Forgive me, but isn't that actually EXACTLY THE SAME strategy that ATi used in their runoff with the 7-series? The whole fewer-pipelines, more-power-per-pipe thing? Not to hate on ATi; this is exactly what I expected based on what they did with R580.




RE: Different Strategy?
By Spoelie on 12/30/2006 11:30:56 PM , Rating: 4
No it is exactly as they said it. The 7-series featured a maximum of 24 complex shaders, while the x19x0 series had 48 or 36 simple shaders. While the x19x0 was undoubtedly faster in the majority of benchmarks and it did have more shader processing power than the 7-series, the wins were close most of the time and the difference in shader power not that substantial - surely not double.

Now we have nvidia with a lot of simple shaders in the 8800 and ati with less, more powerful shaders in the R600. It is too early to tell which of the two choices provide the most aggregate shading power. But what we can say is that the roles from the generation before are definitely reversed.

(complex and simple used in relative terms here..)


RE: Different Strategy?
By THEiNTERNETS on 12/30/2006 11:46:33 PM , Rating: 1
Ah, okay. It makes more sense when you put it that way.

Then again, they are both unified shaders, so when you say "simple vs complex" are we supposed to assume that the real difference has to do with ATi's being 4-way? (what does that mean anyways?)

Seems like that "4-way" property is the key to 64 shaders in any way being able to match up against 128.


RE: Different Strategy?
By Furen on 12/31/2006 12:19:23 AM , Rating: 2
These 4-way shaders are really SIMD shaders that operate on 4 pieces of data at once. This means that you do not have as much granularity (which will lead to part of these units being idle at times) but they probably take less die space (and power) than they would as individual shader units. ATI is probably using the transistors saved on the shader units elsewhere, like improving its memory controller (which is a 1024bit/512bit monster), etc.

Nvidia has twice the number of shader units and twice the clock speed (the shader units on the Nvidia side run at 2x+ the core clock), but they only work on a single operation at a time.

I wouldn't label either of these approaches simple or complex since individual operations are simple for both of these approaches.
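
(A toy illustration of the granularity point made above: a 4-wide SIMD unit issues all four lanes whether or not an instruction needs them, so scalar-heavy shader code leaves lanes idle. The instruction mix below is purely hypothetical.)

```python
# Toy model of vec4 SIMD lane utilization: the unit retires 4 lanes per issue,
# but an instruction needing only 1-3 components wastes the remaining lanes.

def vec4_utilization(instruction_widths):
    """Fraction of issued lanes doing useful work on a 4-wide SIMD unit."""
    useful = sum(instruction_widths)
    issued = 4 * len(instruction_widths)   # every issue occupies all 4 lanes
    return useful / issued

mix = [4, 4, 3, 3, 1, 1]                   # hypothetical vec4/vec3/scalar mix
print(vec4_utilization(mix))               # ~0.67, i.e. two thirds of peak
```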


RE: Different Strategy?
By Spoelie on 12/31/06, Rating: -1
RE: Different Strategy?
By Spoelie on 12/31/2006 10:12:16 AM , Rating: 2
Now that I look at it, the specs say 128 shader (not data) operations per cycle. So while nvidia's shaders run at over twice the clock speed of the rest of the gpu, there is apparently some double pumping going on in ati's shaders as well, besides the fact that they're 4-way.

The only way to know for sure is to get confirmation from ati, I guess, and that won't happen before the NDA dates are reached.


RE: Different Strategy?
By Spoelie on 12/31/2006 12:24:23 AM , Rating: 2
The difference between the two is that Nvidia's shaders are scalar, i.e. they operate on a single 'number' at any given time. ATi's shaders are 4-way/Vect4 (vectors instead of scalars) in the sense that they can operate on 4 numbers at the same time (SIMD - single instruction multiple data). As such, 64 of ATi's shaders AT THEIR PEAK should be equivalent to about 256 of nvidia's scalar shaders.

If you look at it only that way then nvidia wouldn't stand a chance. However, there's also the fact that nvidia runs those shaders at a much higher clock than the rest of the gpu - we don't know how ATi's shaders are configured as of yet, or at what clock speed they are running.

There are also other factors at play: what is the workload, how many shaders are processing vertex data and how many are processing pixel data, how well are the shaders adapted to their respective workloads, are they being well fed, etc. etc.

More details about ATi's configuration will probably only be revealed at the official launch date through tech docs.
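
(To put rough numbers on the peak-rate argument: the G80 figures below are its published specs, 128 scalar units at a 1.35 GHz shader clock, while the R600 shader clock is unknown at this point, so the 750 MHz value is nothing more than a placeholder assumption.)

```python
# Peak scalar-operation rates implied by the discussion above. G80 numbers are
# its published specs; the R600 clock is a placeholder since it isn't known yet.

def peak_ops_per_sec(units, lanes_per_unit, clock_hz):
    return units * lanes_per_unit * clock_hz

g80  = peak_ops_per_sec(128, 1, 1.35e9)  # ~172.8e9 scalar ops/s
r600 = peak_ops_per_sec(64, 4, 0.75e9)   # ~192.0e9, only if every vec4 lane stays busy

print(g80 / 1e9, r600 / 1e9)
```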


RE: Different Strategy?
By otispunkmeyer on 1/2/2007 4:14:40 AM , Rating: 1
the way i understand this is

Ati's shaders being 4-way in theory gives them an upper hand (depending on clocks) like some one said...at their peak and maximum efficiency they should be equiv to 256 of nvidias scalar processors.

but!

the scalar processors NV has will be easier to utilize, they will be more efficient. so yeah Ati's can go 4 ways at once...but it might be harder to keep them working at their peak.

as always, there is usually more than 1 route to the same results, and this is all i see. i expect R600 to be on par with G80. 2 different methods, same outcome...with each having their own little pros and cons.

GDDR3 was expected too. GDDR4 has been used previously by ATi, but I don't think it's ready just yet, and the 900MHz GDDR3 modules seem to have no problems eclipsing 1GHz either.

It will be interesting to see how the bandwidth increases play out. Personally, unless you are sporting Dell's 30-incher I don't think it's going to provide much more performance, and with shader effects getting more complex and more frequent, perhaps memory bandwidth will be less important. I think the massive 115GB/s bandwidth will go under-utilized for much of its early life.


RE: Different Strategy?
By MAIA on 1/8/2007 8:11:22 AM , Rating: 1
quote:
the scalar processors NV has will be easier to utilize, they will be more efficient.


This argument holds no ground. You simply don't know if ATI's shading engine will be easier to utilize or more efficient.


RE: Different Strategy?
By Sharky974 on 12/31/2006 2:10:32 AM , Rating: 2
>>No it is exactly as they said it. The 7-series featured a maximum of 24 complex shaders, while the x19x0 series had 48 or 36 simple shaders. While the x19x0 was undoubtedly faster in the majority of benchmarks and it did have more shader processing power than the 7-series, the wins were close most of the time and the difference in shader power not that substantial - surely not double.

Now we have nvidia with a lot of simple shaders in the 8800 and ati with less, more powerful shaders in the R600. It is too early to tell which of the two choices provide the most aggregate shading power. But what we can say is that the roles from the generation before are definitely reversed. >>

This isn't true at all. ATI, you can look this up, stayed with the exact shader pipe configuration they used with the X800 series. In other words, the 48 shader pipes in R580 are exactly the same as the 16 shader pipes in X800.

And if you'll recall, the 16 pipe X800's were a good match for the 16 pipe 6800's. Pipe for pipe clock for clock, they were almost equal. Therefore one ATI pipe IS a good match for one Nvidia pipe. Although I believe one Nvidia pipe to be slightly faster, we're talking on the order of 5-10% here.

What everybody can't figure out is why the "48 shader" (really, 48 shader pipe, of the exact same configuration as in the R420 series) R580 didn't blow the 24-pipe G71 away. Well, the reason almost certainly (Xbit Labs mentions this in their R580 benchmarks a lot) is that the R580 series is heavily texture limited. It has only 16 (dedicated) TMUs. G71 textures with one of its shader ALUs, so it basically has 24.

So basically it doesn't matter how much shading power the R580 has, it's bottlenecked by textures/fragment throughput.

An easy way to see this, among many, is to look at the X1800XT, which at just 16 shaders/pipes competed well with the 24-pipe 7800GTX. Why? It had 16 TMUs too. The only thing an R580 has over R520 is 3X the shader power, but the texture throughput remains the same. If the game isn't shader bottlenecked but rather texture bottlenecked, the R580 would theoretically perform no better than an R520.

Another easy way to tell is to look at the X1800GTO (a 12/12 TMU/shader config) and how badly it destroys the ill-fated X1600XT (a 4/12 TMU-bottlenecked config). Both have the same number of shaders and similar clocks, but the GTO has 8 more TMUs and blows away the X1600XT.

This was all part of ATI's bright idea that TMUs/shaders needed to be in a fixed 1:3 ratio as games became more shader heavy (hence 16:48 on R580, 4:12 on X1600XT, etc.). It was a colossal debacle IMO. And basically, while ATI did okay performance-wise, the real key is to remember their dies were about twice as big as an Nvidia die for comparable performance, meaning the design is terribly inefficient, and much more costly (to ATI, at least) for similar performance as competing Nvidia parts of the time.

The R600 at least appears to end all that nonsense, anyway. In fact, early measurements put the R600 die at a bit smaller than G80, with competitive performance if these benches/rumors are at all believable. Granted it's on 80nm, but that doesn't make a huge difference.


RE: Different Strategy?
By Sharky974 on 12/31/2006 2:33:26 AM , Rating: 1
>>Now we have nvidia with a lot of simple shaders in the 8800 and ati with less, more powerful shaders in the R600.>>

Just to clarify, yes I agree with this part. I don't agree, however, with the statement that the G71 (7 series) featured "complex" shaders while the R580 (X19XX series) featured simple ones.

I would agree with a generic statement that ATI has featured more raw shading power in both cases (last gen and R600/G80), even if last time they didn't necessarily take advantage of it.


RE: Different Strategy?
By Spoelie on 12/31/2006 9:47:41 AM , Rating: 4
Your claim that the shaders are equivalent through the generations is completely baseless. An x800 shader is PS2 to start with, while the r5x0 series was redesigned for PS3. Also, the x1600xt already had the 'simple' shader principle, while x1800 gpus had more powerful singular shaders - that's why the x1800gto's 12 shaders were faster. As such, the x1900 is not at all 3x x1800 but 4x x1600. It has nothing to do with TMUs.

You have to think of the R520 as a completely separate generation. The only reason it came out together with the middle/value range was its tremendous delays, so much so that the line after it (x1300/x1600/x1900 with redesigned shaders) was already finished when it came out. Another telltale sign is that the r520 doesn't support all the features of the other members of the x1xx0 family, be it value or high end.

And yes, I did look it up.


RE: Different Strategy?
By Sharky974 on 1/1/2007 6:59:02 AM , Rating: 3
Are you crazy? X1800 and X1600XT had the EXACT SAME shader pipes! And 12 apiece. The ONLY major performance difference was TMU's.

The X1800XT had the EXACT SAME shader pipes as the X1900 series as well, in fact they all do. The same major-mini-ALU setup. ATI didn't want to rewrite their compiler.

The part about X800 pipes isn't even relevant. I don't think shader ALUs have anything to do with PS3.0; that's more about support for dynamic branching and other features elsewhere on the chip (probably requires 32-bit ALUs as well.. no biggie).

"And yes, I did look it up."

Uh where perchance? Link? There will be no link, because you're dead wrong, it's only a matter of how much you want to squirm.

I dont really have time right now to hunt down a bunch of proof, but if you come back in a couple days I'll do it then. In the meantime, I invite you to find me one source that X1600 XT has "simpler" shader than X1900/X1800GTO etc. Quite frankly, I'm 100% certain you cannot.

X1800GTO was simply a quad-disabled X1800 (which was ALSO a 16-pipe card.. yet beat the 24-pipe 7800 GTX... simpler shaders my ass). X1600XT was the same design, exact same shader pipes, but built from the ground up to be smaller for the mid range.

And again, X1900 has 48 of the EXACT SAME shader pipes that X1800 has 16 of.

ATI did all this on purpose.. they wanted to be able to scale shaders easily and fell in love with the 1:3 ratio.. which they considered the future. Unfortunately it sucked.

The R520 and R580 are the SAME DESIGN, just scaled with minor changes. The X1600XT is based off the R520/80 type design, as is every ATI chip since the X800 range.

R580 would have been a kickass card, that blew away Nvidia, if it only had 24 or even 32 TMU's, to relieve that bottleneck. It certainly had an overdose of shader power..


RE: Different Strategy?
By gibletsqueezer on 1/2/07, Rating: 0
RE: Different Strategy?
By Spartan Niner on 1/2/07, Rating: 0
RE: Different Strategy?
By gibletsqueezer on 1/2/07, Rating: 0
RE: Different Strategy?
By MAIA on 1/8/2007 8:25:06 AM , Rating: 2
As can be seen from the diagram above, the primary difference between R520 and R580 lies in the number of pixel shader units available to each of them, with R520 having 16 pixel shader cores, while R580 triples that number. As is the case with all of the R5xx series, each pixel shader core consists of:

* ALU 1
o 1 Vec3 ADD + Input Modifier
o 1 Scalar ADD + Input Modifier
* ALU 2
o 1 Vec3 ADD/MUL/MADD
o 1 Scalar ADD/MUL/MADD
* Branch Execution Unit
o 1 Flow Control Instruction

The net result is that R580 contains 48 Vector MADD ALUs and 48 Vector ADD ALUs operating over 48 pixels (fragments) in parallel. Along with the pixel shader cores, the pixel shader register array on R580 has also tripled in size in relation to R520, so that R580 is still capable of having the same number of batches in flight.

The R580 diagram above isn't entirely accurate, as it indicates that there are 12 different dispatch processors for R580, where R520 has 4 - this is, in fact, not the case. As with RV530, R580's "quads" (or lowest element of hardware scalability in the pixel pipeline) have increased such that they can handle three quads in the pixel shader core, but they do so by operating on the same thread. The net result is that R580 still contains 4 different pixel processing cores, ergo only 4 dispatch processors, with each core handling up to 128 batches/"threads". As a result, R580 is still handling a total of 512 batches/"threads" as R520 does. Each of the batches in R580 consist of a maximum of three times as many pixels (48), so that they can be mapped across the 12 pixel shader processors that exist in each of the 4 cores, with the net result being that R580 can have 24,576 pixels in flight at once. Note that because this is still based around 4 processing cores, the lowest level of shader granularity is likely to be 12 pipelines, so if ATI releases parts that's had failures within the pixel pipeline element of the die the next configuration down would likely be 12 textures, 36 shader pipelines, and 12 ROPs.

ATI have also increased the Hierarchical Z buffer size on R580, which can now store up to 50% more pixel information than R520 can, allowing for better performance at even higher resolutions. However, other than that most of the other elements stay the same, shader wise, with R580 still having 8 vertex shaders, single Z/Stencil rates (unlike RV530) and still continuing with 16 texture units serving all 48 shader processors.

http://www.beyond3d.com/reviews/ati/r580/index.php...


RE: Different Strategy?
By MAIA on 1/8/2007 8:27:34 AM , Rating: 2
Fetch4 is actually not just implemented within R580 but RV530 and RV515 as well, although curiously not R520. Because of the relatively low shader capabilities of R520, in relation to R580, it's more likely to be shader bound on operations such as these anyway, so the increase in the sample time is less likely to be an issue. With R580 though, as it has such a high math capability in relation to its number of texture samplers it's more important that its texture utilisation is optimised, so wasting 3 cycles on single precision texture formats is going to bottleneck it more.

**************

Can be read on the next page


The gauntlet has been thrown on you Kubicki
By Sharky974 on 12/31/2006 2:44:34 AM , Rating: 2
A member of B3D site staff has said that he is absolutely sure your/level 505 specs are wrong. Thoughts?




RE: The gauntlet has been thrown on you Kubicki
By Sharky974 on 12/31/2006 2:45:03 AM , Rating: 2
Err, meaning R600 specs of course.


By Sharky974 on 12/31/2006 2:49:17 AM , Rating: 2
And he's saying that not in a "these cant be right" way but in a "I have the real specs" way.


By KristopherKubicki (blog) on 12/31/2006 2:57:21 AM , Rating: 2
The specs we have are from three weeks ago. We stopped publishing GPU roadmaps because the manufacturers change the numbers so often. Some of our information doesn't line up with theirs -- we were told the GDDR3 version was not even going to ship.

That being said, Dave would know more about the card than I do given the fact he works for them. Needless to say, I throw the gauntlet back at them ;) I'll be publishing my take on a card as soon as I get my hands on one -- hopefully they will too.


RE: The gauntlet has been thrown on you Kubicki
By Sharky974 on 12/31/2006 2:59:24 AM , Rating: 2
Not Dave. I dont think he is even site staff anymore since stepping down. Whoever it is simply has a forum handle of The Baron, and a site staff tag.


RE: The gauntlet has been thrown on you Kubicki
By Sharky974 on 12/31/2006 3:04:10 AM , Rating: 2
"I'll be publishing my take on a card as soon as I get my hands on one --"

Does that mean you dont sign NDA's?


RE: The gauntlet has been thrown on you Kubicki
By Sharky974 on 12/31/2006 5:48:56 AM , Rating: 2
Other B3D players/mods are now doing the "well, some of the specs could be right by accident" bit.

They're just mad because it shows ATI faster, and they hate that..so they attack the source, they do this shit every time..


By MartinT on 12/31/2006 5:57:21 AM , Rating: 2
quote:
They're just mad because it shows ATI faster, and they hate that..so they attack the source, they do this shit every time..

I think that's the first time in quite a while I've seen B3D accused of being anti-AMD. Usually, those claims fly the other way 'round.


By zsouthboy on 1/5/2007 12:05:09 PM , Rating: 2
I happen to spend a LOT of time over at B3D, and can tell you that they are some of the most IMPARTIAL people on the internet. Seriously.

The majority of us are excited about stuff getting faster, not flaming one company's products we don't align with.

Please do not resort to name calling if you disagree with something that is said.


By KristopherKubicki (blog) on 12/31/2006 12:26:35 PM , Rating: 3
quote:
Does that mean you dont sign NDA's?

Never have and never will at DailyTech aside from the occasional overnight stuff.


RE: The gauntlet has been thrown on you Kubicki
By elegault on 1/1/2007 9:25:14 PM , Rating: 2
I guess that's why other review sites are labelling "you" more notorious than the Inq.


By nurbsenvi on 1/3/2007 3:35:28 AM , Rating: 2
I think the inq is on the line of insanity...


THE GUY IS LYING...
By tognoni on 12/31/06, Rating: 0
RE: THE GUY IS LYING...
By Slappi on 12/31/06, Rating: 0
By KristopherKubicki (blog) on 12/31/2006 12:29:24 PM , Rating: 2
quote:
Yah I was just about to buy a new 8800gtx now I am going to wait.... if this guy turns out to be lying I will spend at least the next few months bashing his site.

Well, he openly states AMD employees are on his staff. I don't really know how much staff he could have, but that statement should affect how you feel about these benchmarks.


RE: THE GUY IS LYING...
By masher2 (blog) on 12/31/2006 12:40:18 PM , Rating: 1
> "This is the email I just sent him, I am visibly irritated..."

If you're getting emotional over video card reports, you might want to relax and get out a bit more :) So what if you buy the 8800? A faster card will come out and beat it...if not in one month, then in 4-5 for sure.

> "No single graphics company has ever introduced 2 graphics generations in one year..."

Actually, this used to be quite common, back in the early days of NVidia. As technology matured, the cycles slowed down. Now, I'm not saying ATI is likely to do so this year. In fact, I find it rather unlikely. But the fact remains, it's been done before.


RE: THE GUY IS LYING...
By arturnowp on 12/31/2006 2:49:58 PM , Rating: 2
A new card and a new architecture are different things. It's possible that there will be a G81 or G85 by the end of 2007, but not a whole new GPU like a G90, not to mention R700 or something. That happens every 2 years or so.


By KristopherKubicki (blog) on 12/31/2006 2:58:07 PM , Rating: 3
G80 was Nov 8, 2006
G70 was June 22, 2005
NV40 was April 15, 2004
NV30 was Jan 27, 2003


RE: THE GUY IS LYING...
By masher2 (blog) on 12/31/2006 6:39:37 PM , Rating: 2
> "New card and new architecture is diferent thing"

I know that. However, in the early days of GPUs, new architectures came much faster. The Riva TNT (NV4) came out in late 1998. The TNT2 (NV5) was early 1999. The GeForce 256 (NV10) was late 1999.

The OP's comment that "no one has ever" released two new architectures in less than a year is just plain wrong.


RE: THE GUY IS LYING...
By coldpower27 on 1/1/2007 2:26:24 PM , Rating: 2
The TNT and TNT2 would be considered 1 full generation.

The Geforce 256, with the Geforce 2 GTS and Ultra as its refreshes, would be considered the new architecture in relation to that.

I would lump the Geforce 3 and Geforce 4 Ti technology together, then the Geforce FX on its own, then the Geforce 6, and then the Geforce 7, though the Geforce 7 series didn't add that much functionality-wise compared to the Geforce 6; it was mainly performance improvements and cost benefits.


RE: THE GUY IS LYING...
By StevoLincolnite on 1/6/2007 10:50:32 AM , Rating: 1
The TNT2 core is almost identical to its predecessor the RIVA TNT, however updates included AGP 4X support, up to 32MB of VRAM, and a process shrink from 0.35 µm to 0.25 µm.

So yes I would consider it as a full Generation.

The same would be for the Geforce 256 and Geforce 2.
The Geforce 256 Was severely Memory bandwidth constrained.

And the Same for the Geforce 3 and 4. (Except for the MX cards, they were based on the Geforce 2 core).


The Geforce 2 Ultra could even outperform the early Geforce 3 models that were released, that is until the Ti500 came about.

Oh and the Geforce 2 had Pixel shaders :) But they were fixed function, thus rarely used outside of tech demo's.

The Geforce 3 however Added Pixel shaders and vertex shaders. (1.1 I believe?) And LMA, And the last change was the Anti-Aliasing which went from supersampling to multi-sampling.

Nvidia never released a low-cost version of the Geforce 3, Geforce 2's were still selling like hotcakes.

The Geforce 4 mainly just brought Higher clock speeds, Improved Vertex and Pixel shaders, and some more re-modeling of the Anti-Aliasing.

The Geforce 5/FX was the first time Nvidia combined the 3dfx team and their own to design the chip.
The only way Nvidia was able to remain competitive with the Radeon series was to optimize the drivers to the extreme. They used these "optimizations" to reduce tri-linear quality, apply shader "hacks", and make the card not render parts of a world. The drivers looked for the software and would apply the aggressive optimizations, and people noticed that the FX picture quality was considerably inferior to the Radeon's.

The FX would have been a stand-alone series, And I think everyone wishes for it to be forgotten as a mistake...

The Geforce 6 and 7 are also similar, Mainly just adjustments to the amount of pixel pipelines, and clock speeds.

The Geforce 8 however... Well cant say much the entire line-up hasn't been released, And I am sure everyone has been watching and reading it like a hawk :)

All in all, I still like Nvidia's drivers better; they have support right from the TNT to the Geforce 8. That's 10 generations of 3D accelerator support right there!
And the drivers don't feel so.... clunky. Mind you, they have come leaps and bounds from the Radeon 8500 days of hell.

All in all, Nvidia have used parts of all their graphics card line-ups. The Geforce 6 was based on the Geforce FX, but changed a lot of stuff internally. When it was first released, it still used the same manufacturing process as the FX. Nvidia had a lot of work to turn the FX into what it was meant to be: the Geforce 6.


RE: THE GUY IS LYING...
By psychobriggsy on 1/1/2007 7:01:10 PM , Rating: 5
Nice subject, a definite statement of truth in capitals ...

When really it is a rant regarding your opinion. The only valid point is the fact that the site is brand new and thus untrustworthy, but you always take things like this with a pinch of salt, and specifications change.

ATI's drivers are high quality now - it isn't 2002 anymore - so I rate your post down for perpetuating the ATI driver quality myth. And yes, ATI's drivers should be less optimised than nVidia's G80 drivers, as G80 has been on the market for a few months already - a good thing for nVidia.

Holding off your purchase will save you money or get you better quality, or you could just put down the money and have a G80 tomorrow and ignore the inevitable improved products coming in the near future.


Great
By Tsuwamono on 12/30/2006 8:00:19 PM , Rating: 1
Now we know Nvidia fanboys are going to say AMD staged this. Either way, AMD has never (to my knowledge) purposely led us astray, and neither has ATI. IMO the R600 is going to rock the 8800 no matter what.




RE: Great
By kilkennycat on 12/30/2006 8:11:50 PM , Rating: 4
You have a very short (ATI-specific) memory. Remember the X1800-series debacle? Delayed for months after announcement. Obsolete 2 months after real shelf availability; replaced by the X1900-series. Of course, no mention of their imminent replacement by the X1900... I wonder how many were burned by their X1800-series purchases?


RE: Great
By Dactyl on 12/30/2006 8:29:02 PM , Rating: 2
One reason to believe this is true is the GDDR3/GDDR4 detail. ATi and Nvidia always try to release a high end card first followed by slower cards, so enthusiasts can run out and be the first in line to buy the new card (rather than needing to trade up every 3 months to stay on top). Normally these companies hate to admit a better version is just around the corner (of course, it would be hard to hide GDDR4 versions of the card when ATi said all along it would be releasing that)


RE: Great
By tuteja1986 on 12/30/06, Rating: -1
RE: Great
By atwood7fan on 12/31/2006 12:00:23 AM , Rating: 2
Am I the only one that noticed how the x1950xtx cf did surprisingly well vs. the 8800gtx? Seems a little fishy to me...


RE: Great
By Goty on 12/31/2006 12:07:35 AM , Rating: 1
After reading the whole article, I'm a little inclined to say that it's not as biased as some people are trying to make it out to be. There are a few times throughout the article where the author cites that the G80 card was producing a better image than the R600 card.

I'd still take the numbers with a grain of salt, but maybe a bit less than might be customary.


RE: Great
By oddity21 on 12/31/2006 12:24:43 AM , Rating: 2
Numbers look very promising, although 8800GTX numbers seem to have been 'watered down' a little. My rig (E6600, 8800GTX) gets over 80fps in the performance test with exactly the same settings, without fail.


RE: Great
By leexgx on 1/1/2007 11:45:02 AM , Rating: 2
one 8800 GTX is cheaper than 2 x1950s in CF plus the mobo (the 8800 GTX on its own does not need an SLI mobo, just a PCI-E 16-lane one, so basically any Intel/nvidia mobo will be fine)

from some of the tests the x1950 CF was matching it

the R600 will be interesting


RE: Great
By yxalitis on 1/2/2007 12:53:51 AM , Rating: 2
OK, as a proud owner of an ATI X1800 XT, I have been nothing but happy with my purchase. So what if the X1900 followed it shortly? It was only 10% faster... the X1950, another 10%; these increases do not constitute the phrase "made redundant". Now, the 8800 GTX did make the 7900 redundant, as it is approximately twice as fast. Really, every new release promises a massive increase over the previous generation, but 9 times out of 10 it is only marginally faster. The greatest single jump I can think of was the Voodoo 2, which was a revolution at the time. The new DX10 cards seem to offer a similar performance boost over the previous generation, but the price difference reduces the significance of that impact.

Oh, and ATI drivers have been as good if not better than nVidia's for the best part of 2 years; seriously, you need to get a fresh perspective, the "ATI drivers suck" adage belongs to the days of the ATI Rage. (Excepting OpenGL, which only became comparable to nVidia's recently, but since the number of OpenGL games lags far behind DirectX, I don't count it as significant... Doom 3/Quake 4 fans may disagree.)
Oh, and I own both ATI and nVidia in my PCs, so I am no "ATI fan boy". If the R600 proves not to meet muster, I'll be a happy 8800 GTS purchaser. But right now, I see no reason to upgrade my "redundant" X1800 XT!


Which one is it, I wonder?
By masher2 (blog) on 12/30/2006 9:53:35 PM , Rating: 2
> "The implementation on the ATI R600 allows any amount of ATI R600 GPUs to operate together in multiples of two..."

Multiples of two...or powers of two?




RE: Which one is it, I wonder?
By KristopherKubicki (blog) on 12/30/2006 10:07:24 PM , Rating: 2
It's powers. I fixed the article.


RE: Which one is it, I wonder?
By OddTSi on 12/30/2006 10:38:43 PM , Rating: 2
It should be 2^x, not x^2.


RE: Which one is it, I wonder?
By TechLuster on 12/30/2006 10:41:27 PM , Rating: 2
It should also say 2^n instead of x^2, and FP32 instead of 32FP.


Gforce 8900GTX
By nurbsenvi on 12/31/2006 10:26:33 AM , Rating: 2
The moment I saw the 8800 numbering I knew
nvidia was cooking an 8900 to compete with the R600




RE: Gforce 8900GTX
By EastCoast on 12/31/06, Rating: 0
RE: Gforce 8900GTX
By nurbsenvi on 12/31/2006 2:43:33 PM , Rating: 2
ALL I care about is the price drop of DX10 graphics cards.

Even with the R600's mediocre improvement over the 8800 series, just because it's putting up good competition the price drop will accelerate.

And that's all that matters.

And these cards are DX10 cards!
That means I will be able to play Crysis and other DX10 games.

Forget about the 1950 in CF being +8 frames faster;
it will not do DX10 special effects no matter what.

My next upgrade is due very soon and I set the goal at
UT2007 at 30fps at 1920x1200

And I won't be able to achieve that within my budget if the R600 doesn't match the G80 GPUs, cuz that will keep the 8800 price high for a longer period of time.

That's why 5fps matters.


RE: Gforce 8900GTX
By leexgx on 1/1/2007 11:59:31 AM , Rating: 2
the X1950 should use 2x PCI-E power plugs as it is using more power than the specs allow; that's why you need a power supply that does not follow the ATX 2.0 specs, or it auto shuts down

nvidia have made the 8800 GTX with 2 plugs so that you do not need a CrossFire PSU (the PSU would auto shut down with one PCI-E power connector {max 20 amps per cable})


RE: Gforce 8900GTX
By Targon on 1/2/2007 7:54:44 AM , Rating: 2
Perhaps you missed the point about DirectX 10 support. We also don't know about how performance will be with a release version, not to mention drivers that don't have a ton of debug code in them(which slows down performance).

As for power, have you looked at how much power the 8800 draws?


RE: Gforce 8900GTX
By Goty on 12/31/06, Rating: 0
Finally some good news for AMD/ATI?
By psychobriggsy on 12/30/2006 7:56:23 PM , Rating: 2
This is good stuff if valid. Nice to see ATI get a decent design out of the door for their next architecture - hopefully it will be nicely amenable to cut-down variants with 32/16/8 4-way shader processors for those of us who have mortgages in expensive areas!




RE: Finally some good news for AMD/ATI?
By JumpingJack on 12/30/2006 11:44:08 PM , Rating: 2
The irony is that if R600 really beats out G80, then you will need a C2D to release the full potential of the card as G80 throws the bottleneck to the CPU on AMD based systems.


By Targon on 12/31/2006 8:06:44 AM , Rating: 2
Now that the AMD/ATI merger is finalized, it may be that a lot of work is going into the chipset for the K8L launch(expected mid 2007). Due to the architecture changes in K8L compared to the current K8, if we are looking at a March release date(GDDR 4 version), that isn't too far from that new CPU launch.

Now, AMD MAY be trying to time the release of the K8L based processors, a chipset for them, and the R600 for a "total platform" release type of thing, without trying to force people to buy all three(the way Intel does).

This is speculation, but considering the topic of the thread, it fits.


8 to 42%
By ADDAvenger on 12/30/06, Rating: 0
RE: 8 to 42%
By retrospooty on 12/30/2006 8:09:20 PM , Rating: 2
8 to 42% is a pretty huge amount better, assuming it holds up to be true. In the end you can't really trust anything until it's released with independent benchmarks.

Hopefully Nvidia will do something with their drivers to improve performance as well. They have done it many times in the past.


RE: 8 to 42%
By Dustin25 on 12/31/2006 3:52:52 AM , Rating: 2
At this point any increase in performance of the 8800 would go largely unnoticed by everyone but the benchmarkers among us. You could largely decrease the 8800's performance and it would still laugh at anything you threw at it. This may not be true when dx10 games come to market, but as of now graphic hardware seems to have far outpaced the games. Unless dx10 stresses these new cards (g80/r600), any performance increases seem almost pointless until game technology catches up.


RE: 8 to 42%
By peternelson on 1/1/2007 8:09:12 PM , Rating: 2
Hopefully ATI can do something with their lame Linux drivers too, as they make even a more powerful gpu run like treacle.

Without that improvement I will definitely be buying the Nvidia 8xxx to run in a Vista/linux dualboot system.


so annoying...
By ali 09 on 12/31/2006 1:00:12 AM , Rating: 2
It's so annoying not being able to tell which will be better, what future models will be like, etc. Let's face it: people thought the g80 was future-proof. Now these benchmarks suggest otherwise. That's the technology market. We just have to look as far as we can into the future (which isn't far). My computer, which is 6 years old (yes it is very old), still works perfectly well, not one break and only 1 add-in (9600xt 256mb). The market is just going to get faster and faster until you can never have the top product. If you wait (like me) you end up waiting for ages. It sucks.




RE: so annoying...
By ahkey on 12/31/2006 2:24:54 AM , Rating: 2
That's why most/sensible new buyers and upgraders buy the latest product -2; you don't waste as much money, don't get as many problems, and still have enough power to run the latest whatnots.


RE: so annoying...
By ali 09 on 1/1/2007 9:39:11 PM , Rating: 2
You get more problems when you get the latest product, e.g. the g80. It's the first edition; why else is there a rev 2? I agree with what you're saying, but sometimes it is best to wait until a revision comes out, like the g80 on 80nm, so less power + cooler. Then by the time a revision comes out, there is another 1st gen.


Too many things are happening too fast!
By nurbsenvi on 12/31/2006 10:05:27 AM , Rating: 4
There are just too many big changes happening too fast!

Single core to multi-core (this really only took a few months)
1080p arrives to mainstream
PS2 to PS3
DVD to Blu-ray
32bit to 64bit
WinXP to Vista
DirectX 9 to 10

I mean, VHS lasted like 15 years? DVD feels like it only lasted 5 years!
According to WIKI, DVD only took over VHS in 2003 and it's already headed downhill!

Ahhhhaagggggh!! Can't take it anymore!!!!!!!!!




By Spoelie on 12/31/2006 10:13:57 AM , Rating: 2
Don't think DVD is dead just yet, it'll prolly take at least another 5 years before bluray/hd-dvd sales will surpass DVD.


Oblivion benchmarks tell it all...
By Nightmare225 on 12/31/2006 1:15:19 PM , Rating: 2
Oblivion benchmarks are BS. Anandtech could pull off more frames at higher resolutions and I can too on my 8800GTX.

Take this preview with a grain of salt.




RE: Oblivion benchmarks tell it all...
By Ard on 12/31/2006 3:02:35 PM , Rating: 2
Not to mention B3D has pretty much completely shot this joke of a "review" down. Specs are wrong and it looks like they completely lifted the G80 benches from another site (which doesn't even include the fact that the 8800 benches are definitely lower than those found on other sites).


By Targon on 1/2/2007 7:29:48 AM , Rating: 2
Who cares if the numbers are lower or higher than what you see on other sites? Honestly, a proper benchmark keeps the settings as close to identical as possible. If you see similar benchmarks between sites, that implies that every setting from every site is identical, with very similar hardware.

Now, there are a lot of things about the information that I find questionable, but when people feel that the benchmark results for a Geforce 8800 are lower than what you find at other sites, you NEED to compare the hardware used in the comparison, as well as the settings. If the 8800 and R600 tests were done on the same system, with the same Oblivion settings, then it would be a fair comparison.


AEG called, they want their marketers back.
By Chocolate Pi on 12/30/2006 7:56:40 PM , Rating: 2
Alright, so this isn't nearly THAT underhanded... It is just AMD/ATI releasing preliminary numbers in the guise of an enthusiast news site. A professional yet inside look, combined with glowing optimism towards future drivers and Vista makes it obvious.

At least the card DOES sound good...




RE: AEG called, they want their marketers back.
By mjrpes3 on 12/30/2006 8:36:15 PM , Rating: 2
Not to mention that if this site is indeed backed by AMD/ATi, at least there is some attempt to be truthful about all that is good and bad about the card, as on page 8 where it says,

"Regarding the picture quality, the graphics appeared substantially clearer on the GeForce 8800GTX card; however, when AA and AF were turned off, the picture quality was similar to R600’s picture quality."


By Goty on 12/31/2006 5:28:47 PM , Rating: 1
You're taking that comment out of context. This was only said when testing BF2.


FPS are all nice and dandy, but ...
By MartinT on 12/31/2006 4:29:18 AM , Rating: 2
Even if the numbers were right, the article lacks the meat I'm looking for. What about IQ-enhancements and the sample's power usage?




RE: FPS are all nice and dandy, but ...
By Sharky974 on 1/1/2007 7:00:15 AM , Rating: 1
I think Apple (or some other politically correct company) has a card for you. Probably in pink. So go find that, and get off this board for real GFX card men.



By MartinT on 1/1/2007 12:06:51 PM , Rating: 2
Huh? What does the interest in improvements beyond raw performance numbers have to do with Apple, political correctness or the color pink?

And I'm a real GFX man, thank you very much, thanks largely to my 8800 GTX penile enlongement device.


Great...if it is true
By carage on 12/30/2006 8:39:35 PM , Rating: 2
8~42%? Well, it is definitely a great card if it is true. However, we will have to wait until the cards actually appear. Screenshots cannot be published because of special markers? Hmm...that sounds very very suspicious.
Besides, it is not the first time ATI made paper launches.
The revision released two months after the initial release sounds especially suspicious.
Perhaps it will have to face 8850 or 8900 by the time it actually hits the market in sufficient volume.




RE: Great...if it is true
By Targon on 12/31/2006 8:12:48 AM , Rating: 2
If the difference is only going from GDDR 3 to GDDR 4 in terms of CARDS, and the GPU itself can handle both types of memory, that would explain why they can release a new batch of cards with improved performance so quickly.

As for paper launches, there have been some changes in the industry. I don't think AMD would let the graphics card division paper-launch anything so soon after the merger was completed.

Delays in release that are caused by bugs in the microcode are a good reason, and microcode bugs would be hard to track down. How many transistors are we looking at these days?


LOL @ Lvl. 505 Bandwidth bill for this month
By Warren21 on 12/30/2006 10:40:29 PM , Rating: 2
Haha, within minutes of this being posted I'm trying to look and I get a 503 error. Too many people trying to access it!

I did manage to check the Doom 3 bench and it looks very promising, albeit an unofficial bench. X2800 XT (GDDR3) anyone? X2800 XTX w/ GDDR4 on the way, too!




By EglsFly on 12/31/2006 11:18:42 AM , Rating: 2
Getting a 503 error too.
"The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later."

If you wait and retry, sometimes it will load the next page though...


Lots of assumptions
By Domicinator on 12/31/2006 12:45:04 AM , Rating: 1
Nvidia is not due to refresh the 8800 cards in January. They are, however, expected to release their mid-range series 8 cards. I believe there were supposed to be 3 models coming out. I doubt we will see an 8900 model until summer or maybe right before summer. But the mid-line is what will combat the new R600, not the 8800s.

Secondly, maybe I'm nuts, but from the benchmarks (if they're even real), I don't see an advantage to buying the R600. Assuming all these specs are legit, it's going to eat up WAY more power than a series 8, and it's not blowing the 8800GTX away on most of those charts. I think Doom 3 was the biggest difference in performance. And the fact that two 1950s in Crossfire seem to beat one R600 in a lot of cases is not going to help ATI's case either.

I'm not a fanboy when it comes to hardware. I'll buy the best I can afford regardless of brand. I happen to have purchased an 8800GTS on launch day because I was so desperate to get a new card, but I really don't care if the R600 beats the G80 by one frame per second on CoD2; that's not worth a new power supply and a case hack to me.




RE: Lots of assumptions
By Domicinator on 12/31/2006 12:53:00 AM , Rating: 1
Sorry, I just have to add this. I have read this article over and over and over. This has got to be a hoax. If ATI/AMD was going to release an article under the guise of a tech review, don't you think they'd want to make sure it doesn't read as if an 11 year old wrote it?

The comment about the G90 and R800 coming out in late 07 is a dead giveaway. Neither company releases completely new chip architectures twice in one year. EVER. These chips take a long time to develop. Don't any of you guys find this a little fishy?


By KristopherKubicki (blog) on 12/31/2006 1:04:16 AM , Rating: 2
Just to add: some of the release dates don't line up with the roadmaps we've seen, but the specs do.


WEIRD
By zachO on 1/1/2007 8:44:52 AM , Rating: 2
bleah...reread the article and see that they tested in vista.

weird.

straight from nVidia:

"We recognize that many GeForce 8800 customers are eager to use their new hardware with Windows Vista. We've received numerous questions from users, and we've seen your comments and complaints on the forums.

Due to rigorous testing requirements to ensure stability and optimal performance, Vista drivers for GeForce 8800 are still in QA. Our driver team and test labs must ensure compatibility across multiple platforms and hundreds of applications, a process that takes several weeks to complete. Please keep in mind that Windows Vista will not be available to end-users until the end of January. We'd like to assure you that Vista drivers for the GeForce 8800 will be available to download when Vista ships to end users at the end of January."




RE: WEIRD
By nevdawg on 1/1/2007 11:12:40 AM , Rating: 2
Actually, on Page 2 of the article they say, "The system described above is running a clean install of Windows XP Professional 32-bit for every graphics card test. We tested each card, then exchanged and cleaned up the system to make sure the results are unbiased from previous installations."


Lotta talk really
By FXi on 1/1/07, Rating: 0
RE: Lotta talk really
By TSS on 1/7/2007 8:33:02 PM , Rating: 2
Just out of boredom I started comparing the numbers on this site now that it's been "updated", or at least part 2 is there, and I found something amiss.

Just out of curiosity I looked up the anandtech.com review of the 8800 GTX to check how the numbers compare. Note that the Level 505 testing rig is better (higher clocked, RAID disks) than the rig used for the 8800 GTX on Anand. Both sets of numbers come from 1600x1200 with 4xAA/16xAF, and the same goes for the ATI CrossFire setup; since 505 didn't use a single 1950 XT, I'll leave that out. Oblivion was also run on both sides, but at different settings.

FEAR test:
AnandTech 8800 GTX: 84
Level 505 8800 GTX: 75.9
AnandTech X1950 XT CF: 94
Level 505 X1950 XT CF: 92.6

Battlefield 2:
AnandTech 8800 GTX: 142.7
Level 505 8800 GTX: 148.9
AnandTech X1950 XT CF: 134.1
Level 505 X1950 XT CF: 114.1

sources:
http://level505.com/2006/12/30/the-full-ati-r600-t...
http://anandtech.com/video/showdoc.aspx?i=2870&p=2...

Now, knowing that Level 505 has the better rig and more up-to-date drivers, there's no way there should be such a gap between reviews in FEAR. Nor is there any explanation why, if the card performs much worse than the AnandTech sample in FEAR, it blows away Anand's 8800 GTX in BF2. That just makes no sense. And to me, the problems they had getting BF2 to run properly on a 1950 XT CF (20 fps less than Anand, c'mon!), combined with the fact that they say themselves they "rushed it", just take away all their credibility.
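To put rough numbers on the gap TSS describes, here is a minimal Python sketch; the FPS figures are the ones quoted in the comment above, and the percentage math is purely illustrative, not part of either review:

# Percentage difference between Level 505's and AnandTech's reported FPS
# (1600x1200, 4xAA/16xAF), using the numbers quoted in the comment above.
results = {
    "FEAR, 8800 GTX":    (84.0, 75.9),    # (AnandTech, Level 505)
    "FEAR, X1950 XT CF": (94.0, 92.6),
    "BF2, 8800 GTX":     (142.7, 148.9),
    "BF2, X1950 XT CF":  (134.1, 114.1),
}
for test, (anand_fps, level505_fps) in results.items():
    delta = (level505_fps - anand_fps) / anand_fps * 100
    print(f"{test}: {delta:+.1f}% vs. AnandTech")
# Output: roughly -10% and -1% in FEAR, but +4% and -15% in BF2,
# which is the inconsistency the comment is pointing at.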


RE: Lotta talk really
By Pythias on 1/9/2007 1:25:38 PM , Rating: 2
quote:
Just out of boredom I started comparing the numbers on this site now that it's been "updated", or at least part 2 is there, and I found something amiss.


Yeah, HardOCP reviewed an 8800 GTX on 1-1-2007 and their Oblivion numbers were way higher than what is recorded on this site (Level505).


It's dead, Jim
By TheMold on 1/2/2007 10:43:54 AM , Rating: 2
And such a short life span.




RE: It's dead, Jim
By TheMold on 1/2/2007 11:29:22 AM , Rating: 2
Ok, it lives again.


This looks like a hoax
By sviola on 1/1/2007 7:40:52 PM , Rating: 1
Sorry to tell you all this, but this seems to be a hoax.

First, two issues:

"Hardware DVI-HDCP support (High Definition Copy Protocol)
230W TDP PCI-SIG compliant"


Kristopher, HDCP stands for High-bandwidth Digital Content Protection.
You should be more careful when copying and pasting stuff from sites.
And the PCI-SIG power limits are 225W and 300W, not 230W.

As for the rest of the article, most of the benchmarks presented on that website for the 8800 GTX are way lower than on most respectable sites, which gives you something to think about (most sites show higher numbers at higher resolutions and the same graphics quality).
Also, the site has been updated (the home and about links were taken off the air and the layout has been changed) since people around the web started pointing out issues with the article.




RE: This looks like a hoax
By sanctus on 1/5/2007 1:48:59 PM , Rating: 2
Interesting debate. Me? I'll just wait until I have the card. I'm funny that way. Have fun.


By Assimilator87 on 12/30/2006 8:00:20 PM , Rating: 2
If nVidia can make an 80nm revision of G80 in time for R600's launch, they may actually be able to compete pretty well. If not, hopefully they'll undercut AMD's pricing and everyone can have cheaper DX10 compatibility.




Exciting stuff...
By Furen on 12/30/2006 8:58:30 PM , Rating: 2
Now just ship a part with 16 SIMD shader units (256MB of 900MHz memory @ 128-bits) for $150ish and I'll buy it.

I'm glad ATI will finally rival Nvidia's OpenGL performance and (hopefully) will do much better on the CrossFire side of things; current CrossFire scaling is a joke. A shame about BF2's AF/AA quality problems, but hopefully this will be resolved before release or soon after. The quad-DVI thing is also very nice for its intended market. I'd love to see just how good this thing is as a workstation GPU; maybe ATI will finally put a chink in Nvidia's Quadro armor...

I'm one of those who never doubted for one second that ATI's part would beat Nvidia's (these two companies are consistently trading places at the top); I'm just wondering if a simple memory clock increase (and a small core clock hike, no doubt) will be enough to blunt Nvidia's G80 refresh. Of course, the fact that THERE WILL BE SOMETHING to launch at the same time as Nvidia refreshes its G80 is good; maybe ATI will stop falling behind from now on...




Is it true ??
By Xajel on 12/31/2006 2:01:11 AM , Rating: 2
According to Level505, the processing power is 105 GSO/s (giga shader operations per second),

and the R600 can do 128 shader operations per cycle,

so let's calculate its clock speed, which is 105,000,000,000 / 128 = 820,312,500 Hz = ~820 MHz!!
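For anyone who wants to sanity-check that arithmetic (and the bandwidth figures from the article), here is a small illustrative Python sketch; it only restates the quoted specs and assumes the usual double-data-rate transfer for GDDR3/GDDR4, and nothing here is confirmed by AMD:

# Implied core clock: 105 GSO/s divided by 128 shader operations per cycle.
shader_ops_per_second = 105e9
shader_ops_per_cycle = 128
core_clock_hz = shader_ops_per_second / shader_ops_per_cycle
print(f"Implied core clock: {core_clock_hz / 1e6:.0f} MHz")  # ~820 MHz

# Memory bandwidth check against the article's 115 GB/s (GDDR3) and
# 140 GB/s (GDDR4) figures: 512-bit bus * effective data rate
# (assuming 2 transfers per memory clock).
bus_width_bytes = 512 // 8
for memory, clock_mhz in [("GDDR3", 900), ("GDDR4", 1100)]:
    bandwidth_gbs = bus_width_bytes * clock_mhz * 2 * 1e6 / 1e9
    print(f"{memory}: {bandwidth_gbs:.1f} GB/s")  # 115.2 and 140.8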




Competition
By alpha736 on 1/1/2007 1:00:09 PM , Rating: 2
I'm just happy to see ATI releasing something... This should drive Nvidia's prices down a bit.




overclocking
By jackalsmith on 1/2/2007 10:18:19 AM , Rating: 2
If you had an R600 in your hands, wouldn't you be a little curious about how it overclocks?




GeForce 8?
By derdon on 1/2/2007 4:50:59 PM , Rating: 2
Wow, a lot of time has passed. I can still remember how people with a GF2 anticipated the awesome powers of a coming GeForce 6, which at that time no one was able to imagine. Now there is already a GeForce 8...

I still have my GeForce 4 Ti 4200 64MB, which replaced a GeForce 256 32MB, which replaced an ATI Xpert 2000 16MB :-)
Oh yes, and I do have a PowerColor Voodoo2 with 12 MB RAM! Cool stuff; who'd believe that I played Half-Life (the original) with it on a Pentium 200 with 64MB RAM.




let me clarify things a little
By slickr on 1/2/07, Rating: 0
By Warren21 on 1/3/2007 12:23:50 AM , Rating: 2
Thank you Dave Orton for shining your beam of light on us all.

/end sarcasm.


24"+ and 30" LCDs
By VooDooAddict on 1/9/2007 3:21:42 PM , Rating: 1
Anyone else starting to eye the 24"+ 1920x1200 and 30" 2560x1600 monitors now that you can run them with a single graphics card?

I'm particularly eyeing the 2560x1600s because they should also scale a 1280x800 gaming res perfectly, to get max frames per second for some competitive online gaming. ... if only the budget allowed for it.
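The "perfect scaling" point is just integer arithmetic: 2560x1600 is exactly twice 1280x800 in each dimension, so each rendered pixel maps onto a clean 2x2 block. A trivial, purely illustrative check:

# 2560x1600 divides evenly by 1280x800 (2x in both dimensions),
# so upscaling introduces no fractional-pixel blurring.
native = (2560, 1600)
game = (1280, 800)
print(native[0] / game[0], native[1] / game[1])  # 2.0 2.0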




"If they're going to pirate somebody, we want it to be us rather than somebody else." -- Microsoft Business Group President Jeff Raikes











botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki