
DirectX 10 compliant GeForce 8800GTX and 8800GTS headed your way

DailyTech's hands-on with the GeForce 8800 series continues with more information about the GPU and the retail boards. The new NVIDIA graphics architecture will be fully compatible with Microsoft’s upcoming DirectX 10 API with support for shader model 4.0, and represents the company's 8th generation GPU in the GeForce family.

NVIDIA has code-named G80 based products as the GeForce 8800 series. While the 7900 and 7800 series launched with GT and GTX suffixes, G80 will do away with the GT suffix. Instead, NVIDIA has revived the GTS suffix for its second fastest graphics product—a suffix that hasn’t been used since the GeForce 2 days.

NVIDIA’s GeForce 8800GTX will be the flagship product. The core will be factory clocked at 575 MHz. All GeForce 8800GTX cards will be equipped with 768MB of GDDR3 memory clocked at 900 MHz. The GeForce 8800GTX will also have a 384-bit memory interface, delivering 86.4GB/s of memory bandwidth. GeForce 8800GTX graphics cards are equipped with 128 unified shaders clocked at 1350 MHz. The theoretical texture fill-rate is around 38.4 billion texels per second.
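The quoted bandwidth figure follows directly from the memory clock and bus width; a quick sketch of the arithmetic (the factor of 2 is GDDR3's double data rate):

```python
# Back-of-the-envelope check of the GeForce 8800GTX bandwidth figure.
mem_clock_mhz = 900        # GDDR3 memory clock from the article
bus_width_bits = 384       # 8800GTX memory interface width

# GDDR3 is double data rate: two transfers per clock cycle.
effective_mts = mem_clock_mhz * 2                       # mega-transfers/sec
bandwidth_gbs = effective_mts * bus_width_bits / 8 / 1000

print(f"{bandwidth_gbs:.1f} GB/s")  # 86.4 GB/s
```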

Slotted right below the GeForce 8800GTX is the slightly cut-down GeForce 8800GTS. These graphics cards will have a G80 GPU clocked at a slower 500 MHz. The memory configuration for GeForce 8800GTS cards differs slightly from the GeForce 8800GTX: they will be equipped with 640MB of GDDR3 memory clocked at 800 MHz. The memory interface is reduced to 320-bit, for an overall memory bandwidth of 64GB/s. GeForce 8800GTS cards also carry fewer unified shaders: 96, clocked at 1200 MHz.

Additionally, GeForce 8800GTX and 8800GTS products are HDCP compliant with support for dual dual-link DVI, VIVO and HDTV outputs. All cards will have dual-slot coolers, too. Expect GeForce 8800GTX and 8800GTS products to launch the second week of November 2006. This will be a hard launch, as most manufacturers should have boards ready now.

Power requirements for the G80 were detailed in an earlier DailyTech article.


RE: strange numbers
By Furen on 10/5/2006 2:12:09 AM , Rating: -1
Refreshing, but not very useful. I think developers will continue aiming at 128/256/512MB and 1GB VRAM sizes, since those are the conventional memory configurations. I hope there is a good reason why NVIDIA decided to go this route (maybe it needs two channels dedicated to something specific, for example) and that it's not just a way to cover up not having GDDR4 support built into its memory controllers.

Since even current ATI video cards DO support GDDR4, I believe ATI will stick with a 256-bit memory width on GDDR4 for its high-end part, so I doubt NVIDIA's bandwidth advantage will be all that great, if it exists at all.

RE: strange numbers
By otispunkmeyer on 10/5/2006 3:50:41 AM , Rating: 4
I'm willing to bet that GDDR4 support is in there somewhere; they're probably saving it for a refresh product, a la the X1950XTX.

The GDDR3 is only 100MHz slower than the stuff the X1950 uses, and being on a wider bus just makes it better.

But yeah, the memory sizes are kind of odd. From some of the rumours I've read, my shot in the dark is that the memory is split up for different uses, i.e. 512MB on a 256-bit bus used as you would expect, with the other 128/256MB on a 64/128-bit bus used for something else.

Who knows, though; they've been super tight-lipped about this card. We'll just have to wait and see.

RE: strange numbers
By Bladen on 10/5/2006 4:53:06 AM , Rating: 1
Are you hinting at physics processing?

RE: strange numbers
By Bladen on 10/5/2006 4:57:55 AM , Rating: 3
RE: strange numbers
By defter on 10/5/2006 5:04:27 AM , Rating: 3
but yeah the memory sizes are kinda odd.

There is nothing odd about the memory sizes. The maximum width of a single DDR memory chip is 32 bits, so a 384-bit memory bus needs 12 memory chips. It is impossible to achieve 512MB or 1024MB of memory capacity with 12 equally sized memory chips.
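The chip-count argument can be verified with a quick sketch (assuming 32-bit-wide chips and the power-of-two densities typical for GDDR3):

```python
# With a 384-bit bus and 32-bit-wide chips, exactly 12 chips are needed.
chip_count = 384 // 32                 # 12 chips
densities_mb = [16, 32, 64, 128]       # typical power-of-two chip densities

capacities = [chip_count * d for d in densities_mb]
print(capacities)                      # [192, 384, 768, 1536]

# 512MB and 1024MB never appear: neither divides evenly among 12 chips.
print(512 % chip_count, 1024 % chip_count)   # 8 4
```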

RE: strange numbers
By Korvon on 10/5/2006 11:16:00 AM , Rating: 2
Two different memory sets? One 512MB set with a 256-bit interface plus one 256MB set with a 128-bit interface = 768MB with a 384-bit interface.

RE: strange numbers
By Tyler 86 on 10/6/2006 2:40:47 AM , Rating: 3

640MB total -> 320b width @ 16b per chip with 20x 32MB chips
640MB total -> 320b width @ 32b per chip with 10x 64MB chips <--

768MB total -> 384b width @ 16b per chip with 24x 32MB chips
768MB total -> 384b width @ 32b per chip with 12x 64MB chips <--

RE: strange numbers
By Tyler 86 on 10/6/2006 2:55:05 AM , Rating: 2
... a note on bit width: DDR transfers twice per clock, so each 32b chip delivers an effective 64b of data per clock cycle, for the 10x 64MB configuration...

RE: strange numbers
By defter on 10/6/2006 4:09:11 AM , Rating: 3
One 512MB set with a 256-bit interface plus one 256MB set with a 128-bit interface = 768MB with a 384-bit interface.

That wouldn't make any sense. You need the same memory capacity per channel in order to use memory bandwidth effectively. For the same reason, performance suffers if you use one 512MB and one 1GB DIMM on a dual-channel motherboard.

RE: strange numbers
By Furen on 10/5/2006 5:28:50 PM , Rating: 3
Yeah, this GDDR3 is high-end stuff, though, while 1GHz GDDR4 is the low-end stuff. Samsung claims it can produce 1.6GHz GDDR4; granted, that is likely the extreme high end, but GDDR4 would allow you to get close to the 80GB/s bandwidth on a 256-bit bus, which makes PCB production a hell of a lot cheaper. Of course, it could be that NVIDIA actually needs lower-latency GDDR3 to achieve its performance goals, in which case going with 6 channels makes sense.

RE: strange numbers
By coldpower27 on 10/5/2006 10:38:01 PM , Rating: 2
Well, this is just another method instead of moving to another memory type like GDDR4. It accomplishes the same goal as moving to GDDR4, which is to provide more memory bandwidth in the end.

If ATI indeed stays with a 256-bit memory interface, it would require 2.7GHz GDDR4 to match NVIDIA's memory bandwidth. That is possible within GDDR4's lifetime, but it would likely require the 2.8GHz grade of GDDR4, which to my knowledge isn't a shipping product at this time.
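The 2.7GHz figure lines up with the 8800GTX's bandwidth; a sketch of the reverse calculation (treating "2.7GHz" as the effective per-pin data rate, the way GDDR4 speeds are usually quoted):

```python
# What effective GDDR4 data rate would a 256-bit card need to match
# the 8800GTX's 86.4 GB/s?
target_gbs = 86.4          # 8800GTX bandwidth (384-bit @ 900MHz GDDR3)
bus_bytes = 256 / 8        # 256-bit interface, in bytes per transfer

required_gtps = target_gbs / bus_bytes   # giga-transfers per second
print(f"{required_gtps:.1f} GT/s")       # 2.7 GT/s -- the "2.7GHz" grade
```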

Given that current rumors point to ATI having a 500+ million transistor product on the 80nm process, a 384-bit interface is not the way to go given the reduced die size. So they opted for memory with higher clocks instead.

There is nothing inherently wrong per se with 640MB or 768MB of RAM; these capacities can be achieved, and were achieved, with main memory. The smallest chip density NVIDIA seems to be using at the moment is 64MB.

Given that NVIDIA has had a 6-quad architecture with G70, and ATI has had a 3:1 pixel shader to TMU ratio, I don't see a problem with 6 x 64-bit memory controllers.

As well, who knows: since they've increased memory bandwidth to the maximum they can with GDDR3, the next step is basically GDDR4.

Related Articles
Power and the NVIDIA "G80"
October 4, 2006, 11:56 PM
