
DirectX 10 compliant GeForce 8800GTX and 8800GTS headed your way

DailyTech's hands-on with the GeForce 8800 series continues with more information about the GPU and the retail boards. The new NVIDIA graphics architecture will be fully compatible with Microsoft's upcoming DirectX 10 API, including support for Shader Model 4.0, and represents the company's eighth-generation GPU in the GeForce family.

NVIDIA's G80-based products will carry the GeForce 8800 name. While the 7900 and 7800 series launched with GT and GTX suffixes, G80 does away with the GT suffix. Instead, NVIDIA has revived the GTS suffix for its second-fastest graphics product, a suffix that hasn't been used since the GeForce2 days.

NVIDIA’s GeForce 8800GTX will be the flagship product. The core clock will be factory clocked at 575 MHz. All GeForce 8800GTX cards will be equipped with 768MB of GDDR3 memory, to be clocked at 900 MHz. The GeForce 8800GTX  will also have a 384-bit memory interface and deliver 86GB/second of memory bandwidth. GeForce 8800GTX graphics cards are equipped with 128 unified shaders clocked at 1350 MHz. The theoretical texture fill-rate is around 38.4 billion pixels per second.

Slotted right below the GeForce 8800GTX is the slightly cut-down GeForce 8800GTS. These graphics cards will have a G80 GPU clocked at a slower 500 MHz. The memory configuration for GeForce 8800GTS cards differs slightly from the GeForce 8800GTX. GeForce 8800GTS cards will be equipped with 640MB of GDDR3 graphics memory clocked at 900 MHz. The memory interface is reduced to 320-bit and overall memory bandwidth is 64GB/second. The GeForce 8800GTS also has fewer unified shaders: 96, clocked at 1200 MHz.
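
As a rough sanity check, the quoted bandwidth figure falls straight out of the bus width and the doubled GDDR3 data rate. A minimal illustrative sketch in Python:

    def gddr3_bandwidth_gbps(bus_width_bits, mem_clock_mhz):
        """Peak memory bandwidth in GB/s for a double-data-rate bus."""
        transfers_per_sec = mem_clock_mhz * 1e6 * 2   # GDDR3 moves data on both clock edges
        bytes_per_transfer = bus_width_bits / 8
        return transfers_per_sec * bytes_per_transfer / 1e9

    # GeForce 8800GTX: 384-bit bus, 900 MHz GDDR3
    print(gddr3_bandwidth_gbps(384, 900))   # ~86.4 GB/s, matching the figure quoted above

The same formula applies to the 8800GTS's 320-bit bus.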

Additionally, GeForce 8800GTX and 8800GTS products are HDCP compliant with support for dual dual-link DVI, VIVO and HDTV outputs. All cards will have dual-slot coolers. Expect GeForce 8800GTX and 8800GTS products to launch in the second week of November 2006. This will be a hard launch, as most manufacturers should have boards ready now.

Power requirements for the G80 were detailed in an earlier DailyTech article.


Comments

strange numbers
By Loser on 10/5/2006 1:34:10 AM , Rating: 2
320 bit? 640 mb
heh refreshing from the usual 128 256 512 1024 :D




RE: strange numbers
By Furen on 10/5/06, Rating: -1
RE: strange numbers
By otispunkmeyer on 10/5/2006 3:50:41 AM , Rating: 4
I'm willing to bet that GDDR4 support is in there somewhere; they're probably saving it for a refresh product, a la the X1950XTX.

The GDDR3 is only 100MHz slower than the stuff the X1950 uses, and being on a wider bus just makes it better.

But yeah, the memory sizes are kinda odd. From some of the rumours I've read, my shot in the dark is that the memory is split up for different uses, i.e. 512MB on a 256-bit bus used as you would expect, with the other 128/256MB on a 64/128-bit bus used for something else.

Who knows though, they've been super tight-lipped about this card. Just have to wait and see.


RE: strange numbers
By Bladen on 10/5/2006 4:53:06 AM , Rating: 1
Are you hinting at physics processing?


RE: strange numbers
By Bladen on 10/5/2006 4:57:55 AM , Rating: 3
RE: strange numbers
By defter on 10/5/2006 5:04:27 AM , Rating: 3
quote:
but yeah the memory sizes are kinda odd.


There is nothing odd about the memory sizes. The maximum width of a single DDR memory chip is 32 bits. With a 384-bit memory bus one would need 12 memory chips. It is impossible to achieve 512MB or 1024MB of memory capacity with 12 equal-sized memory chips.
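
To make that point concrete, a quick illustrative sketch (Python); the per-chip densities below are assumed common GDDR3 sizes for the example, and neither 512MB nor 1024MB ever shows up:

    # Twelve 32-bit chips are needed to build a 384-bit bus (384 / 32 = 12).
    chips = 384 // 32
    for chip_mb in (16, 32, 64, 128):          # assumed per-chip densities
        total = chips * chip_mb
        print(f"{chips} x {chip_mb}MB chips -> {total}MB total")
    # Output: 192MB, 384MB, 768MB, 1536MB -- 512MB and 1024MB are unreachable.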


RE: strange numbers
By Korvon on 10/5/2006 11:16:00 AM , Rating: 2
Two different memory sets? One 512MB with a 256-bit interface plus one 256MB with a 128-bit interface = 768MB with a 320-bit interface.


RE: strange numbers
By Tyler 86 on 10/6/2006 2:40:47 AM , Rating: 3
Possibilities;

640MB total -> 320b width @ 32b width per 20x 32MB chips
640MB total -> 320b width @ 64b width per 10x 64MB chips <--

768MB total -> 384b width @ 32b width per 24x 32MB chips
768MB total -> 384b width @ 64b width per 12x 64MB chips <--


RE: strange numbers
By Tyler 86 on 10/6/2006 2:55:05 AM , Rating: 2
... on bit width; [32b x2] -> 64b (DoubleDataRate) per 10x 64MB...


RE: strange numbers
By defter on 10/6/2006 4:09:11 AM , Rating: 3
quote:
One 512MB with 256bit interface /w one 256MB with a 128bit interface = 768MB /w 320bit interface.


That wouldn't make any sense. You need to have the same memory capacity per channel in order to use memory bandwidth effectively. For the same reason, performance will suffer if you use one 512MB and one 1GB DIMM with a dual-channel motherboard.


RE: strange numbers
By Furen on 10/5/2006 5:28:50 PM , Rating: 3
Yeah, this GDDR3 is high-end stuff, though, and the 1GHz GDDR4 is the low-end stuff. Samsung claims it can produce 1.6GHz GDDR4; granted, this is likely the extreme high end, but GDDR4 would allow you to get close to 80GB/s of bandwidth on a 256-bit bus, which makes PCB production a hell of a lot cheaper. Of course, it could be that Nvidia actually needs lower-latency GDDR3 to achieve its performance goals, in which case going with 6 channels makes sense.


RE: strange numbers
By coldpower27 on 10/5/2006 10:38:01 PM , Rating: 2
Well, this is just another method instead of going to another memory type like GDDR4. It accomplishes the same goal as moving to GDDR4, which is to provide more memory bandwidth in the end.

If ATI indeed stays with a 256-bit memory interface, they would require 2.7GHz GDDR4 to match Nvidia's memory bandwidth levels. That is possible in GDDR4's lifetime, but it would likely require the 2.8GHz grade of GDDR4, which to my knowledge isn't a shipping product at this time.

Given that current rumors point to ATI having a 500+ million transistor product on the 80nm process, a 384-bit interface is not the way to go given the reduced die size. So they opted for memory with higher clocks.

There is nothing inherently wrong per se with 640MB or 768MB of RAM; these levels can be achieved and were achieved with main memory. The lowest common denominator for Nvidia currently seems to be 64MB chips.

Given that Nvidia has had a 6-quad architecture with G70, and ATI has had a 3:1 ratio of pixel shaders to TMUs, I don't see a problem with 6 x 64-bit memory controllers.

As well, who knows: since they have increased the memory bandwidth to the max they can with GDDR3, basically the next step is GDDR4.


Typo
By Chillin1248 on 10/5/2006 1:02:36 AM , Rating: 4
This has to be:

quote:
GeForce 8800GTX graphics cards are equipped with 128 unified shaders clocked at 1350 MHz .


Something about this sentence just doesn't add up. Because if this is true, then dear god.

-------
Chillin




RE: Typo
By KristopherKubicki (blog) on 10/5/2006 1:05:16 AM , Rating: 3
NVIDIA defines the 8800GTX with 128 "stream processors." Each shader has a clock of 1350MHz, but the core clock is 575MHz. We will detail more on the architecture later today.


RE: Typo
By soydios on 10/5/2006 1:30:25 AM , Rating: 3
128 individual shaders, or even partial shaders, is still impressive. We're awaiting more details on this with bated breath.

Thanks for the info.


RE: Typo
By therealnickdanger on 10/5/2006 8:22:46 AM , Rating: 2
Yeah, this seems to be too amazing for my brain this early in the morning. I knew DX10 would bring some interesting improvements, but this is nucking futs! I am salivating, I'm not saying it to be funny. I actually dribbled on myself. I can't wait to see benchies along with the new R600. As someone else said below, I'll probably wait for a die shrink refresh before buying one...

Awesome.


RE: Typo
By Sunbird on 10/5/2006 2:40:25 AM , Rating: 1
So I guess we will find out about the number of pixel pipes later on?


RE: Typo
By Shining Arcanine on 10/5/2006 4:59:54 PM , Rating: 2
There are no pixel pipelines. DirectX 10 graphics cards only have unified shader pipelines, which do pixel, vertex and geometry processing, in whatever proportion is required by a game on demand.


RE: Typo
By oddity21 on 10/5/2006 4:01:31 AM , Rating: 2
Can't wait for the architecture details. The specs posted here show that these cards are unlike anything we've ever seen.


RE: Typo
By Rolphus on 10/5/2006 4:49:05 AM , Rating: 4
I'm guessing (and I could well be wrong), that 128 stream processors means it can process 32 vec4, or 43-ish (well, 42 and two-thirds) vec3 instructions per clock. That would make sense to me in terms of not insanely increasing transistor count but still providing a huge boost in the flexibility of the pipeline itself.

Interesting if that's the case - moving away from an SIMD sort of approach isn't something I expected.
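
A back-of-the-envelope sketch of the packing described above (Python, purely arithmetic); the vec4/vec3 figures are just 128 scalar lanes divided by the vector width:

    scalar_lanes = 128                      # 128 scalar "stream processors"
    for width, name in ((4, "vec4"), (3, "vec3")):
        print(f"{name}: {scalar_lanes / width:.2f} instructions per clock")
    # vec4: 32.00, vec3: 42.67 -- the numbers guessed at above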


RE: Typo
By Narutoyasha76 on 10/5/06, Rating: 0
RE: Typo
By akugami on 10/5/06, Rating: 0
RE: Typo
By wingless on 10/5/06, Rating: 0
RE: Typo
By GhandiInstinct on 10/5/2006 3:56:47 PM , Rating: 2
Yes, dear god, Al Gore's new campaign slogan: "Stop Nvidia and its G80 assault on our planet!"


RE: Typo
By theteamaqua on 10/5/2006 6:41:56 PM , Rating: 1
The core will be factory clocked at 575 MHz. GeForce 8800GTX graphics cards are equipped with 128 unified shaders clocked at 1350 MHz.

I find these two sentences weird, I dunno. Aren't the shaders within the core? So shouldn't they have the same frequency as the core?


RE: Typo
By Tyler 86 on 10/6/2006 2:32:44 AM , Rating: 2
The generic framebuffer, 2D, console, and other non-3D or low-priority 3D rendering functions, and the primary memory controller (which probably has its own clock speed, or performs multiple operations per clock to compensate for clock speed), are probably in the core... and tacked onto or into that, the shaders sit on a separate clock... maybe they have their own memory controllers... too soon to tell?


Tough times for ATI
By yusuf on 10/5/2006 2:00:40 AM , Rating: 2
If ATI is indeed going to be 3 months late in launching the R600, then they will be slaughtered by Nvidia for another quarter.
Maybe Nvidia will turn out to be the sole high-end graphics provider in the industry after this AMD-ATI merger (a la 3dfx).




RE: Tough times for ATI
By Mortal on 10/5/06, Rating: 0
RE: Tough times for ATI
By Sharky974 on 10/5/06, Rating: 0
RE: Tough times for ATI
By Sharky974 on 10/5/06, Rating: 0
RE: Tough times for ATI
By Sharky974 on 10/5/2006 4:48:29 AM , Rating: 2
This makes sense of "stream processor" terminology as well.


RE: Tough times for ATI
By S3anister on 10/5/2006 2:48:44 PM , Rating: 3
Three -1s in a row, lol.

I don't think I've seen that until now on DT.

And I don't really think it will be hard times for ATi; they'll most likely just work a little harder, a little longer, to get something to beat nVidia. But then again nVidia will have the market share for DX10 cards... for a little while.


RE: Tough times for ATI
By Mortal on 10/5/2006 5:45:47 PM , Rating: 2
Don't really see why mine is at -1 yet the original comment isn't, considering it's a completely ludicrous idea. Oh well.

Anyone following the graphics industry can see that discrete video cards could go the way of the dodo if Intel and AMD push with their new motherboard transport-link architectures.


RE: Tough times for ATI
By Sharky974 on 10/5/06, Rating: -1
RE: Tough times for ATI
By skroh on 10/5/2006 8:58:53 PM , Rating: 1
I'm not sure why "nVidia fanbots" would downrate your post, given that the last two lines of the quote, if read carefully, imply that the G80 design is superior and will scale better going forward. Or didn't you notice that?


RE: Tough times for ATI
By akugami on 10/5/06, Rating: 0
RE: Tough times for ATI
By Chillin1248 on 10/6/2006 1:47:57 AM , Rating: 3
8800 GTS will be $449-$499 MSRP.
8800 GTX will be $649 MSRP

-------
Chillin


RE: Tough times for ATI
By Tyler 86 on 10/6/2006 2:46:52 AM , Rating: 3
Great googly moogly!


RE: Tough times for ATI
By Sharky974 on 10/12/2006 7:33:34 AM , Rating: 1
How many? Probably as many times as I get modded down.


Here is what he said (in the know)


quote:
Speaking of a DX10 ASIC such as G80 per se, with numerous arrays of MIMD 1D ALUs, it could be seen as:

G80: 128 x 1 (MIMD 1D) x 2 (double-clock) = 256

R600: (4+1D) 5 x 64 = 320

Besides, R600 has better raw performance whilst G80 has better arrays of ALUs, sufficient for better utilization and hence a more optimal future.



I will repost this every time I am modded down by nvidia fanbots.

I hate the modding systems on boards. It's rule of the ignorant masses.


By KristopherKubicki (blog) on 10/13/2006 4:13:58 PM , Rating: 2
Sharky,

Although I do not think you deserve to be modded down, you're clearly violating the intended use of the comment system. If you keep it up you're going to take a vacation.

Kristopher


RE: Tough times for ATI
By Sharky974 on 10/7/06, Rating: -1
RE: Tough times for ATI
By Chillin1248 on 10/7/2006 8:25:40 AM , Rating: 3
How many times Mr. Sharky are you going to post the same statement which ironically has no significance to the situation at hand?

Not that I know enough about the R600 or even G80 architecture to agree or disagree with your statement, but just cool it down on the last sentence will you?

-------
Chillin


RE: Tough times for ATI
By Scrogneugneu on 10/8/2006 3:51:51 AM , Rating: 2
I would say that continually reposting the same message that was modded down is a kind of trick to get around the system, therefore it should be considered illegal.

Hence, you should get a warning, and a ban on failure to stop.


RE: Tough times for ATI
By NullSubroutine on 10/20/2006 8:42:06 AM , Rating: 1
down with the system!


PS3Xbox 360 outdated already
By Xavian on 10/5/2006 8:58:30 AM , Rating: 3
And once again consoles get overtaken in technology by PCs; the 8800GTX/8800GTS will blow the RSX (Xenon too) out of the water.




RE: PS3Xbox 360 outdated already
By drebo on 10/5/2006 10:43:24 AM , Rating: 5
What did you expect?

Your little $400 box of crap to surpass systems people spend several thousands of dollars on?

Consoles and PCs are two entirely different things, and they were never meant to be compared directly. Something Sony obviously hasn't figured out yet.


RE: PS3Xbox 360 outdated already
By Xavian on 10/5/2006 1:23:52 PM , Rating: 2
Whoa, a little aggressive there. I'm agreeing with your point; I was trying to say Microsoft and Sony shouldn't really hype up 'better than PC' graphics, because we all know cards like this come out soon after the console and knock the puny consoles for six, not to mention that only 6-10 months after the launch you can bet these graphics cards will drop in price considerably.


RE: PS3Xbox 360 outdated already
By Sharky974 on 10/5/2006 6:08:04 PM , Rating: 3
As I posted on Hard:

At least the Xbox 360 is a year old... what's the PS3's excuse?! It's not even freaking out yet.

BTW, Xenos (and RSX/G70 really) doesn't get trounced as badly as you think if my theory is correct. The 128 ALUs in G80 only process one component per clock; the Xenos ALUs can process 5. Of course, running at 1350 MHz with 128 of them, yeah, G80 is still going to pound those into rubble, but what idiot didn't expect that.

Basically everybody seems to be looking at these 128 units as shader pipelines or something... which they aren't. They're more like mini-shaders, to coin a useless generic term (just like the useless description of R580 or Xenos as having 48 "shaders").


RE: PS3Xbox 360 outdated already
By Xavian on 10/8/2006 11:24:37 PM , Rating: 3
Well, think of this: Xenon has 48 shader processors, the G80 has 128 shader processors, and Xenon has only 16 'real' pixel pipelines (even though 'pipelines' mean almost nothing nowadays); the 7800 series/G70 (aka RSX) is already an outdated design.

Just looking at the sheer memory bandwidth allocated to this card means it's going to be a monster. Basically, what I'm saying is, console makers should NEVER attempt to compete with PCs in graphics power and versatility, because

A) they'll never have better graphics in the long run
B) graphics cards will come out every year (or half-year) that will trounce the console before it's even released (let alone at the end of its lifespan)
C) consoles will never be versatile, with DRM embedded everywhere on a proprietary OS, with proprietary APIs and proprietary hardware.

Consoles should stick to what made them successful in the first place, and that is playing games that are often too clumsy to be played on the PC, and playing those games cheaply.


why odd?
By ForumMaster on 10/5/2006 1:15:00 AM , Rating: 2
why is there an odd amount of memory? aren't equal amounts better? 1GB and a 512bit link? wonder who's going to buy these monsters.




RE: why odd?
By Knish on 10/5/2006 1:19:00 AM , Rating: 1
It has to do with the specs for GDDR4


RE: why odd?
By Doormat on 10/5/2006 1:34:28 AM , Rating: 4
768MB is a round amount: for a 384-bit memory interface that's six 64-bit channels, each channel addressing 128MB of RAM. The whole six-channels thing is kinda odd, but they needed more bandwidth and a 512-bit memory interface is too expensive.


RE: why odd?
By coldpower27 on 10/5/2006 8:37:23 AM , Rating: 2
It's the first time in a while that we have had something that isn't 2 to the power of x. Remember when we had 24-bit precision for Pixel Shader 2.0 on the R300 series? Well, the same kind of thing applies here: it's doable, just not as mathematically elegant per se.


RE: why odd?
By Lonyo on 10/5/2006 10:59:14 AM , Rating: 2
XGI were going to make a 192-bit product for the mainstream/low-end market sectors, but it got cancelled.
It's no surprise that we are seeing an odd number of bits; it's been (IMO) a long time coming, although I expect it to be seen first in lower-end/mid-range cards.


Woah.
By splines on 10/5/2006 1:02:41 AM , Rating: 2
That's a huge ramp-up in memory. Expecting huge textures in DX10 games, maybe?




RE: Woah.
By Chillin1248 on 10/5/2006 1:05:05 AM , Rating: 3
Perhaps an abundance of GDDR3 memory could be the reason, though I would think that GDDR4 would cost around the same using the 80nm process.


-------
Chillin


RE: Woah.
By ToySoldier on 10/5/06, Rating: 0
RE: Woah.
By gersson on 10/5/06, Rating: 0
RE: Woah.
By shamgar03 on 10/5/2006 8:38:55 AM , Rating: 3
What, you think they won't make a G80 for laptops?


RE: Woah.
By VooDooAddict on 10/6/2006 11:39:13 AM , Rating: 2
I'm sure a nice 7900GS in a laptop could do you for NWN2.


holy crap
By Samus on 10/5/2006 2:20:38 AM , Rating: 2
While reading that I thought this was some sort of joke... like someone saying Intel is releasing a 10GHz quad-core processor, to be on sale in November, that will be six times faster than AMD's fastest processor.

Because that's how it reads. These cards are nuts. Talk about redefining the wheel...




RE: holy crap
By AnotherGuy on 10/5/2006 2:41:38 AM , Rating: 2
I don't get what 128 shaders means. Like, we used to talk about pixel pipelines and now there's no mention of those...? Anything I'm missing here? Don't tell me there are gonna be 128 pixel pipelines in there...


RE: holy crap
By arturnowp on 10/5/2006 4:11:04 AM , Rating: 2
There's been no such thing as a pixel pipeline since the GF2.


RE: holy crap
By Tyler 86 on 10/6/2006 2:52:23 AM , Rating: 3
Think of 'em like little cores on a processor... Hell, that's what they are now, come DX10...

When is someone going to write a frickin' operating system that runs solely on GPUs? Taking bets... :)


AGP version...
By clayclws on 10/6/2006 5:23:12 AM , Rating: 2
I am not planning on upgrading my whole computer...just need a Direct3D/DirectX 10 card. I guess I will wait till other versions come out and the price war between ATI and NVIDIA ensues.

Now, is there any AGP version? NVIDIA reps, if you are reading this, AGP is still alive...dying soon, though.




RE: AGP version...
By GoatMonkey on 10/6/2006 8:22:02 AM , Rating: 2
I think it would be a good idea also. But there are very high odds that if they do make an AGP version it will be a low end variation on this card. If they could do something to just make it run the DirectX 10 features it could be enough to extend the life of some old computers into running Vista.


RE: AGP version...
By clayclws on 10/6/2006 8:58:09 PM , Rating: 2
I'm satisfied with visuals that are good enough but not mind blowing, like Crysis. Maybe C&C3 is enough. But I just need Direct3D/DirectX 10 support for whatever games/application that will utilize it...so it doesn't matter as long as it has support and good enough raw power.

And...ATI, come on and make an announcement already!!!


RE: AGP version...
By glennpratt on 10/7/2006 9:37:34 PM , Rating: 3
Vista != DirectX 10. Vista will run fine on DX9 cards.


Some explaining
By Rza79 on 10/5/2006 3:23:45 PM , Rating: 5
Some seem to be confused about the 384/320 bit memory interface:
The 7900 has eight 32bit memory controllers resulting in a 256bit bus.
The 8800GTX will have 12 controllers and the GTS 10. You will see 12 memory chips on the GTX and 10 on the GTS.
So it's not 256+128 or 256+64 but 32*12 and 32*10.

Unified shaders:
128 stream processors don't translate to 128 shader pipes. Depending on the shader instruction, anywhere from one to an unknown number of stream processors will be needed. It will depend on how efficient each of these stream processors is, but that's still unknown. You can be sure that it won't be as fast as 128 7900GTX shader pipes, even running at that frequency.

It doesn't say anywhere how many pixel/texel pipes or ROPs it will have.




RE: Some explaining
By Dactyl on 10/5/2006 4:03:11 PM , Rating: 3
You and me both. I still have no idea what a "stream processor" is compared to anything else (I could make an uneducated guess, but that's pretty lame).

I expect THG or Anandtech to come out with an article soon explaining how the G80's architecture is different from everything else. They should explain the stream processors in as much detail as they are allowed.


unfied shaders not that great performance wise
By zenman17 on 10/5/06, Rating: 0
By Chillin1248 on 10/5/2006 5:00:22 PM , Rating: 4
Erm... a unified shader has the same power as a comparative fixed shader. However, unified shaders can switch between geometry, pixel and vertex work depending on what the program needs.

Also, where did you find out about the RSX specs? Last I heard they were not finalized (all we have seen so far is pre-production units, no clue on final units).

-------
Chillin
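
To make the fixed-versus-unified tradeoff above concrete, here is a toy Python calculation; the unit counts and workload split are made-up illustrative numbers, not actual GPU figures:

    def frame_time(pixel_work, vertex_work, pixel_units, vertex_units, unified):
        """Cycles needed to finish one frame; work is measured in unit-cycles."""
        if unified:
            # A unified pool: any unit can take any job, so the whole pool
            # drains the combined queue together.
            return (pixel_work + vertex_work) / (pixel_units + vertex_units)
        # Fixed split: each pool only drains its own queue, so the busier
        # pool gates the frame while the other one sits partly idle.
        return max(pixel_work / pixel_units, vertex_work / vertex_units)

    # Hypothetical example: 24 pixel + 8 vertex units vs. a unified pool of 32,
    # on a very pixel-heavy frame (90% of the work is pixel shading).
    pixel_work, vertex_work = 90.0, 10.0
    print("fixed  :", frame_time(pixel_work, vertex_work, 24, 8, unified=False))  # 3.75
    print("unified:", frame_time(pixel_work, vertex_work, 24, 8, unified=True))   # 3.125

On a lopsided workload the unified pool finishes sooner simply because nothing sits idle.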


By Fenixgoon on 10/5/2006 6:37:06 PM , Rating: 2
since when did he have a ps3 to test against a 360? ROFL.


By Tyler 86 on 10/6/2006 9:50:50 PM , Rating: 2
Yeah, what benchmark is that? The prerendered Killzone scenes?


Typo in GTS memory specs?
By defter on 10/5/2006 5:09:39 AM , Rating: 4
quote:
GeForce 8800GTS cards will be equipped with 640MB of GDDR3 graphics memory clocked at 900 MHz. The memory interface is reduced to 320-bit and overall memory bandwidth is 64GB/second.


320 bit bus @ 900MHz = 72GB/s
320 bit bus @ 800MHz = 64GB/s

It would make sense for the lower-end GTS model to have an 800MHz memory clock (and thus 64GB/s of memory bandwidth).




RE: Typo in GTS memory specs?
By djtodd on 10/5/2006 9:17:39 AM , Rating: 1
The GTX uses a 384-bit bus, not 320.


RE: Typo in GTS memory specs?
By johnsonx on 10/5/2006 10:28:55 AM , Rating: 2
He's not talking about the GTX. He's saying the stated bus width times the stated clock of the GTS doesn't equal the stated bandwidth.


By kilkennycat on 10/5/2006 2:14:07 PM , Rating: 2
Expected, since the Dx10 cards were scheduled to launch in November AND it was expected that one or more of the new family would equal or exceed the performance of the 7950GX2.
Prices of the new cards have not emerged yet, but I would expect the GTS to initially list near the current price of the 7950GX2 ~ $599, with the GTX in the $700-$750 range. Also that the 7950GX2 would likely cease production a couple of months after volume shipment of the 8xxxGTS and GTX.


By Dactyl on 10/5/2006 3:36:50 PM , Rating: 2
No, not expected. ATi's next-gen card is months away, and the rumors leaking out about the G80 suggested it was nothing special (no unified shaders, no DX10, nothing about stream processors). Just about the only "correct" rumors are that it would use 768MB of RAM, and that it would be 90nm.

Nvidia did a great job of keeping information about the G80 secret. THG dropped a bomb on all of us when they broke the story. It's huge.


By Chillin1248 on 10/5/2006 4:56:45 PM , Rating: 3
Actually, VR-zone.com leaked the details [about the G80] a while ago; however, they were quickly taken down (they also accidentally leaked the X1950 XTX scores a day before the NDA lifted).

-------
Chillin


8800
By adam92682 on 10/5/2006 1:32:37 PM , Rating: 3
Maybe Nvidia will start using the pro suffix. Then we could have a Nvidia 9800 Pro next year




RE: 8800
By S3anister on 10/5/2006 2:26:44 PM , Rating: 2
Oh man, that reminds me of the days of my old GeForce 2 Pro.


RE: 8800
By granulated on 10/5/2006 4:53:42 PM , Rating: 2
hehe


New AA mode
By sutyi on 10/5/2006 11:59:03 AM , Rating: 2
Hi there. Does anyone have a clue what the VC part in the new VCAA mode stands for? :)




RE: New AA mode
By thecoolnessrune on 10/6/2006 9:05:13 PM , Rating: 2
Very Cool????


RE: New AA mode
By Tyler 86 on 10/6/2006 10:03:09 PM , Rating: 2
All I've seen is 'Very Correct' and 'Very Cool'...

Vertical... Variable... Versatile... Vector...
Corner... Contrast... Cast... Compound... Coalescing...

Anyone able to nail this one down?


sound a lot more promising than previously
By R3MF on 10/5/2006 7:13:39 AM , Rating: 3
I do prefer a unified approach because I approve of GP-GPU usage, and the more generic the processor the better.

I wonder if the variable 128MB+64bit vs 256MB+64bit memory that leads to 640MB+320bit vs 768MB+128bit memory is a result of separating IQ functions like AA from general resource usage.

With the GTS you get 8x super-duper AA for FREE
With the GTX you get 16x super-duper AA for FREE

It doesn't make a lot of sense if it's for something like the geometry shader portion of the GPU, because you would basically be telling the GTS user that they should really only consider their new card a very fast DX9 card, as it's a bit crippled in DX10...

Regardless, I will await the 65nm refresh for a less power-hungry version.
And hopefully a return to lower-clocked but fully functional GT versions, as I hate buying something with crippled hardware.




By Tyler 86 on 10/6/2006 9:58:01 PM , Rating: 2
Man, I can't make sense of your numbers...
And I don't think anyone else can either...

It's possible they've got those odd traces going somewhere other than the core, but my bet is they're all going to the core.
There's no reason why not.

AA isn't free without a dedicated AA unit, and even then, free quality enhancement comes at a price premium.

The 'free' AA of the Xbox 360 works with a set of maximum framebuffer resolutions and perspectives, something like 1080p at 16:10 or 16:9? So, some 'smart' memory is able to perform the antialiasing on the framebuffer within 10MB...
One day that might come over to PCs, but it isn't here yet...


EXPLANATION ABOUT THE SPECS !!!
By East17 on 10/7/2006 3:42:25 PM , Rating: 1
Hello everybody,

It is quite simple: 1350MHz for the shaders because the shading unit will function at a higher speed than the rest of the GPU.

The thing could be like the P4 NetBurst architecture, where the ALUs worked at twice the speed of the rest of the CPU.

BUT... since there are 2 buses for the memory... there seems to be one bus for the shading memory and another for the classic GPU.

What I'm trying to say is that the memory is separated between the GPU and the shading unit.

Maybe the shading unit is not even a part of the GPU... it's separate on the PCB... it would be simpler. A 128-bit bus for the shading unit and a 256-bit bus for the GPU.

So the memory would be separated also: 512MB for the GPU and 256MB for the shader processor.

What do you think?





By scrapsma54 on 10/30/2006 5:08:35 PM , Rating: 2
It's more likely to be a PPU: Quantum (http://www.dailytech.com/Article.aspx?newsid=4444&...).
ATI is doing it too.


By scrapsma54 on 10/30/2006 5:32:45 PM , Rating: 2
Then again, with the shader being a separate unit, it makes lots of sense.


Picture of 8800
By XtAzY on 10/5/2006 10:50:21 AM , Rating: 2
I believe this is the pic of the NVIDIA 8800GTX:
http://www.blogsmithmedia.com/www.engadget.com/med...




RE: Picture of 8800
By GoatMonkey on 10/5/2006 1:31:08 PM , Rating: 2
Looks like the same one digit-life was showing last Friday...

http://www.digit-life.com/news.html?06/92/29#69229



Someone needs a friggin' die shrink, and FAST...
By Enoch2001 on 10/5/2006 2:09:34 PM , Rating: 2
quote:
All cards will have dual-slot coolers too.


Anyone need their hair blow dried????




By Crusader on 10/18/2006 12:11:29 PM , Rating: 2
Yeah, hold on I'll use a X1900..


What about Pricing...
By rushfan2006 on 10/5/2006 2:45:15 PM , Rating: 2
If this article is accurate, these cards are insane... I never thought I'd say this, but $500 wouldn't be a bad deal for these monsters if they are around that price point.

Of course I'm getting ahead of myself; they could be built like junk, have tons of problems and overheat like a mofo...

But anyway... why do they always leave out pricing in such articles? I mean, when doing the research you'd think the folks would at least ask for a "ballpark" or "target" price range.

I'm thinking $600...





RE: What about Pricing...
By coldpower27 on 10/5/2006 8:07:19 PM , Rating: 2
We have a ballpark for the MSRP from the rumored specifications VR-Zone posted a while back; now I know what they meant by "scalable to 1.5GHz".

The GeForce 8800 GTX should be $649 US while the 8800 GTS should be $449 US. Which isn't bad, depending on whether their performance is up to snuff.


R600 Pwned
By AggressorPrime on 10/5/2006 4:04:42 PM , Rating: 1
With 2x the shaders as the R600, pwned is the perfect word.




RE: R600 Pwned
By hwhacker on 10/5/2006 7:10:41 PM , Rating: 3
With all due respect, what an ignorant comment.

We *think* we know R600 has 64 pixel arrays, similar to the way R500 (Xenos) and R580 (X1900) have 16. Now, I don't know anything you don't, but I'm taking a shot in the dark and assuming each array can process more than one ALU. Nvidia has 128 ALUs total according to these specs; probably 48/64 x 2 full ALUs. Now... what is that number on R600? If it's 64x2, it has an equal number of ALUs to G80, and if it's x3, like R580/R500 before it, then R600 could process 192 shader ops. Of course, how it would compare to G80's architecture even with more shading processes would depend on a lot of architectural variables, as well as clock speed and overall performance dependent on other specs (raster ops, TMUs, bus, memory, etc.), and I won't even begin to guess who will come out where, as not enough information is available... perhaps you should consider that approach yourself. :)


Weird specs...
By Apprentic3 on 10/7/2006 2:43:56 PM , Rating: 2
Not sure if these are the confirmed specs, but they definitely sound very weird to me. I wonder if Nvidia is trying to cut costs by taking the shortcut of introducing such odd specs... 768MB and 640MB of memory, 384-bit and 320-bit memory buses... I was expecting a 512-bit memory bus and a minimum of 512MB or 1GB of graphics memory by this generation. I wonder if ATI will come out with a 512-bit card.




RE: Weird specs...
By saratoga on 10/7/2006 5:44:53 PM , Rating: 2
Each bit you add to the memory bus requires a little more than 1 pin. At the same time, you need pins for PCI-E, and roughly 1 pin per watt of power you want to consume. Not only is adding pins very expensive, but there are limits to how many you can fit on a given die since they physically take up space. Given the power consumption and memory bus width, Nvidia is probably already near the limit at 384 bits. Then there's the issue of routing 512 bits of traces and the additional addressing and control circuitry.

512-bit memory might be cost effective someday, but probably not for a while. Maybe not ever, if higher-speed DDR comes out.
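
A very rough pin-budget sketch following the rules of thumb in the post above; every constant here is an illustrative placeholder rather than an actual G80 figure:

    def rough_pin_budget(mem_bus_bits, board_watts, pcie_pins, pins_per_mem_bit=1.25):
        # Rules of thumb from the post above: a bit more than 1 pin per memory
        # bit (data plus address/command overhead), roughly 1 pin per watt of
        # power delivery, plus a fixed allotment for PCI-E.
        mem_pins = mem_bus_bits * pins_per_mem_bit
        power_pins = board_watts * 1.0
        return mem_pins + power_pins + pcie_pins

    # Placeholder numbers only -- not actual G80 figures.
    print(rough_pin_budget(384, 150, 100))  # ~730 pins for a 384-bit design
    print(rough_pin_budget(512, 150, 100))  # ~890 pins if the bus were widened to 512-bit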


Differing clockspeeds a bad thing?
By Goty on 10/7/2006 10:33:52 PM , Rating: 2
Didn't NVIDIA run into a little overclocking trouble with the G71 chips by clocking the shaders higher than the rest of the chip? I understand that they're going to be able to squeeze out a little more performance by removing a little of the bottleneck inside the chip, but is that marginal performance increase worth the brief spate of problems they had with the 7900 series?




By hwhacker on 10/7/2006 10:58:45 PM , Rating: 2
They clocked the geometry portion higher than the shaders, actually, in G71.

AFAIK, the problem with the G71 cards was bad PCBs, IIRC. I don't recall the exact problem, but I believe it was an issue with the voltage regulator or something like that.


Bigger is better??
By Xponential on 10/13/2006 3:52:12 PM , Rating: 2
I'm surprised no one has commented on the actual physical size of this thing. A little less than 11 inches?!?! My current card (6800 GT) is 8.5 inches long and it's only about an inch from the back of the hard drives in my case (Antec P180).

I guess I could just take the upper HD cage out and use the lower one if I needed to. The only other option would be to get a case that has a mobo tray and keep the tray slid out about 3 inches LOL.

Bottom line, this card is definitely not going to fit in most cases.




RE: Bigger is better??
By Crusader on 10/18/2006 12:09:52 PM , Rating: 2
Your card might be 2 inches shorter.. but its performance will be pathetic in comparison.


The real question is...
By Chillin1248 on 10/5/2006 12:50:36 PM , Rating: 3
Whether or not it will play Tetris at 2048x1536. :)

I wonder now if this card was the one in the Falcon-NW machine demoed at IDF showing Alan Wake.

-------
Chillin




prices?
By crazydrummer4562 on 10/5/2006 5:12:47 PM , Rating: 2
any expected price outlooks on these? I know the GTX version will be ridiculous..but anybody know about the GTS?




RE: prices?
By yacoub on 10/5/2006 6:35:26 PM , Rating: 1
I think I'll wait for the 512-bit cards that will come 6-12 months after the initial line-up, when they've figured out their bit-width and memory sizes and they release amounts that more directly correspond to multiples like 512, 1024, or 2048. Not 320, 384, 640, or 768.

These numbers are 320/384 because their current manufacturing technology limits them to that. Once they get past those limitations you'll see cards sporting 512-bit and 1024MB.

That's when you'll see me investigating new cards further. Not to mention my 7900GT KO runs everything I play fine. ;)


By tygertyger on 10/6/2006 2:20:42 AM , Rating: 3
I wonder what will happen when a plethora of gamers in the same town buy G80s and play online RPGs at the same time. Electrical failure in town.
In SimCity, the mayor would be fired because of the electricity shortages. Looks like hard times are ahead for them.




Too bad it...
By cochy on 10/6/2006 3:28:40 PM , Rating: 1
It seems R600 is only scheduled for release during Q2 '07. This is what I got from another web site. That seems very late to the DX10 party if you ask me. I for one will be upgrading around Vista's release, as I am sure a lot of people will also. I predict many sales for NVIDIA.




RE: Too bad it...
By hwhacker on 10/7/2006 10:39:20 PM , Rating: 2
R600 is in the pipe for Q1 '07 per ATi's roadmap. Rumors say probably January, perhaps February. There has been a lot of talk that R600 is tightly coupled to the completion/release of Vista/DX10 and, more importantly, Crysis, to showcase ATi's ability in DX10. Notice it was set for release with Vista ever since its conception (Nov/Dec 2006) and slipped when it did. If you are waiting for Vista to upgrade, or more importantly DX10, which will come slightly later according to M$, R600 will certainly be an option by that time.

I predict a lot of sales for Nvidia too... but you know what? The GeForce FX sold a lot too. Nvidia's marketing is second to none. It's unfortunate, in my humble opinion, when their engineering is second to one: ATi.


Whoa
By Sharky974 on 10/10/2006 10:46:49 PM , Rating: 1
I just read an insider rumor on R600 memory specs..512 bit bus, 1024 MB 2500 mhz GDDR4! Over 160 GB/s bandwidth!

Looks like Nvidia is fixing to get owned again..




RE: Whoa
By Sea Shadow on 10/27/2006 1:54:06 AM , Rating: 2
That's great and all (if they were true), but it doesn't mean much when they are going to be 6 months late to the party again.


By oddity21 on 10/5/2006 3:49:42 AM , Rating: 2
So are we looking at $700+ retail price for the GTX? These cards, at least on paper, are mind-blowingly powerful.




Maybe Cold Power can come in here
By Sharky974 on 10/5/06, Rating: 0
RE: Maybe Cold Power can come in here
By SilthDraeth on 10/5/2006 11:16:11 AM , Rating: 2
.
.
.


By Sharky974 on 10/5/2006 6:16:41 PM , Rating: 1
Yeah, Coldpower was the guy debating me in another thread here a while ago. Basically I told him ATI artificially crippled their chips this generation by not giving them enough TMUs. That is why the R580, despite having vastly more shader power, is barely faster than the 7900GTX.

I pointed out ATI's chips are almost twice as big, yet barely faster at all than Nvidia's, and that in all previous generations ATI had somehow managed to be just as fast with equal-sized or smaller chips than Nvidia's, and that if they just had that same efficiency they would be TWICE as fast as Nvidia with R580 (since it is twice as big).

One of his excuses was "well, ATI was building for the future. They were putting building blocks of the future in there and that is why they were so big".

I countered with "So, since they were spending HUGE amounts of silicon not on getting performance, but instead on building for the future, they will surely be first out of the gate next time?"

Of course he knew that wouldn't happen, so he hemmed and hawed. And now look: Nvidia, despite somehow not wasting 50% of their die on "building for the future" (whatever that means), will beat ATI to market big time again.

And I'm an ATI fanboy... that's why I'm so mad when they purposely cripple their chips.


suspicious
By FXi on 10/5/2006 8:07:55 AM , Rating: 2
This smells of a GX2 style dual card. It might be dual core on a single chip or it might be dual chip but this is how it seems. If so, and only IF, it'll be curious if it carries the same conditions and limitations of SLI to obtain the full power...




Dual-slot
By GraySplatter on 10/5/2006 9:58:41 AM , Rating: 2
Am I the only one who said "Ugh" about the GTS also being dual-slot? A big reason I like the 79x0 GT is because it's single-slot.

Any bets on whether the GTX will have a heat problem?




Air Cooled or water-cooled ?
By kilkennycat on 10/5/2006 2:20:29 PM , Rating: 2
No information on whether the reference card in Daily Tech hands is air-cooled or water cooled. Anybody on the Daily Tech team like to enlighten us on this point? Pictures of the card (both sides) would be most helpful.




eh?
By DeathByDuke on 10/5/2006 8:31:56 PM , Rating: 2
38.4 billion pixels... 67 pipes? eh? (67 * 575)

384 bit memory, 67 pipes, weird numbers for graphics

R600 is rumoured to be 64 pipes and 192 shaders, and definitely not as low a core speed.

G80's independent shader clocks should make some interesting performance analysis.




GTS correction...
By CZroe on 10/7/2006 10:10:37 AM , Rating: 2
Actually, "GTS" was used on their high-end GeForce2 part. The six-month refresh brought the GeForce2 Ultra.

Now, we get a "new" card series with new numbering based on pretty much the same architecture instead of straight-up clock and memory speed refreshes, though over time these revisions have added up to significant changes... 6800 -> 7800 -> 7900. This is why we get a new slew of suffixes with every numbering change and there seems to be a GT(S) or GTX/Ultra in every batch.

I'm just pointing out that it didn't used to be this way. The refresh ADDED to the current line without creating a new version of the established cards.




Real details
By ali 09 on 10/12/2006 4:27:40 AM , Rating: 2
These details are exactly correct, as I have a friend who works in a shop that has received the G80s and he told me the specs. The card should be a ripper, but I'll wait to see ATI's offering.




By jlaavenger on 10/14/2006 4:19:34 AM , Rating: 2
Does this mean a PhysX card won't be needed?




Power supply requirements
By M9ACE on 10/20/2006 10:59:22 AM , Rating: 2
I am curious as to what size power supplies this series of video card is going to require. If I remember correctly from articles posted a few months ago, the next generation from ATI and NVIDIA are going to be very “thirsty”. In turn the following generation after that (not counting the product refresh cycle releases) should be improved in that they are more efficient.




hahahah
By yacoub on 10/5/06, Rating: -1
RE: hahahah
By coldpower27 on 10/5/2006 8:35:41 AM , Rating: 1
Not really; a 384-bit memory interface is already another added level of complexity. We're not going to be exploring 512-bit interfaces at all this generation, due to cost.

I'd rather have a 384-bit interface over the 256-bit one though; it does represent a good increase in bandwidth nonetheless.

Not to mention that if the GeForce 8900 is an optical shrink of the 8800, there will be less room for the BGA connections for a complex memory interface such as 512-bit, making it unfeasible.


RE: hahahah
By Sharky974 on 10/5/2006 6:22:07 PM , Rating: 2
Good thing ATI was building for the future so they can be last and slowest again eh Coldy?

Related Articles
Power and the NVIDIA "G80"
October 4, 2006, 11:56 PM












