
Next generation GeForce 8000-series mid-range details emerge

The GeForce 7600 and 7300 became readily available, affordable video cards almost overnight.  A major reason for their success was that the GPUs were designed to reuse the old PCB designs from the GeForce 6200 and 6600.  NVIDIA will take advantage of the same approach with the next-generation G86 and G84 video cards scheduled for release this quarter.

Was it surprising that images of the GeForce 8600 "leaked" to the internet look identical to GeForce 7600s, or for that matter GeForce 6600s?  It shouldn't be: that was largely the point of making G84 and G86 pin-compatible with G73, which was already pin-compatible with NV40.

The G86 and G84 (G8x family) GPUs do have some major differences from the G73, however.  For starters, the G73's main power rails run at 2.5V; on the G84/G86 these rails run at 1.2V and 1.8V.  The G73 clock generator runs at 1.3V, while the G8x family clock generator runs at 3.3V.  Pins left dead on the G73 have also been allocated on the G8x family to support higher-density memory.

One of the bigger surprises of the G8x family is support, at least in the design, for more than four GPUs.  It seems pointless to put a low- or mid-range GPU into SLI mode when a high-end card can usually deliver better performance at lower cost.  However, the G8x family design kit touts an interface for "more than 4 GPUs."  Given the unified shader architecture of the GeForce 8000 family, it is a safe bet that this additional functionality is reserved for some sort of physics project.

The G86 and G84 GPUs are expected to launch before CeBIT 2007 this March.



RE: benchmarks
By acole1 on 1/24/2007 5:49:57 PM , Rating: 2

Try looking at some more applicable benchmarks.

Who buys a $400+ card to play a game at 1024x768 with no AA or AF?!

In that same benchmark at 1600x1200 with 4xAA and 8xAF, the 8800GTS beats the 1900XTX by 27 FPS, or 26%.

And it beats the 1950XTX by 24 FPS, or almost 24%.
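For what it's worth, the percentage figures above can be sanity-checked with a quick sketch: if a card wins by a given FPS delta and that delta equals a given percentage gain, the losing card's frame rate is simply delta divided by percentage. (The baseline FPS values below are inferred from the quoted numbers, not taken from the benchmark itself.)

```python
def baseline_fps(delta_fps, pct_gain):
    """Frame rate of the losing card implied by an absolute delta
    and the matching percentage gain (e.g. 27 FPS and 26%)."""
    return delta_fps / pct_gain

# 8800GTS vs. 1900XTX: 27 FPS ahead, a 26% gain
print(round(baseline_fps(27, 0.26), 1))  # ~103.8 FPS implied for the 1900XTX

# 8800GTS vs. 1950XTX: 24 FPS ahead, a 24% gain
print(round(baseline_fps(24, 0.24), 1))  # 100.0 FPS implied for the 1950XTX
```

Both implied baselines are plausible, so the two quoted percentages are at least internally consistent.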

Please leave your bias at the door and present only the facts. Everyone (hopefully) knows that lower resolutions prove hardly anything about graphical power.

RE: benchmarks
By griffynz on 1/24/2007 5:51:32 PM , Rating: 2
Just about to point that out myself....

RE: benchmarks
By 3kliksphilip on 1/26/2007 12:17:58 PM , Rating: 2
I assume that the lower performance at lower resolutions is because of early drivers for the GeForce 8800s? Or is it because the GeForce 7 (or equivalent) series is fully optimized for DX9 while the GeForce 8 series is more geared toward DX10 (because all DX9 games run fast enough anyway)?

I remember when dual-core processors came out and they weren't quite as fast in games as their single-core counterparts. It seemed silly going for a fast single-core processor for the same reason as buying a DX9 card now: although slower, the dual cores were more future-proof and would stand up better against workloads that would slow a single core to a crawl.


Copyright 2016 DailyTech LLC.