



(Image: NVIDIA GeForce 8600GTS)
(Image: NVIDIA GeForce 8600GT)
NVIDIA prepares its next-generation mid-range and mainstream DirectX 10 GPUs

Earlier today DailyTech received its briefing on NVIDIA's upcoming GeForce 8600GTS, 8600GT and 8500GT graphics processors. NVIDIA's GeForce 8600GTS and 8600GT are G84-based GPUs and target the mid-range market. The lower-positioned G86-based GeForce 8500GT serves as the flagship of the low-end to mid-range segment.

The budget-priced trio features full support for DirectX 10, including pixel and vertex shader model 4.0. NVIDIA has yet to reveal the number of shader units or their clock speeds, though. Nevertheless, the trio supports NVIDIA SLI and PureVideo technologies.

NVIDIA touts three dedicated video engines on the G84 and G86-based graphics cards for PureVideo processing. The video engines provide MPEG-2 high-definition and WMV HD video playback at resolutions up to 1080p. G84 and G86 support hardware-accelerated decoding of H.264 video as well; however, NVIDIA makes no mention of VC-1 decoding. G84 and G86 also feature advanced video post-processing algorithms. Supported algorithms include spatial-temporal de-interlacing, inverse 2:2 and 3:2 pull-down, and 4-tap horizontal / 5-tap vertical video scaling.

At the top of the mid-range lineup is the GeForce 8600GTS. The G84-based graphics core clocks in at 675 MHz. NVIDIA pairs the GeForce 8600GTS with 256MB of GDDR3 memory clocked at 1000 MHz (2000 MHz effective). The memory interfaces with the GPU via a 128-bit bus. The GeForce 8600GTS does not integrate HDCP keys on the GPU; add-in board partners will have to purchase separate EEPROMs with HDCP keys. However, all GeForce 8600GTS-based graphics cards feature support for HDCP.

GeForce 8600GTS-based graphics cards require an eight-layer PCB. Physically, the cards measure 7.2 x 4.376 inches and are available in full-height form only. NVIDIA GeForce 8600GTS graphics cards feature a PCIe x16 interface, unlike ATI's upcoming RV630. GeForce 8600GTS-based cards still require external PCIe power. NVIDIA estimates total board power consumption at around 71 watts.

Supported video connectors include dual dual-link DVI, VGA, SDTV and HDTV outputs, as well as analog video inputs. G84-based GPUs do not support a native HDMI output; manufacturers can adapt one of the DVI outputs for HDMI.

NVIDIA's GeForce 8600GT is not as performance-oriented as the 8600GTS. The GeForce 8600GT GPU clocks in at a more conservative 540 MHz. The memory configuration is more flexible, letting manufacturers choose between 256MB and 128MB of GDDR3 memory. NVIDIA specifies the memory clock at 700 MHz. The GeForce 8600GT shares the same 128-bit memory interface as the 8600GTS. HDCP support on the GeForce 8600GT is optional: the GPU and reference board design support the required HDCP key EEPROM, but the implementation is up to NVIDIA's add-in board partners.

GeForce 8600GT-based graphics cards require only a six-layer PCB instead of the eight-layer PCB of the 8600GTS. The physical board is also smaller, measuring 6.9 x 4.376 inches. GeForce 8600GT-based cards do not require external PCIe power. NVIDIA rates the maximum board power consumption at 43 watts, 28 watts less than the 8600GTS.

The GeForce 8600GT supports similar video outputs to the 8600GTS; however, the 8600GT does not offer video input features.

NVIDIA has revealed very little information on the GeForce 8500GT besides support for GDDR3 and DDR2 memory. It supports dual dual-link DVI, VGA and TV outputs as well.

Expect NVIDIA to pull the wraps off its GeForce 8600GTS, 8600GT and 8500GT next quarter in time to take on AMD’s upcoming RV630 and RV610.


Comments



this is a joke, right?
By Andrevas on 3/14/2007 12:58:20 AM , Rating: 0
a 128 bit memory bus?

ARE YOU JOKING ME?!

that's totally a step backward for this card, a 128-bit bus with only a 1000 MHz clock for the top-of-the-line 8600

so besides DX10, why the hell should I buy this over a last-gen 7600GT?




RE: this is a joke, right?
By jimmy43 on 3/14/07, Rating: 0
RE: this is a joke, right?
By D4rr3n on 3/14/07, Rating: -1
RE: this is a joke, right?
By ZoZo on 3/14/2007 2:26:17 AM , Rating: 2
Yes it is more expensive to have a larger bus. The cost of manufacturing complex PCBs hasn't gone down as much as the cost of manufacturing ICs (integrated circuits). ICs benefit from the smaller transistors introduced every year (180nm -> 130nm -> 90nm -> 65nm ...).


RE: this is a joke, right?
By Missing Ghost on 3/14/07, Rating: 0
RE: this is a joke, right?
By defter on 3/14/2007 2:14:30 AM , Rating: 3
quote:
so besides DX10, why the hell should I buy this over a last-gen 7600GT?


Uhm, 8600GTS will be much faster than 7600GT...


RE: this is a joke, right?
By tkSteveFOX on 3/14/07, Rating: 0
RE: this is a joke, right?
By cheetah2k on 3/14/07, Rating: 0
RE: this is a joke, right?
By defter on 3/14/2007 3:57:47 AM , Rating: 5
quote:
I only see 8800GTS to be faster than X1950XTX due to its higher clock speeds and a bigger memory bus.


You should get new glasses then. The core clock speed for the 8800GTS is 500MHz while the 1950XTX runs at 650MHz... You don't see any difference in the number and type of execution units between the 8800GTS and 1950XTX?

quote:
So the differance between the 8600GT and the 7600GT would be something like 10%


I wasn't talking about the 8600GT, which is a slower model. However, the performance difference between the 8600GTS and 7600GT in GPU-limited situations will be about 50-100%.

It's funny that you whine so much about the 128-bit bus in the 8600GTS, but then forget that the 8600GTS will have a 60% bandwidth advantage over the 7600GT. Hint: bandwidth = memory bus width * memory clock speed; don't look at memory bus width alone.
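For readers who want to plug the article's figures into that formula, here is a minimal sketch in Python. It assumes only the specs published above (128-bit buses, 1000 MHz GDDR3 on the 8600GTS and 700 MHz on the 8600GT) plus the fact that GDDR3 transfers data on both clock edges, so the effective rate is double the quoted clock.

```python
def bandwidth_gbs(bus_width_bits: int, mem_clock_mhz: float, pumps: int = 2) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer x effective transfer rate."""
    bytes_per_transfer = bus_width_bits / 8            # 128-bit bus -> 16 bytes
    transfers_per_sec = mem_clock_mhz * 1e6 * pumps    # GDDR3 is double-pumped
    return bytes_per_transfer * transfers_per_sec / 1e9

# Figures from the article above; both cards use a 128-bit bus.
print(bandwidth_gbs(128, 1000))   # GeForce 8600GTS, 1000 MHz GDDR3 -> 32.0 GB/s
print(bandwidth_gbs(128, 700))    # GeForce 8600GT,   700 MHz GDDR3 -> 22.4 GB/s
```

The 32 GB/s result matches the "32 GB/s of bandwidth" figure cited further down the thread; the 7600GT's memory clock is not given in the article, so its bandwidth is left for the reader to plug in.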


RE: this is a joke, right?
By defter on 3/14/2007 8:51:23 AM , Rating: 5
http://www.vr-zone.com/?i=4775

8600GTS + 2.93GHz Core2 = 57xx in 3DMark06 (faster than 7950GT)
8600GT + 2.93GHz Core2 = 47xx in 3DMark06 (faster than 7900GS)

For comparison:
7600GT + 2.93GHz Core2 = 3333 in 3DMark06: http://service.futuremark.com/compare?3dm06=923714

Thus 8600GTS is over 70% faster than 7600GT in 3DMark06.
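A quick arithmetic check on that claim, taking 5700 as the low end of the "57xx" score reported above (an assumption, since only the leading digits were given):

```python
# 5700 assumed as the low end of the reported "57xx" 8600GTS score;
# 3333 is the linked 7600GT result.
gts_score, gt_score = 5700, 3333
print(round((gts_score / gt_score - 1) * 100))   # -> 71, i.e. "over 70% faster"
```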


RE: this is a joke, right?
By KayKay on 3/14/2007 9:38:16 AM , Rating: 2
thanks... read this people please


RE: this is a joke, right?
By otispunkmeyer on 3/14/2007 3:44:38 AM , Rating: 5
well it's 2 GHz effective, so it's more bandwidth than a 7600. Then add the great standard IQ (G7x series = shimmering hell).

It's got 32 GB/s of bandwidth, which puts it roughly in line with cards like the 6800GT and 7800GT.


RE: this is a joke, right?
By otispunkmeyer on 3/14/2007 3:48:05 AM , Rating: 3
Also, Nvidia is touting the GTS variant as a replacement for the 7900GTs, I think.

Really we'll have to wait and see, but I agree that we should have seen a move on bus width in the mid-range, even if it was just to 192-bit.

ATi is guilty of the same too; the RV630 will have a 128-bit bus.


RE: this is a joke, right?
By UNCjigga on 3/15/2007 9:47:28 AM , Rating: 2
But ATI/AMD (is it ok to just call them AMD now?) will have GDDR4 vs. GDDR3...


RE: this is a joke, right?
By mezman on 3/14/2007 1:48:00 PM , Rating: 4
Patience. Wait for benchmarks. You don't think that nVidia has thought about the width of the memory bus? You don't think that nVidia has considered its impact on performance? Take a pill, man, and wait a few weeks for some performance details.


RE: this is a joke, right?
By InsaneScientist on 3/14/2007 7:30:18 PM , Rating: 3
WTF is up with all the whining about the midrange cards not having 256-bit buses?

Guys... the memory bus width does not matter on its own: the overall memory bandwidth (of which the bus width is one factor) does.
Increasing the bus width will increase the bandwidth (a lot), yes, but it also increases the cost (a lot).
The whole point of a midrange card is that it doesn't break the bank.
There are far more economical ways of increasing bandwidth: namely ramping the clock speed on the memory, switching to GDDR4 when you run out of headroom on GDDR3, and only then increasing the bus width (that is assuming GDDR5 isn't out by the time we can't go further with GDDR4).

I, personally, like the fact that they're increasing the memory speed rather than the bus width. It still increases the memory bandwidth, and doesn't cost a fortune.
For those who want to pay an arm and a leg, that's fine, but don't make the rest of us pay for it.
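To make that trade-off concrete, here is a rough sketch reusing the bandwidth formula quoted earlier in the thread. The 8600GTS numbers come from the article; the faster-memory and 256-bit configurations are purely hypothetical illustrations, not leaked specs.

```python
def bandwidth_gbs(bus_width_bits, mem_clock_mhz, pumps=2):
    """Peak bandwidth in GB/s = bus width in bytes x effective transfer rate."""
    return (bus_width_bits / 8) * (mem_clock_mhz * 1e6 * pumps) / 1e9

print(bandwidth_gbs(128, 1000))   # 8600GTS as announced: 32.0 GB/s
print(bandwidth_gbs(128, 1400))   # faster memory on the same 128-bit PCB (hypothetical clock): 44.8 GB/s
print(bandwidth_gbs(256, 1000))   # double-width bus at the same clock (hypothetical board): 64.0 GB/s
```

Both levers raise bandwidth, but only the second one requires the wider, more complex PCB that drives up board cost, which is exactly the economics argument above.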


"If you mod me down, I will become more insightful than you can possibly imagine." -- Slashdot
