



NVIDIA GeForce 8600GTS

NVIDIA GeForce 8600GT
NVIDIA prepares its next-generation mid-range and mainstream DirectX 10 GPUs

Earlier today, DailyTech received its briefing on NVIDIA’s upcoming GeForce 8600GTS, 8600GT and 8500GT graphics processors. NVIDIA’s GeForce 8600GTS and 8600GT are G84-based GPUs that target the mid-range market. The lower-positioned, G86-based GeForce 8500GT serves as the flagship card of the low-end to mid-range segment.

The budget-priced trio features full DirectX 10 support, including pixel and vertex shader model 4.0. NVIDIA has yet to reveal the number of shaders or shader clock speeds, though. Nevertheless, the trio supports NVIDIA SLI and PureVideo technologies.

NVIDIA touts three dedicated video engines on the G84 and G86-based graphics cards for PureVideo processing. The video engines provide MPEG-2 high-definition and WMV HD video playback at resolutions up to 1080p. G84 and G86 support hardware-accelerated decoding of H.264 video as well; however, NVIDIA makes no mention of VC-1 decoding. G84 and G86 also feature advanced video post-processing algorithms. Supported algorithms include spatial-temporal de-interlacing, inverse 2:2 and 3:2 pull-down, and 4-tap horizontal and 5-tap vertical video scaling.

At the top of the mid-range lineup is the GeForce 8600GTS. The G84-based graphics core clocks in at 675 MHz. NVIDIA pairs the GeForce 8600GTS with 256MB of GDDR3 memory clocked at 1000 MHz. The memory interfaces with the GPU via a 128-bit bus. The GeForce 8600GTS does not integrate HDCP keys on the GPU. Add-in board partners will have to purchase separate EEPROMs with HDCP keys; however, all GeForce 8600GTS-based graphics cards feature support for HDCP.

GeForce 8600GTS-based graphics cards require an eight-layer PCB. Physically, the cards measure 7.2 x 4.376 inches and are available in full-height form only. NVIDIA GeForce 8600GTS graphics cards feature a PCIe x16 interface, unlike ATI’s upcoming RV630. GeForce 8600GTS-based cards still require external PCIe power. NVIDIA estimates total board power consumption at around 71 watts.

Supported video connectors include dual dual-link DVI, VGA, SDTV and HDTV outputs, and analog video inputs. G84-based GPUs do not support a native HDMI output, though manufacturers can adapt one of the DVI outputs for HDMI.

NVIDIA’s GeForce 8600GT is not as performance-oriented as the 8600GTS. The GeForce 8600GT GPU clocks in at a more conservative 540 MHz. The memory configuration is more flexible, letting manufacturers decide between 256MB and 128MB of GDDR3 memory. NVIDIA specifies the memory clock at 700 MHz. The GeForce 8600GT shares the same 128-bit memory interface as the 8600GTS. HDCP support on the GeForce 8600GT is optional; the GPU and reference board design support the required HDCP key EEPROM, but implementation is up to NVIDIA’s add-in board partners.

GeForce 8600GT-based graphics cards require only a six-layer PCB instead of the eight-layer PCB of the 8600GTS. The physical board is also smaller, measuring 6.9 x 4.376 inches. GeForce 8600GT-based cards do not require external PCIe power. NVIDIA rates the maximum board power consumption at 43 watts – 28 watts less than the 8600GTS.

The GeForce 8600GT supports similar video outputs to the 8600GTS; however, the 8600GT does not support video input features.

NVIDIA has revealed very little information on the GeForce 8500GT besides support for GDDR3 and DDR2 memory. It supports dual dual-link DVI, VGA and TV outputs as well.

Expect NVIDIA to pull the wraps off its GeForce 8600GTS, 8600GT and 8500GT next quarter in time to take on AMD’s upcoming RV630 and RV610.


Comments



this is a joke, right?
By Andrevas on 3/14/07, Rating: 0
RE: this is a joke, right?
By jimmy43 on 3/14/07, Rating: 0
RE: this is a joke, right?
By D4rr3n on 3/14/07, Rating: -1
RE: this is a joke, right?
By ZoZo on 3/14/2007 2:26:17 AM , Rating: 2
Yes it is more expensive to have a larger bus. The cost of manufacturing complex PCBs hasn't gone down as much as the cost of manufacturing ICs (integrated circuits). ICs benefit from the smaller transistors introduced every year (180nm -> 130nm -> 90nm -> 65nm ...).


RE: this is a joke, right?
By Missing Ghost on 3/14/07, Rating: 0
RE: this is a joke, right?
By defter on 3/14/2007 2:14:30 AM , Rating: 3
quote:
so besides DX10, why the hell should I buy this over a last-gen 7600GT?


Uhm, 8600GTS will be much faster than 7600GT...


RE: this is a joke, right?
By tkSteveFOX on 3/14/07, Rating: 0
RE: this is a joke, right?
By cheetah2k on 3/14/07, Rating: 0
RE: this is a joke, right?
By defter on 3/14/2007 3:57:47 AM , Rating: 5
quote:
I only see 8800GTS to be faster than X1950XTX due to its higher clock speeds and a bigger memory bus.


You should get new glasses then. The core clock speed of the 8800GTS is 500MHz while the 1950XTX runs at 650MHz... You don't see any difference between the amount and type of execution units in the 8800GTS and the 1950XTX?

quote:
So the differance between the 8600GT and the 7600GT would be something like 10%


I wasn't talking about the 8600GT, which is a slower model. However, the performance difference between the 8600GTS and 7600GT in GPU-limited situations will be about 50-100%.

It's funny that you whine so much about 128bit bus in 8600GTS, but then forget that 8600GTS will have 60% bandwidth advantage against 7600GT. Hint: bandwidth = memory bus width * memory clock speed, don't look at memory bus width alone.
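
A back-of-the-envelope sketch of that formula, assuming the 128-bit buses and the effective memory clocks quoted elsewhere in the thread (1.4GHz for the 7600GT, 2.0GHz for the 8600GTS):

```python
# Peak memory bandwidth = (bus width in bytes) * (effective memory clock).
def bandwidth_gb_s(bus_width_bits: int, effective_clock_mhz: float) -> float:
    """Peak bandwidth in GB/s (1 GB = 10^9 bytes)."""
    return (bus_width_bits / 8) * (effective_clock_mhz * 1e6) / 1e9

# Clocks doubled for GDDR3's double data rate signalling.
print(bandwidth_gb_s(128, 1400))  # GeForce 7600GT  -> 22.4 GB/s
print(bandwidth_gb_s(128, 2000))  # GeForce 8600GTS -> 32.0 GB/s
```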


RE: this is a joke, right?
By defter on 3/14/2007 8:51:23 AM , Rating: 5
http://www.vr-zone.com/?i=4775

8600GTS + 2.93GHz Core2 = 57xx in 3DMark06 (faster than 7950GT)
8600GT + 2.93GHz Core2 = 47xx in 3DMark06 (faster than 7900GS)

For the comparison:
7600GT + 2.93GHz Core2 = 3333 in 3DMark06: http://service.futuremark.com/compare?3dm06=923714

Thus 8600GTS is over 70% faster than 7600GT in 3DMark06.


RE: this is a joke, right?
By KayKay on 3/14/2007 9:38:16 AM , Rating: 2
thanks... read this people please


RE: this is a joke, right?
By otispunkmeyer on 3/14/2007 3:44:38 AM , Rating: 5
Well, it's 2GHz effective, so it's more bandwidth than a 7600. Then add the great standard IQ (G7x series = shimmering hell).

It's got 32GB/s of bandwidth, which puts it roughly in line with cards like the 6800GT and 7800GT.


RE: this is a joke, right?
By otispunkmeyer on 3/14/2007 3:48:05 AM , Rating: 3
Also, NVIDIA is touting the GTS variant as a replacement for the 7900GTs, I think.

Really, we'll have to wait and see, but I agree that we should have seen a move on the bus width in the mid-range, even if it was just to 192-bit.

ATI is guilty of the same too; the RV630 will have a 128-bit bus.


RE: this is a joke, right?
By UNCjigga on 3/15/2007 9:47:28 AM , Rating: 2
But ATI/AMD (is it ok to just call them AMD now?) will have GDDR4 vs. GDDR3...


RE: this is a joke, right?
By mezman on 3/14/2007 1:48:00 PM , Rating: 4
Patience. Wait for benchmarks. You don't think that nVidia has thought about the width of the memory bus? You don't think that nVidia has considered its impact on performance? Take a pill, man, and wait a few weeks for some performance details.


RE: this is a joke, right?
By InsaneScientist on 3/14/2007 7:30:18 PM , Rating: 3
WTF is up with all the whining about the midrange cards only having 128-bit buses?

Guys... the memory bus width does not matter: the overall memory bandwidth (of which the bus width is a factor) does.
Increasing the bus width will increase the bandwidth (a lot), yes, but it also increases the cost (a lot).
The whole point of a midrange card is that it doesn't break the bank.
There are far more economical ways of increasing bandwidth - namely ramping up the clock speed on the memory, switching to GDDR4 when you run out of headroom on GDDR3, and only then increasing the bus width (that is assuming GDDR5 isn't out by the time we can't go any further with GDDR4).

I, personally, like the fact that they're increasing the memory speed rather than the bus width. It still increases the memory bandwidth, and it doesn't cost a fortune.
For those who want to pay an arm and a leg, that's fine, but don't make the rest of us pay for it.


slow
By xsilver on 3/14/2007 12:36:04 AM , Rating: 1
This card doesn't even look like it's going to beat a 7900GS/1950pro --- and yet the price is probably going to be the same?

What's the point? DX10? pffft....




RE: slow
By BladeVenom on 3/14/2007 1:14:36 AM , Rating: 4
The point of DX10 is to sell Vista. Don't worry though, most games will work with DX9 for years to come. Those cards will be long obsolete before DX10 is needed for most games.


RE: slow
By xsilver on 3/14/2007 5:37:01 AM , Rating: 1
So true -- and not even most games, just the first games.
The 6600GT, the first midrange card with PS 3.0, is not fast enough to run the first game that REQUIRES PS 3.0 (Splinter Cell: DA, I think) above minimum resolutions/details.

I think the best we can hope for is for the 8800GTS 320MB to drop down in price to where the 7950GT is now.

I think the comparisons between the 7600GT and 8600GTS are also inappropriate; the 8600GTS SHOULD be as fast as last generation's best card, the 7800GTX, but it's more likely that it will only perform like a 7800GT. There was a huge clock difference between those two: 550MHz vs 400MHz core and 1700MHz vs 1000MHz memory, which equates to about a 30fps difference in HL2 @ 1280 4xAA/8xAF.
http://www23.tomshardware.com/graphics.html?modelx...


RE: slow
By coldpower27 on 3/14/2007 10:49:57 AM , Rating: 3
They are appropriate as they have the same introduction MSRP of $199 USD, and the 8600 GTS is meant as the 7600 GT replacement, but it will have a performance level in the ballpark of the 7950 GT.

The 7800 GTX 512 (please specify, as there is a difference between the 256 and 512 versions) is about on the same level as the 7900 GTX, give or take.

The 7950 GT ballpark is close enough to the 7900 GTX that it represents a substantial improvement. It will basically be equal to 7600 GT SLI, as that is essentially what a 7950 GT is.


RE: slow
By leexgx on 3/15/2007 8:55:33 PM , Rating: 2
Clock speeds don't mean the same thing on the G80 architecture as they do on G70 and lower.

It's like comparing a slower Intel Core 2 Duo CPU to a P4 CPU that is clocked 1GHz higher; the C2D is still faster because it can process more per clock.

The same goes for the 8xxx cards; they don't follow the usual performance curve. Any DX9 game should perform a lot better on the 8600 cards than on the 7600 cards, maybe even the 7800 cards, as even the 8800 GTS 320MB is still CPU-bound at lower resolutions (if you call 1600x1200 low, heh).

I'd probably bet the 8600 meets up with the 7800 cards, but we'll have to wait and see the tests.

And these cards are meant to be affordable, not anything too fast (but they'll still be good).


RE: slow
By leidegre on 3/14/2007 1:37:25 AM , Rating: 2
What about architectural differences? I bet that even if the specs were exactly the same, things would tend to run faster with a cleverer design, and in terms of G70/G80, any G80 card should be faster than any G70 (almost any, anyway).


RE: slow
By rmaharaj on 3/14/2007 1:57:37 AM , Rating: 4
May I suggest that we refrain from judging performance until there are reliable benchmarks?

Nobody here has hardware ESP (and that's NOT the Electronic Stability Program on your Saab); you can't judge performance based on incomplete technical specs. That would require intimate knowledge of the G80 architecture that we just don't have.

Furthermore, nobody from NVIDIA ever said that the 8600s would have earth-shattering performance. Historically, the top-end midrange card of the new gen (e.g. 7600GT) is right on par with the high-end cards of the last gen (e.g. 6800GT) but with more advanced shader technology. So if you're disappointed that the 8600GTS won't trounce the 7900GTX, you've got only yourself to blame.


RE: slow
By iviythos on 3/14/07, Rating: 0
RE: slow
By Lakku on 3/14/2007 3:59:34 PM , Rating: 2
So 550 to 570 is in the stratosphere for a top performing 8800GTX? Or 300 is in the stratosphere for a great performing 8800GTS 320mb? If that is the case, you are in the wrong business of being a computer enthusiast or playing PC games. Why does everyone expect a MIDRANGE card to perform so well? If you are worried, buy the 8800GTS 320, which can be had for a good price. If you can't, wait for numbers from the midrange, because I can bet that they will work JUST FINE at 1280x1024 on most new and most upcoming games.


Come on people
By shabodah on 3/14/2007 8:11:33 AM , Rating: 2
Get with the program, people. The midrange cards are always going to be built as cheaply as possible. Sure, a 256-bit bus would be better, but you guys are not giving these things any credit. They WILL beat every previous-generation card, including an X1950XTX, even if just barely. Their shader power is massively better than the 7600GT's, and even with its 128-bit bus, the 7600GT beat almost every 256-bit card of the generation before it, and there was LESS of a difference between the 6-series and 7-series cards than there will be between 7 and 8. This generation of cards is going to be the biggest jump we've seen. Compare the 8800GTX to the 7900GTX and tell me we have ever seen a jump that huge.




RE: Come on people
By Etern205 on 3/14/2007 12:02:03 PM , Rating: 2
The 7900GTX has a 256bit memory bus, while
the 8800GTX has a 384bit memory bus, plus
new architectures, the 7900GTX does not have.
e.g. Stream Processors. Of course it's going to be faster
Do your research you freaking noob!

Also as for the Nvidia's Cards, they shouldn't have done
something like this...

High-End: 320-384+bits
Midrange: 256-300bits
Low-End: 64-128bits


RE: Come on people
By mezrah on 3/14/2007 2:14:58 PM , Rating: 2
Good points about the high end, although I don't think he deserved to be called a noob, but whatever.

I think the point most people are missing is that we are looking at mid-range cards here. From a cost standpoint, Nvidia can't just throw 256-bit or higher into mid-range cards. Perhaps in the future, but we aren't at that point in time yet. A more realistic breakdown for now is:

High-End: 256-bit and up
Mid-Range: 128-bit to possibly 192-bit?
Low-End: 64-bit to 128-bit


RE: Come on people
By shabodah on 3/14/2007 2:53:41 PM , Rating: 1
You're one of those people who'd prefer a 500mhz 256bit setup over a 1000mhz 128bit one, eh? lol.


RE: Come on people
By InsaneScientist on 3/14/2007 7:37:03 PM , Rating: 1
LOL - so true.... And here we thought that the ignorant always believe that a higher clockspeed is better.

For anyone who doesn't know:
a 256-bit bus with 500MHz memory will have the exact same bandwidth as a 128-bit bus with 1000MHz memory, but the 256-bit bus with the slower memory will be a whole lot more expensive.
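
A quick check of that claim, treating both figures as effective memory clocks:

```python
# Peak bandwidth in GB/s: (bus width in bytes) * (effective clock in Hz) / 1e9
wide_slow   = (256 / 8) *  500e6 / 1e9   # 256-bit bus @  500 MHz -> 16.0 GB/s
narrow_fast = (128 / 8) * 1000e6 / 1e9   # 128-bit bus @ 1000 MHz -> 16.0 GB/s
assert wide_slow == narrow_fast          # identical peak bandwidth either way
```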


RE: Come on people
By coldpower27 on 3/14/2007 6:15:19 PM , Rating: 2
And just the same thing when you do that to the mainstream.

7600 GT has a 128Bit Memory Interface at 1.4GHZ
8600 GTS has a 128Bit Memory Interface at 2.0GHZ plus the New Stream processors the older architecture doesn't have.

Will it be enough to beat the 7900 GTX? It will be close, as the projected level is 7950 GT-ballpark performance. Not bad for a mainstream card based on the GeForce 8 architecture.

256-bit or wider interfaces generally require a minimum die size of around 200mm2 or greater, or you have to increase PCB complexity considerably; either way raises production costs considerably, which is unacceptable for a mainstream SKU.

So unless we find a way to make 200mm2 die sizes much more affordable (maybe when we switch to 450mm wafer technology sometime next decade), or some major breakthrough comes along in PCB technology, you will always have high-clocked memory on a 128-bit interface on the mainstream SKU.

Upper Enthusiast: 320/384Bit or more...
Low Enthusiast: 256Bit
Performance: 128Bit
Upper Value: 128Bit
Lower Value: 64Bit


Disappointing 128-bit bus
By hadifa on 3/14/07, Rating: 0
RE: Disappointing 128-bit bus
By R3MF on 3/14/07, Rating: 0
RE: Disappointing 128-bit bus
By coldpower27 on 3/14/2007 10:52:50 AM , Rating: 2
This isn't possible, as Nvidia is reserving the 256-bit bus interfaces for the lower-range high-end SKUs rather than the mid-range. The mid-range will remain at 128-bit, probably until some breakthrough comes along that reduces the actual cost of implementing a 256-bit bus.


RE: Disappointing 128-bit bus
By InsaneScientist on 3/14/2007 7:49:47 PM , Rating: 3
Wow... I can't believe the amount of complaining about this 128-bit bus that I'm seeing as I go through these comments.

First, it's not possible to have a 172-bit bus. Each memory chip has a 32-bit interconnect, so for each additional chip you add another 32 bits to the bus width. No other figure is even theoretically possible.
After 128 we'd see 160, then 192, 224, and then 256. And 160 and 224 aren't at all likely, as they would require an odd number of chips.

Second, the bus may be only 128-bit, but with the memory clocked at 1GHz it still blows past the old generation. The 7600GT has 22.4GB/s of memory bandwidth. Memory at 1GHz on a 128-bit bus should yield 32GB/s of bandwidth.
That's an increase of about 10GB/s! Almost 50%!
Why are we complaining?

These aren't going to be ultra-high-end GPUs. There comes a certain point where more memory bandwidth becomes redundant because the GPU can't process any faster. Going to a 256-bit bus would increase the cost, but not necessarily add much more than that.
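
A tiny sketch of the bus-width constraint described above (assuming the 32-bit-per-chip channel width the comment cites):

```python
# Each memory chip contributes a 32-bit channel, so the total bus width
# can only ever be a multiple of 32 bits.
CHANNEL_BITS = 32

def bus_width(num_chips: int) -> int:
    return num_chips * CHANNEL_BITS

print([bus_width(n) for n in range(4, 9)])  # [128, 160, 192, 224, 256]
# 172 never appears: it isn't a multiple of 32, so no chip count produces it.
```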


RE: Disappointing 128-bit bus
By hadifa on 3/15/2007 8:27:57 PM , Rating: 2
With cards like the 1950 Pro and 7900 GS, the expectation for upper-midrange cards is not what it used to be. In the US$200 price range, we should expect solid performers that do not cost as much as the higher end.

quote:
First, it's not possible to have a 172 bit bus. Each memory chip has a 32-bit interconnect, so for each additional chip, you add another 32 bits to the bus with. No other figure is even theoretically possible.


I made a mistake. I calculated 128+64 and got 172!! I meant 192-bit. thanks for the correction.


AVIVO
By GTaudiophile on 3/14/2007 6:51:11 AM , Rating: 2
It appears that ATI's trump card will be AVIVO. Their midrange cards will have HDCP/HDMI support built in. Offerings from both companies feature a 128-bit bus, but ATI's cards will draw less power.

All in all, I am preferring ATI's lineup at this point.




RE: AVIVO
By GTaudiophile on 3/14/2007 6:52:42 AM , Rating: 2
AMD is prepared to take on NVIDIA’s PureVideo HD with its next-generation AVIVO video processing. AVIVO is receiving its first upgrade since its introduction with the Radeon X1k-series with the RV610 and RV630. This time around, AMD is integrating its Universal Video Decoder, or UVD, for hardware decoding of H.264 and VC-1 high-definition video formats.

AMD’s UVD expands on the previous generation’s AVIVO implementation to include hardware bit stream processing and entropy decode functions. Hardware acceleration of frequency transform, pixel prediction and deblocking functions remain supported, as with the first generation AVIVO processing. AMD’s Advanced Video Processor, or AVP, has also made the cut for low power video processing.

Integrated HDMI with support for HDCP joins the next-generation AVIVO video processing for protected high-definition video playback. Unlike current HDMI implementations on PCIe graphics cards, RV610 and RV630 integrate audio functionality into the GPU. Instead of passing a PCM or Dolby Digital signal from onboard audio or a sound card, RV610 and RV630-based graphics cards can directly output audio – removing the need of a separate sound card.


Don't be so quick to judge..
By superunknown98 on 3/14/2007 8:30:53 AM , Rating: 3
While I would like to see a 256-bit memory bus, it's not always the defining factor in performance. Remember that a 6800 Ultra is slower than a 7600GT despite the 6800 Ultra's 256-bit memory bus.

Memory bandwidth is important, however these days the trend seems to be increasing shader performance above all. Look at the 1900XTX vs the 1800XTX. Yes, it had more memory bandwidth with its GDDR4, but the main improvement was with shaders.

Why don't we wait for more information or previews before we bring out the torches and pitchforks?




RE: Don't be so quick to judge..
By FITCamaro on 3/14/2007 8:48:35 AM , Rating: 1
The X1900 series used GDDR3, same as the X1800 series. Only the X1950XTX used GDDR4. I have one, though, and it is quite a good card. I run Vanguard: SOH at 1280x1024 at max detail and get 50-60 fps in the open and in dungeons, and 20-30 fps average in cities (still perfectly playable). All running at a "cool" 40C despite an OC on memory and core, thanks to Swiftech's Apex Ultra+ water cooling system.


AGP?
By Rocket321 on 3/14/2007 9:22:49 AM , Rating: 2
Yah yah, I know, most of you are laughing, but I really hope they decide to release an AGP version of the midrange cards a little before the last week of production.

After all, I would assume a good portion of "midrange" gamers are still on AGP...




RE: AGP?
By Biggie on 3/14/2007 11:14:26 AM , Rating: 2
That's what I'm waiting for, to see how the 8600 AGP stacks up against the X1950 Pro. Hopefully I will be able to get a little more life out of my antiquated system.


Don't see why people are shocked
By Biggie on 3/14/2007 10:29:41 AM , Rating: 2
When they said that the 8600 would be pin-compatible with the 6600/7600, that should have tipped you off to the fact that it would have a 128-bit memory interface. The real issue, for me at any rate, is when the AGP versions are coming out and how they will be nerfed vis-a-vis the PCI-E versions.




By coldpower27 on 3/14/2007 10:56:59 AM , Rating: 2
There are motherboards available with PCI-E for Socket 478, 754 and 939, so it shouldn't be an issue.


By cheetah2k on 3/14/2007 3:23:27 AM , Rating: 2
When in the hell is the Go 8xxx series coming for laptops?? I'm hanging out for this to pick up a mobile beast while I'm on the road.

Please Nvidia!! Tell us SOOOOOON!




Pricing
By restrada on 3/14/2007 3:59:28 AM , Rating: 2
Still curious about pricing? I suspect it will be in line with last gen. stuff.




This explains the pricing
By Ickis13 on 3/14/2007 4:44:53 AM , Rating: 2
LOL - thought this might be the case. The pricing I saw earlier seemed too good to be true. Gotta say it's a bit disappointing - a 128-bit memory bus sucks. I hope they're dropping the price of the 320MB 8800GTSes as an apology for these cards. To be honest, it was more the number of pipelines than the DX10 compatibility I was interested in.
I'm more than happy with what developers can crank out for DX9 and quite thankful most are sticking to this version, as I've given Vista a shot (Win2k SP7 imho) and was pretty disenchanted. Having seen the screenshots of "what DX10 can do", I've gotta say it's not that mind-blowing. Is it DX10 that delivers the goods, or is it just the current platform devs are using because it's new? Would DX9 be able to deliver fairly similar results were the same resources given to the devs to develop for that version and run on new hardware? I'd be happy to embrace DX10 if it were released for XP, but I'm not migrating to Vista until I absolutely have to (i.e. the software I use goes "Vista only").
Gar - it's a bit of an interesting / crappy time in the tech field atm...
I think we're seeing more of a marketing / sales field day at the moment rather than any groundbreaking new technologies.




8600 GTS looks nice...
By Warren21 on 3/14/2007 2:52:48 PM , Rating: 2
Both visually and in specs, this card looks to be a great buy for the $200 USD or so MSRP (if performance holds up, of course). With its estimated/rumoured 64 shaders and 1 GHz (2 GHz effective DDR) memory clock, it looks to trump the X19x0 GT/Pro and the 79x0 GS/GT, possibly even the XTX and GTX models, which would be great. It also features DX10, but let's be realistic: by the time DX10 arrives, the refresh DX10 cards might already be out, and by the time DX10 goes mainstream these cards will be antiquated.

A lot of you are worried about the 128-bit 'fiasco', but like many others have pointed out, 256-bit PCBs aren't cheap. If you jack up the clocks enough you can hide that deficit, even destroy 256-bit cards, all while maintaining a lower cost. Don't knock 128-bit cards; see 7600 GT vs. X1800 GTO.

My only question is how this will stack up against the X2600/RV630 high-end offering... My bet is that the ATI offering will cost slightly more while offering slightly better stock performance but less overclocking potential... At least that has been the trend... (See X1650 XT vs. 7600 GT, X1900 GT vs. 7900 GS.)




You do realize....
By kilkennycat on 3/14/2007 3:42:31 PM , Rating: 2
... that with the release of this family of graphics cards ( and the AMD/ATi equivalents ) all previous card variants will be at "end of life". No further reason to produce them. All of these new-gen cards are fully DX9 compatible and of equivalent or superior performance to the existing DX9 variants within the same price-range. I would not be at all surprised if nVidia and AMD/ATi both produce AGP variants of the 8xxx and RV6xx in very short order.

Afaik, the nVidia/TSMC plan is to cease any further chip-production of each Dx9/SM3 variant GPU once a legitimate replacement part is in place and just to run down the existing chip-inventories. So expect to see the pre-Dx10 cards heavily discounted at your favorite computer-parts outlet by early-Summer.

And when Crysis and the other DX10-compatible games (such as the PC versions of Gears of War and UT2007 -- or whatever it is now called) tumble out of the chute, how many of the naysayers in this thread will suddenly be changing their minds about a graphics card update? Seen the DX10 movies and still images from Crysis yet?




should still be fast as mofo
By HexiumVII on 3/15/2007 4:31:48 AM , Rating: 2
The 88XX should be quite a bit faster than the 7 series. It will also use memory bandwidth a lot more efficiently than the 7 series did. If you look at the step from the 6 to the 7 series, bandwidth efficiency bumped up 30-40% in some instances. Even with less overall bandwidth, the 7 series would be much faster than the 6 given the same core clocks/fill rate specs. Also, it will hopefully raise the low-end bar a little, as game programmers still have to worry about the 80% of Intel Extreme Crap graphics cards in everyone's puters.




Only 256 mb?
By derwin on 3/14/07, Rating: -1
RE: Only 256 mb?
By crazydingo on 3/14/07, Rating: -1
RE: Only 256 mb?
By shabodah on 3/14/2007 10:46:54 AM , Rating: 2
Wait for the real reviews with actual game benchmarks. The maximum framerates might not be significantly better, but you'd better believe the minimum ones will have changed a lot.


RE: Only 256 mb?
By coldpower27 on 3/14/2007 11:02:14 AM , Rating: 2
Actually if that were the case Nvidia would have done it already.

High-frequency RAM actually comes down in price as newer manufacturing processes are developed and hitting a certain bin becomes easier.

Wider Interfaces don't drop in price from a production perspective.

Well, not everyone reads all the different news publications, and they are different SKUs, to be fair.


WMV HD = VC-1
By politicalslug on 3/14/07, Rating: -1
RE: WMV HD = VC-1
By Anh Huynh on 3/14/2007 5:09:51 AM , Rating: 5
Yes, that is true, but we'd rather not jump to conclusions and add facts that aren't mentioned in documents. If NVIDIA explicitly stated VC-1 acceleration, we would've reported it, but they didn't, unlike ATI and Intel documents which explicitly state VC-1 hardware acceleration.


RE: WMV HD = VC-1
By phusg on 3/14/07, Rating: 0
RE: WMV HD = VC-1
By Chris Peredun on 3/14/2007 8:25:36 AM , Rating: 4
Correct me if I'm wrong, but isn't VC-1 the standard itself, and WMV-HD an implementation of it? WMV9 served as the codec basis for VC-1, of which WMV-HD is an implementation?

If this is the case, the 8600/8500 may only be able to play back WMV-HD content (WMV3, WMVA, WVC1) and not other VC-1 content. If a fully standards-compliant decoder isn't implemented in the hardware, they might be setting themselves up for a fall. If they claim VC-1 compliance, and a later implementation of the SMPTE 421M codec standard (aka "VC-1") becomes popular - but fails to be accelerated properly - there will be a lot of angry videophiles.


RE: WMV HD = VC-1
By Alphafox78 on 3/14/07, Rating: 0
RE: WMV HD = VC-1
By Alphafox78 on 3/15/2007 8:09:47 AM , Rating: 2
Why did I get downrated for posting what my experiences are? Sorry to disappoint anyone who thought they HAD to get a new HW-accelerated video card to play VC-1 content. Trust me, you don't; I have the 360 HD-DVD drive and the CPU matters more than the video card.


RE: WMV HD = VC-1
By Aikouka on 3/14/2007 9:57:52 AM , Rating: 4
I think you were right in your decision Anh, especially when referencing an article on Wikipedia about WMV and VC-1.

This remark here about hardware and software decoders:

quote:
Some 3rd party hardware and software decoders only decode WMVA based content.


Then coupled with this remark:

quote:
As of 2006, WMVA is considered a deprecated codec because it is not fully VC-1 compliant.


If it doesn't do WVC1, it isn't fully VC-1 compliant, and simply stating "WMV HD" only covers content encoded in WMV9 (which includes WMVA, WMV3 and WVC1), so who knows what they're including support for! It's better to be safe than sorry.


Nvidia news...(repost).........
By crystal clear on 3/14/07, Rating: -1
RE: Nvidia news...(repost).........
By leexgx on 3/15/2007 9:04:44 PM , Rating: 2
That'll be good once they get some really cheap 8xxx cards; even when something like the 8300 or (made up) 8200 cards come out, they will be moderately faster than their older brothers.


"I'm an Internet expert too. It's all right to wire the industrial zone only, but there are many problems if other regions of the North are wired." -- North Korean Supreme Commander Kim Jong-il

Related Articles
The Future of HDMI
February 19, 2007, 12:47 AM













botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki