
NVIDIA's D9M makes its first appearance on corporate roadmaps

NVIDIA's newest mid-range processor, codenamed D9M, will make its official debut as the GeForce 9600 GT.

Corporate guidance from NVIDIA states that initial GeForce 9600 GT shipments will come stock with a 650 MHz core clock and a 1625 MHz unified shader clock.  Unlike the G84 core found on the GeForce 8600 GT, D9M will feature a 256-bit memory bus interface.  Coupled with a 900 MHz memory clock, NVIDIA calculates the memory bandwidth at 57.6 GB/s.
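For reference, the quoted 57.6 GB/s figure follows from the 256-bit bus combined with an effective double-data-rate transfer rate of 1800 MT/s on the 900 MHz memory clock; the short Python sketch below simply reproduces that arithmetic and is an inference from the published numbers, not a formula taken from NVIDIA's guidance.

```python
# Rough sanity check of the quoted memory bandwidth.
# Assumption: GDDR3-style double data rate, so a 900 MHz memory clock
# moves data at 1800 MT/s.
bus_width_bits = 256
memory_clock_hz = 900e6
effective_transfers_per_s = memory_clock_hz * 2   # DDR: two transfers per clock

bytes_per_transfer = bus_width_bits / 8           # 32 bytes across the full bus
bandwidth_gb_s = bytes_per_transfer * effective_transfers_per_s / 1e9

print(f"Estimated memory bandwidth: {bandwidth_gb_s:.1f} GB/s")  # -> 57.6 GB/s
```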

The texture fill rate is estimated at 20.8 billion texels per second.  The company would not indicate how many shaders or stream processors reside on the D9M core.
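Texture fill rate is, to a first approximation, the core clock multiplied by the number of texture units, so the published figure implies a unit count that NVIDIA itself has not confirmed. The sketch below shows that back-of-the-envelope inference; the resulting number is an estimate, not a disclosed specification.

```python
# Back-of-the-envelope inference.
# Assumption: texture fill rate ~= core clock x number of texture units.
core_clock_hz = 650e6          # 650 MHz core clock from NVIDIA's guidance
fill_rate_texels_s = 20.8e9    # quoted 20.8 billion texels per second

implied_texture_units = fill_rate_texels_s / core_clock_hz
print(f"Implied texture units: {implied_texture_units:.0f}")  # -> 32 (unconfirmed)
```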

Late last year, NVIDIA confirmed the D9 family will use TSMC's 65nm process node.  The company introduced its first 65nm processor shrink, the G92, in November 2007.

Other details of the D9M family have already surfaced.  ChileHardware published slides yesterday claiming the GeForce 9600 requires a 400W power supply capable of delivering 26A on the 12V rail.  Unlike previous mid-range GeForce cards, the D9M will require a 6-pin supplementary power connector.
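For context, 26A on the 12V rail works out to roughly 312W of 12V capacity out of the 400W total; the snippet below is just that multiplication, not a figure from the slides themselves.

```python
# 12V rail headroom implied by the ChileHardware slide (simple P = V * I check).
rail_voltage_v = 12
rail_current_a = 26
rail_capacity_w = rail_voltage_v * rail_current_a
print(f"12V rail capacity: {rail_capacity_w} W")  # -> 312 W of the 400 W total
```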

NVIDIA publicly confirmed other details of D9M: DirectX 10.1, Shader Model 4.0, OpenGL 2.1 and PCIe 2.0 support, just to name a few.

Further documentation from NVIDIA claims the 9600 GT will also support the Quantum Effects physics processing engine. 

Like all NVIDIA processors, the GeForce 9600 is also HDCP compatible, though final support still depends on vendor implementation. 

NVIDIA declined to comment on the expected price of the GeForce 9600.  A company representative would say only that the performance increase from GeForce 8600 to GeForce 9600 is "almost double."


Comments



By mindless1 on 1/4/2008 10:59:34 PM, Rating: 2
Up to twice as fast sounds really good, but historically that tends to mean the largest gains are in the newer DX features, not in the raw performance you need to keep good framerates once you turn down the eye candy enough to make a new game playable on what will probably end up being a ~$150 part, dropping to $75 after rebate within 6-12 months.

It's good news that average midrange card performance goes up a bit every generation, but I think being 2X as fast as the 8600GT has a little to do with the 8600GT being a smaller performance increase over the generation it replaced than preceding generations delivered (with the exception of the worse FX5nnn series).

Even with this good news, common sense tells us it isn't going to be a replacement for a higher-end card. nVidia isn't often going to kill its high end like it did with the 8800GT; they just couldn't put off getting this tech into the market until it had depreciated more, and as always they have more parts coming.

Gamers are SO stingy though: they have a multi-hundred-dollar system and are (supposedly, unless they're pirating the games) paying $30-plus per game, yet they'll wait weeks or spend hours trying to get a product that's only a few dozen dollars cheaper. If paying $50 more makes that much difference, should you really be gaming? It's not that I don't like good value, and even getting something for nothing is a novel quest, but if you gamed a dozen hours you had the spare time; you could've just worked at McDonald's etc. for that period and bought a higher-tiered card instead.




"Intel is investing heavily (think gazillions of dollars and bazillions of engineering man hours) in resources to create an Intel host controllers spec in order to speed time to market of the USB 3.0 technology." -- Intel blogger Nick Knupffer

Related Articles













botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki