
NVIDIA announces GeForce 9M, takes subtle jabs at Intel

NVIDIA is looking to up the ante in notebook GPUs with its new GeForce 9M series, banking on both power and efficiency to win over OEMs and end users.

NVIDIA claims that the new GeForce 9M GPUs are up to 40% faster than the previous generation GeForce 8M parts and as much as ten times faster than a certain chip giant's "generic" integrated GPUs -- namely, Intel. All of the new GPUs incorporate PureVideo HD and support DVI, HDMI 1.3, DisplayPort 1.1, and Blu-ray Profile 2.0. All GeForce 9M GPUs comply with the MXM 3.0 graphics module specification.

One other feature to consider with the GeForce 9M is the inclusion of Hybrid SLI technology. This allows OEMs to pair a low-power GPU for everyday desktop duties with a higher-performing part for graphics-intensive workloads -- NVIDIA SLI support may be added at a later date.

"Beginning this summer, GeForce 9M GPUs and Hybrid SLI, paired with AMD and Intel CPUs, will enable a new breed of notebooks," said Jeff Fisher, NVIDIA's GPU Senior VP. "These new notebooks will be optimized to deliver a visual experience and raw computing performance that traditional cookie-cutter notebooks with integrated graphics simply can’t touch."

In addition, the new GPUs feature a multi-core architecture that speeds up not only entertainment applications but also today's lifestyle applications, such as encoding video from a PC to a small personal media device, where the conversion is up to 5x faster with the GeForce 9M family.

NVIDIA has broken the GeForce 9M into three categories: Value, Mainstream, and Performance. The Value sector will be solely represented by the GeForce 9100M G. The Mainstream sector will be propped up by the GeForce 9400M, GeForce 9300M GS, and GeForce 9200M GS. Finally, the Performance sector features the GeForce 9600M GT, GeForce 9600M GS, and GeForce 9500M G.

Unlike some of ATI's latest graphics offerings, the new GeForce 9M series will not support DirectX 10.1. NVIDIA says that consumers will base their buying decisions on price and performance and that support for a particular API is not of extreme importance.

For those looking to take advantage of PhysX -- a recent acquisition of NVIDIA's -- drivers are expected to be made available during Q3 2008.

NVIDIA definitely isn’t holding back in its ribbing of Intel with this latest GPU release. Intel has expressed its intentions to bulk up its integrated GPU offerings and expand into discrete graphics. NVIDIA is fighting back by littering its press releases with subtle jabs (“generic”, “cookie cutter”, etc.) and with not-so-subtle comments from its CEO.

It should be interesting to see how things in the graphics market pan out as Intel and NVIDIA continue to cross paths over the next 18 months.



Comments



By StevoLincolnite on 6/3/2008 6:54:13 PM , Rating: 2
I think nVidia is in a better position in terms of technology in the graphics department right now. Larrabee probably won't be the rasterisation speed demon that many hoped for, and nVidia also has 3dfx under its hood (the company many believe kick-started the 3D acceleration era).
Mind you, their job on the GeForce FX was not all that flash, but since then they have had a solid line-up.

There is only one problem I see with the nVidia graphics cards, and that is the lack of DirectX 10.1 support. Some people claim "it's not worth worrying about" -- but realistically, is this history repeating itself?

For instance, the GeForce 4 MX series was a DirectX 7 part without any shaders; although more powerful than the GeForce FX, it could not play any games which had to use shaders. (Like Oblivion, which the GeForce FX could run rather well with Oldblivion -- if the GeForce 4 MX had had Pixel Shader 1 support, that would have been awesome.)

Another example is the Radeon X8xx series, with their lack of SM3 support: they could very easily have run games like BioShock, but unfortunately, because of that one missing feature, they have no hope of running the game (unless you use the Shadershock shader replacements).
The card itself was a screamer back in the day -- more powerful than the X1600 and X1650 series that came after it, and an overclocked X850 XT PE out-performed the 7900 GS -- but it unfortunately couldn't run some games. People didn't worry about that when purchasing the card, thinking SM3 wasn't worth it or wouldn't be needed any time soon; well, news flash, it was.

Will history repeat itself?


By winterspan on 6/4/2008 3:02:50 AM , Rating: 2
I don't work in the industry, nor am I an obsessed gamer, but I do follow the technology. And as far as I know, leaving out support for DirectX 10.1 is nowhere near as critical as leaving out shader support (as with the DX7-class GeForce 4 MX) or SM3.0 support was. DirectX 10.1 has no major technological differences from DirectX 10.0, so I don't see where a situation like the ones you described could occur...


By StevoLincolnite on 6/4/2008 7:01:46 AM , Rating: 3
One of the main improvements touted by Microsoft in DirectX 10.1 is improved access to shader resources -- in particular, this involves better control when reading back samples from multi-sample anti-aliasing. In conjunction with this, the ability to create customised downsampling filters will be available in DirectX 10.1.

Floating point blending also gets some new functionality in DirectX 10.1, more specifically when used with render targets -- new formats for render targets which support blending will be available in this iteration of the API, and render targets can now be blended independently of one another.
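
(To make that concrete, here is a minimal C++ sketch of how an application might set up independent per-render-target blending through the Direct3D 10.1 API. The device and render targets are assumed to exist elsewhere, and the helper name CreateIndependentBlendState is made up for illustration -- this is a sketch, not code from the API documentation.)

    // Minimal Direct3D 10.1 sketch: independent per-render-target blending,
    // one of the 10.1 additions described above.
    #include <d3d10_1.h>

    HRESULT CreateIndependentBlendState(ID3D10Device1* device,
                                        ID3D10BlendState1** outState)
    {
        D3D10_BLEND_DESC1 desc = {};
        desc.AlphaToCoverageEnable  = FALSE;
        desc.IndependentBlendEnable = TRUE;   // new in 10.1: each RT gets its own blend setup

        // Render target 0: standard alpha blending.
        desc.RenderTarget[0].BlendEnable           = TRUE;
        desc.RenderTarget[0].SrcBlend              = D3D10_BLEND_SRC_ALPHA;
        desc.RenderTarget[0].DestBlend             = D3D10_BLEND_INV_SRC_ALPHA;
        desc.RenderTarget[0].BlendOp               = D3D10_BLEND_OP_ADD;
        desc.RenderTarget[0].SrcBlendAlpha         = D3D10_BLEND_ONE;
        desc.RenderTarget[0].DestBlendAlpha        = D3D10_BLEND_ZERO;
        desc.RenderTarget[0].BlendOpAlpha          = D3D10_BLEND_OP_ADD;
        desc.RenderTarget[0].RenderTargetWriteMask = D3D10_COLOR_WRITE_ENABLE_ALL;

        // Render target 1: additive blending, independent of RT 0.
        desc.RenderTarget[1].BlendEnable           = TRUE;
        desc.RenderTarget[1].SrcBlend              = D3D10_BLEND_ONE;
        desc.RenderTarget[1].DestBlend             = D3D10_BLEND_ONE;
        desc.RenderTarget[1].BlendOp               = D3D10_BLEND_OP_ADD;
        desc.RenderTarget[1].SrcBlendAlpha         = D3D10_BLEND_ONE;
        desc.RenderTarget[1].DestBlendAlpha        = D3D10_BLEND_ZERO;
        desc.RenderTarget[1].BlendOpAlpha          = D3D10_BLEND_OP_ADD;
        desc.RenderTarget[1].RenderTargetWriteMask = D3D10_COLOR_WRITE_ENABLE_ALL;

        return device->CreateBlendState1(&desc, outState);
    }

(Under Direct3D 10.0 the equivalent description only allowed one set of blend factors shared by all bound render targets.)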

Shadows never fail to be an important part of any game title's graphics engine, and Direct3D 10.1 will see improvements to the shadow filtering capabilities within the API, which will hopefully lead to improvements in image quality in this regard.

On the performance side of things, DirectX 10.1 will allow for higher performance in multi-core systems, which is certainly good news for the ever growing numbers of dual-core users out there. The number of calls to the API when drawing and rendering reflections and refractions (two commonly used features in modern game titles) has been reduced in Direct3D 10.1, which should also make for some rather nice performance boosts. Finally, another oft-used feature, cube mapping, gets its own changes which should help with performance, in the form of the ability to use an indexable array for handling cube maps.
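
(As a rough illustration of the cube map array change, the sketch below assumes a Direct3D 10.1 device and an already-created Texture2D array holding 6 * numCubes faces, and builds a single shader resource view over all of the cubes so a shader can index them in one pass. The function name and the texture format are made up for the example.)

    // Minimal Direct3D 10.1 sketch: a shader resource view over a cube map array.
    #include <d3d10_1.h>

    HRESULT CreateCubeArraySRV(ID3D10Device1* device,
                               ID3D10Texture2D* cubeArrayTex,   // 6 * numCubes array slices
                               UINT numCubes,
                               ID3D10ShaderResourceView1** outSrv)
    {
        D3D10_SHADER_RESOURCE_VIEW_DESC1 desc = {};
        desc.Format        = DXGI_FORMAT_R8G8B8A8_UNORM;               // assumed texture format
        desc.ViewDimension = D3D10_1_SRV_DIMENSION_TEXTURECUBEARRAY;   // new view type in 10.1
        desc.TextureCubeArray.MostDetailedMip  = 0;
        desc.TextureCubeArray.MipLevels        = 1;
        desc.TextureCubeArray.First2DArrayFace = 0;
        desc.TextureCubeArray.NumCubes         = numCubes;
        return device->CreateShaderResourceView1(cubeArrayTex, &desc, outSrv);
    }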

One of the major additions which will impact image quality in DirectX 10.1 regards precision, in a couple of different disciplines. Firstly, this revision of the API will see the introduction of 32-bit floating-point filtering over the 16-bit filtering currently on show in DirectX 9 and 10 -- this should improve the quality of High Dynamic Range rendering that uses this functionality over what is currently available.


"Paying an extra $500 for a computer in this environment -- same piece of hardware -- paying $500 more to get a logo on it? I think that's a more challenging proposition for the average person than it used to be." -- Steve Ballmer

Related Articles
Intel Discusses GPU, Hybrid CPUs
March 17, 2008, 4:55 PM
Update: NVIDIA to Acquire AGEIA
February 4, 2008, 5:31 PM












