

NVIDIA GTX 280 3-Way SLI  (Source: NVIDIA)

NVIDIA GTX 280 GPU  (Source: NVIDIA)
The GPUs in the series include the GTX 280 and GTX 260

NVIDIA launched a new family of GPUs today called the GTX 200 series. Within the series there are currently two GPUs -- the GTX 280 and the GTX 260. The GTX 280 is now NVIDIA's flagship GPU and sits in the lineup above the 9800 GX2.

With the new GTX 200 family, NVIDIA is stressing that its GPUs go beyond gaming: they are among the most powerful processors in a PC and can be used for rendering video and other non-graphics work. NVIDIA says its goals for the GTX 200 architecture were to design a processor twice as powerful as the GeForce 8800 GTX, rebalance the architecture for future games with more complex shaders and more memory, improve efficiency per watt and per square millimeter, enhance CUDA performance, and significantly reduce idle power requirements.

NVIDIA says that the GTX 200 line provides nearly a teraflop of computational power. The GTX 200 family also offers support for PhysX-powered physics processing right on the GPU. Both the new GTX 280 and GTX 260 support SLI and 3-way SLI; the previous NVIDIA 9800 GX2 could not support 3-way SLI.
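To give a concrete sense of the CUDA-style, general-purpose work NVIDIA is pitching these chips for, below is a minimal CUDA C++ sketch (illustrative only, not NVIDIA sample code or anything from this article) in which each GPU thread computes one element of y = a*x + y. The kernel name, array size, and launch configuration are arbitrary assumptions.

// Minimal CUDA C++ sketch (illustrative): each thread handles one element of y = a*x + y.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                  // 1M elements (arbitrary size)
    const size_t bytes = n * sizeof(float);

    // Fill host arrays with known values so the result is easy to check.
    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Allocate device memory and copy the inputs over.
    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 3.0f, dx, dy);

    // Copying the result back also waits for the kernel to finish.
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %.1f (expect 5.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}

A GPU like the GTX 280 would spread threads such as these across its 240 processing cores; the larger number of threads in flight described below is largely about keeping those cores busy while other threads wait on memory.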

Key features of the new GTX 200 GPUs include support for three times as many threads in flight at any given time, and a new scheduler design allows for 20 percent greater texturing efficiency. The memory interface is 512-bit on the GTX 280, and full-speed raster operation (ROP) frame blending is supported. The GTX 200 series also features twice the number of registers, for longer and more complex shaders, as well as IEEE 754R double-precision floating point. The GTX 200 line also supports 10-bit color scan-out, via DisplayPort only.

One of the main goals with the GTX 200 line was improved power management. Both GTX 200 series GPUs idle at about 25W and draw around 35W during Blu-ray playback; power requirements under full 3D load vary, topping out at 236W for the GTX 280. The GTX 200 line is also compatible with HybridPower, which can cut the discrete GPU's power draw to effectively 0W.

The GTX 280 is built on a 65nm process and has 1.4 billion transistors. Stock video cards have a graphics clock of 602 MHz, a processor clock of 1,296 MHz, and a memory clock of 2,214 MHz. The GTX 280 has 1GB of GDDR3 memory, 240 processing cores, and 32 ROPs.

The GTX 260 is also built on the 65nm process and has 1.4 billion transistors. The graphics clock for the GTX 260 is 576 MHz, the processor clock is 1,242 MHz, and the memory clock is 1,998 MHz. The memory interface on the GTX 260 is 448-bit, and it has 896MB of GDDR3 memory. The GTX 260 has 192 processing cores and 28 ROPs. The maximum board power is 182W.
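Those published specifications can be sanity-checked with a little arithmetic. The short sketch below is my own back-of-the-envelope calculation rather than anything from NVIDIA: peak memory bandwidth follows from bus width and effective memory clock, and peak single-precision throughput assumes three floating-point operations per core per clock (a multiply-add plus a multiply), which is an assumption on my part. The roughly 933 GFLOPS result for the GTX 280 lines up with the "nearly a teraflop" claim above.

// Back-of-the-envelope peak figures derived from the specs quoted in the article.
// The 3 FLOPs per core per clock figure (MAD + MUL) is an assumption, not NVIDIA data.
#include <cstdio>

int main()
{
    // Peak memory bandwidth = (bus width in bytes) * effective memory clock.
    double bw280 = (512.0 / 8.0) * 2214e6 / 1e9;   // ~141.7 GB/s
    double bw260 = (448.0 / 8.0) * 1998e6 / 1e9;   // ~111.9 GB/s

    // Peak single-precision throughput = cores * processor clock * FLOPs per clock.
    double fl280 = 240 * 1.296e9 * 3.0 / 1e9;      // ~933 GFLOPS
    double fl260 = 192 * 1.242e9 * 3.0 / 1e9;      // ~715 GFLOPS

    printf("GTX 280: %.1f GB/s, %.0f GFLOPS\n", bw280, fl280);
    printf("GTX 260: %.1f GB/s, %.0f GFLOPS\n", bw260, fl260);
    return 0;
}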

Both video cards support PhysX physics processing on the GPU; NVIDIA purchased PhysX developer Ageia in early 2008.

GTX 280 video cards will be available tomorrow for $649 and the GTX 260 cards will be available on June 26 for $399.



Comments



Just curious
By GDstew4 on 6/16/2008 3:32:07 PM , Rating: 2
Has nVidia mentioned why they chose to stick with 10 and not go with 10.1 for this GPU family?




RE: Just curious
By Doormat on 6/16/2008 3:34:33 PM , Rating: 2
Ars had a quote from nvidia along the lines of "game developers didn't find it useful enough for us to include it in the chip".


RE: Just curious
By gyranthir on 6/16/2008 3:39:30 PM , Rating: 2
http://www.anandtech.com/video/showdoc.aspx?i=3334...

"We support Multisample readback, which is about the only dx10.1 feature (some) developers are interested in. If we say what we can't do, ATI will try to have developers do it, which can only harm pc gaming and frustrate gamers."


RE: Just curious
By JLL55 on 6/16/2008 3:49:30 PM , Rating: 2
As mentioned above, NVIDIA came up with a reason and anandtech said uh, BS. Anandtech further tried to reason out why they would not and they came up with some interesting ideas. I would recommend reading: http://www.anandtech.com/video/showdoc.aspx?i=3334

It's a very long article, but very interesting IMHO.


RE: Just curious
By gyranthir on 6/16/2008 4:03:15 PM , Rating: 2
Right but the question was, Has Nvidia mentioned why....

That is their reasoning behind it. Also, I don't really agree with all of Anand's reasoning. Most of that stuff, although cool, isn't terribly useful, and does not improve performance by more than like 1/10 of a percent.

Lots of developers are barely catching up to 10, let alone 10.1 and they have to make them backwards compatible too, which gives them more issues to deal with.


RE: Just curious
By Chadder007 on 6/16/2008 4:34:30 PM , Rating: 2
Wasn't there a game that supposedly supported 10.1 and ran faster on ATI's cards, since they support it? I think NVIDIA (supposedly) then got with the game developers and patched the game to run only in DX10 mode, which made it slower on the ATI cards.
I believe the game was Assassin's Creed.

...found http://www.pcgameshardware.de/aid,645430/News/Ubis...


RE: Just curious
By Polynikes on 6/16/2008 5:15:35 PM , Rating: 2
It was revealed that the faster performance in DX10.1 was due to the way it was implemented in the game. They basically left out part of the rendering work, so with less load it had higher framerates. They removed DX10.1 support from the game because they hadn't implemented it correctly.


RE: Just curious
By Polynikes on 6/16/2008 5:16:26 PM , Rating: 1
Forgot to add: It had nothing to do with influence from Nvidia.


RE: Just curious
By ninjaquick on 6/17/2008 11:41:47 AM , Rating: 2
Give us a quote. I saw some screens and it looked fine. It ran faster on DX10.1 hardware, and the incompatible NVIDIA cards suffered. So gimme some quotes that prove the move back to DX10 was just a bug fix, and I'd also like to see the bad code.


RE: Just curious
By ninjaquick on 6/17/2008 11:43:46 AM , Rating: 2
Oh, and don't give me performance gains without AA, since DX10.1 is supposed to show performance gains with 4xAA on (it's an API hardware requirement).


RE: Just curious
By Polynikes on 6/17/2008 1:44:04 PM , Rating: 2
http://ve3d.ign.com/articles/news/38999/Assassins-...
quote:
However, the performance gains that were observed in the retail version are inaccurate since the implementation was wrong and a part of the rendering pipeline was broken.

quote:
Unfortunately, our original implementation on DX10.1 cards was buggy and we had to remove it.

As for your retarded demand that I give you code, well, I don't work for Ubisoft.


RE: Just curious
By kilkennycat on 6/17/2008 5:40:58 PM , Rating: 2
The reasons quoted by nVidia for not going with DX10.1 appear to be mere platitudes to calm consumer purchasers. The focus for the release of the GTX 280/260 does not appear to be desktop gamers at all -- if PC gamers with lots of spare cash and the "need to have the best" buy them, it sure helps cover the development cost, and the retail availability of the 280/260 will fend off competition for that money from ATi. The true focus of the 280/260 is the non-consumer crowd that needs massively-parallel processing on the desktop -- the Tesla initiative, the CUDA users. These folk have no need for any increment to DX10; the GTX 280/260 removes some very significant, time-consuming computational barriers for them. Notice the double-precision data paths and the significant additional on-board processing horsepower -- a very rapidly growing and extremely profitable chunk of nVidia's business. No doubt nVidia has its true next-gen (DX10.x etc.) GPU/GPGPU family in development; expect to see it sometime early next year.

A die-shrink of the 280/260 to either TSMC's 55nm or 45nm process is highly likely before the end of 2008. The new family of GPU/GPGPUs will then no doubt be introduced on the same process -- a variant of Intel's "tick-tock" strategy.
