
AMD says GPU physics is dead until DirectX 11

PC gamers have been looking for more than just pretty graphics when it comes to their game titles over the last few years. Not only do gamers want realistic graphics, but they want realistic physics as well.

Physics have long been a part of PC and console games to some extent. As games get more complex the mathematical calculations required for accurately rendering things on screen like smoke and explosions gets more complex as well.

GPU makers ATI and NVIDIA both know the value of physics processing and both companies put forth similar ways to tackle physics for video games. DailyTech reported in January of 2007 leaked specifications from ATI showing exactly what would be required for its asymmetric physics processing. Almost a year before those documents were leaked, DailyTech reported on NVIDIA’s Quantum physics engine.

Things in the world of video game physics heated up when Intel announced in September that it intended to buy Havok, the company whose physics software is widely used by game developers around the world. Xbit Labs reports today that AMD’s Developer Relations Chief, Richard Huddy is saying that GPU physics is dead for now.

The reason Huddy says GPU physics is dead is that Havok, now owned by Intel, is reportedly releasing its Havok FX effects engine, the component responsible for computing physics on the GPU, without support. That is assuming Havok doesn't abandon the Havok FX engine altogether. DirectX 11 is projected to support physics on the GPU, so we may not see GPU physics processing until DirectX 11 ships. This should be great news to the ears of AGEIA, which recently announced it would be developing a mobile physics processor.

Exactly how this will affect mainboards that NVIDIA already has in development remains to be seen; the replacement for the NVIDIA 680i mainboard is said to have three PCIe slots. Whether one of those slots was slated for GPU physics is unknown; if it was, that could explain why the 680i replacement was pushed back from its rumored mid-November launch date.


By kilkennycat on 11/23/2007 1:46:50 PM , Rating: 2
Extract from a recent interview with Cevat Yerli (CEO, Crytek) and nVidia's Roy Taylor:

Crysis uses a developed physics system. There are attempts to calculate physics systems with the GPU. Are Crytek and NVIDIA going that way?

NVIDIA_Roy: Let me answer generally and then specifically

Generally we believe that the GPU can stand by itself as a powerful processor more than capable of accelerating advanced physics for today's and future games. The GPU lends itself well to scalable, violent or destructible physics. What we need is an industry standard API that developers and the community can get behind, that isn't proprietary. Ideally the developer can then select the GPU or other processor as they see fit. We don't have one today, and this is something we are looking into.

Specifically, with regard to CryEngine 2, we are in discussions with the team about this but can't add more right now

Crysis uses an in-house developed physics engine, and nVidia has been very closely tied in with Crytek for a long time, providing advanced hardware and close technical support.

nVidia is actively working on the successor-family to the 8xxx GPUs. This new family is intended to fully support both GPU and GPGPU applications with the same silicon - it is expected to include full double-precision data paths plus other GPGPU-oriented enhancements. PCIe 2.0 provides enhanced data-bandwidth between the GPU(s) and the central processor, which may help better share (for example) bulk-physics calculations between the GPU and the CPU core(s). Particle-physics effects will no doubt remain in the province of the GPU(s).
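The split described above, bulk physics shared between the CPU and GPU while particle effects stay on the GPU, works because particle updates are independent of one another. A minimal sketch of that idea (illustrative only; the names, constants, and data layout here are assumptions, and a real engine would run this as one GPU thread per particle rather than a Python loop):

```python
# Minimal sketch of a per-frame particle-physics update.
# Each particle is advanced independently, which is exactly the
# "embarrassingly parallel" shape that maps well onto a GPU.

DT = 1.0 / 60.0     # fixed timestep: one 60 Hz frame
GRAVITY = -9.81     # m/s^2, applied to the y axis

def step_particles(particles):
    """Advance each particle one frame with explicit Euler integration.

    `particles` is a list of dicts holding 'pos' and 'vel' 3-vectors.
    No particle reads another particle's state, so every iteration of
    this loop could run as its own GPU thread.
    """
    for p in particles:
        p["vel"][1] += GRAVITY * DT  # integrate acceleration into velocity
        # integrate velocity into position, component by component
        p["pos"] = [x + v * DT for x, v in zip(p["pos"], p["vel"])]
    return particles

# Example: one piece of debris launched straight up at 10 m/s.
debris = [{"pos": [0.0, 0.0, 0.0], "vel": [0.0, 10.0, 0.0]}]
step_particles(debris)
```

Gameplay-affecting "bulk" physics (rigid bodies, collisions) needs results back on the CPU every frame, which is where the PCIe 2.0 bandwidth mentioned above matters; pure effects particles like the debris here never need to come back, which is why they can stay entirely on the GPU.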

nVidia is also rapidly evolving the CUDA toolset for GPGPU applications on its current GPUs, and is in the process of merging the CUDA driver with its current graphics driver in preparation for the next generation.

You can bet that Crytek and nVidia will both attempt to stretch whatever new hardware nVidia produces long before its release to the general public. And nVidia is not averse at all to publicly releasing proposed new API specifications without any price tag for usage.

By Sureshot324 on 11/25/2007 12:53:18 PM , Rating: 2
I think PhysX has a good chance of succeeding, because they are essentially offering the software version of their API for free. A game can use the PhysX platform to handle all the in-game physics without the need to have a PhysX card. It just won't work as well as having an actual PhysX card.

A lot of video game developers are going to choose PhysX over Havok simply because PhysX is free and Havok costs money to license. Epic used the PhysX API for UT3, and the Unreal engine is by far the most popular licensed engine out there. Just check the Wikipedia articles on all the major engines and see the lists of games.

Once there is a large base of games out there using the PhysX API, buying a PhysX card is going to become a lot more attractive.
