
AMD says GPU physics is dead until DirectX 11

Over the last few years, PC gamers have come to expect more than just pretty graphics from their game titles. Not only do gamers want realistic graphics, they want realistic physics as well.

Physics has long been a part of PC and console games to some extent. As games get more complex, the mathematical calculations required to accurately render on-screen effects like smoke and explosions get more complex as well.

GPU makers ATI and NVIDIA both know the value of physics processing, and both companies have put forth similar ways to tackle physics for video games. In January 2007, DailyTech reported on leaked ATI specifications showing exactly what would be required for its asymmetric physics processing. Almost a year before those documents were leaked, DailyTech reported on NVIDIA's Quantum physics engine.

Things in the world of video game physics heated up when Intel announced in September that it intended to buy Havok, the company whose physics software is widely used by game developers around the world. Xbit Labs reports today that AMD's Developer Relations Chief, Richard Huddy, says that GPU physics is dead for now.

The reason Huddy says GPU physics is dead is that Havok, now owned by Intel, is reportedly releasing its Havok FX engine, the component responsible for computing physics on the GPU, without support. That is assuming Havok doesn't abandon the Havok FX engine altogether. DirectX 11 is projected to support physics on the GPU, and it may take the release of DirectX 11 before we see GPU physics processing. This should be great news to the ears of AGEIA, which recently announced it would be developing a mobile physics processor.

Exactly how this will affect mainboards that NVIDIA already has in development remains to be seen; the replacement for the NVIDIA 680i mainboard is said to have three PCIe slots. Whether one of those slots was slated for GPU physics is unknown, but this could be why the 680i replacement was pushed back from its rumored mid-November launch date.



Comments

My Thoughts
By Egglick on 11/21/2007 6:56:24 PM , Rating: 5
AMD ought to enter into a partnership with Ageia, and start piggybacking Ageia's physics processors onto their top videocards.

Not only would this give AMD another major selling point, but it would allow Ageia to get mass market penetration. It would also solve the problem of interface bandwidth for the physics (since it would be sharing an x16 slot), and be fairly cheap because it could share the card's ultra fast memory (especially if it has 1GB). I don't think either of these things would put a damper on GPU performance.

Both sides benefit, and the customers benefit too. win/win/win




RE: My Thoughts
By mmatis on 11/21/07, Rating: -1
RE: My Thoughts
By Egglick on 11/21/2007 7:13:30 PM , Rating: 5
part·ner·ship n.
1. A relationship between individuals or groups that is characterized by mutual cooperation and responsibility, as for the achievement of a specified goal


RE: My Thoughts
By chrispyski on 11/21/07, Rating: -1
RE: My Thoughts
By TheCurve314 on 11/21/2007 9:49:56 PM , Rating: 5
He's talking about AGEIA, not Havok.


RE: My Thoughts
By RMSe17 on 11/21/2007 10:08:22 PM , Rating: 5
Oh no, they should not. Having another card, or a non-free add-on to high-end cards, is not beneficial to the consumer. If they merge the technology into all video cards, like HD video decoding, then maybe. Maybe. Probably still no, because it would detract from the card's potential as a better video card.

Meanwhile we have 4-core CPUs, with 8-core around the corner. And how many of those cores are used in an average game? With processor power multiplying so rapidly, the decision that makes the most sense is to allocate one core (at least for now, two later, etc.) to physics processing. The result is a lot more affordable to the consumer, and the benefits will be a lot more widespread.

I do not think people want to see another $300 hardware component become a needed add-on for an amazing game experience. Nor do people want to pay an extra $200 for a video card that has an extra physics chip on it.
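Roughly, "give physics its own core" looks like the sketch below: a dedicated thread stepping the simulation at a fixed 60 Hz while the main thread keeps rendering. It is purely illustrative; step_physics() and render_frame() are made-up placeholders, not any real engine's API.

    #include <atomic>
    #include <chrono>
    #include <thread>

    // Hypothetical engine hooks; a real game would supply these.
    void step_physics(double /*dt*/) { /* advance bodies, resolve collisions */ }
    void render_frame()              { /* issue draw calls; the GPU stays busy here */ }

    std::atomic<bool> running{true};

    // Dedicated physics thread: fixed 60 Hz timestep. The OS scheduler keeps it
    // on its own core simply because nothing else competes for this thread.
    void physics_thread() {
        using clock = std::chrono::steady_clock;
        const auto tick = std::chrono::microseconds(16667);   // ~1/60 s
        auto next = clock::now();
        while (running) {
            step_physics(1.0 / 60.0);
            next += tick;
            std::this_thread::sleep_until(next);
        }
    }

    int main() {
        std::thread physics(physics_thread);
        for (int frame = 0; frame < 600; ++frame)   // stand-in for the render loop
            render_frame();
        running = false;
        physics.join();
    }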


RE: My Thoughts
By Spartan Niner on 11/22/2007 3:59:04 AM , Rating: 2
While we're at it, why don't we integrate CPU/GPU/PPU functionality into one product? That isn't too far from AMD's vision of CPU/GPU melding into one. That would be of ultimate benefit to producer AND consumer because it would merge all three technologies together. In the days of multi-core processing we're well within reach of such a feat in the next decade.

For now working on melding the CPU/GPU is a bit risky for AMD, but I earnestly hope that future developments will bear their strategy out and, most importantly, bring them profit (yay competition, yay lower prices!).


RE: My Thoughts
By TSS on 11/22/2007 4:11:56 AM , Rating: 2
Because the use of the PPU has not been proven yet. The few games that do support AGEIA's PPU right now only use it to make stuff look better, such as more flying debris. Only once it can actually be utilized in such a way that it becomes a gameplay necessity will it be useful, and only if it's a contribution, not a detraction.

Besides, a physics processing unit is useless for Joe Average save for use in games. It could definitely speed up 3ds Max as well (for particles it would come as a godsend, really), but as soon as you drop those two... well, I don't need to see the old paperclip start bouncing with "realistic physics"...


RE: My Thoughts
By goku on 11/23/2007 6:39:37 AM , Rating: 2
And what's worse is that GPU physics only adds unnecessary particle effects and doesn't actually change the gameplay in a meaningful way. Until they can get physics that will affect gameplay onto the GPU, the ageia physx card is the only way for a significant jump in interactivity.


RE: My Thoughts
By Egglick on 11/22/2007 4:54:47 AM , Rating: 4
The majority of the costs involved in producing a Physics card come from:

1. Memory
2. PCB
3. Processor
4. Cooling

If you piggyback a Physics chip onto a high-end videocard, you eliminate the costs of Memory, PCB, and Cooling, because all of these things are already present on a videocard and can be shared.

The only additional costs involved would be for the Physics processor itself, and the initial reengineering of the above components. I think you're grossly overestimating the costs involved. I don't want to throw out a number, but I think the percentage increase for high-end cards would NOT be substantial. Especially with Ageia wanting to gain market penetration.

As far as your arguments against physics processors in general, your points are valid, but I don't know if CPUs are best suited for the task. As we can clearly see with GPUs and graphics, specialized processors can be optimized to be exponentially faster at a specific task than a general-purpose CPU. They don't need to maintain backwards compatibility, and can utilize their own programming language best suited to that particular task. Compare an 8800GTX to software rendering on a CPU.


RE: My Thoughts
By shabby on 11/22/2007 8:29:26 AM , Rating: 3
You make it sound like it's so easy to piggyback the PPU onto a high-end GPU.
First you would need some kind of bridge chip so the PPU and GPU could communicate; PCB complexity would go up even further, adding an extra two, maybe four, layers to an already high 8-12 layer board. With high-end GPUs already 9 inches long, putting two extra chips on would add at least another inch, maybe two, in length.

Now add memory chips for the PPU, or let it use the ones the GPU uses. But that would take away memory from the GPU, so you'd have to add more memory onto the PCB: 512MB for the GPU and another 128/256MB for the PPU.

Then when you finally have a finished GPU/PPU package, what are you left with? An expensive GPU with a useless PPU that no game uses.
Unless Microsoft puts physics into DirectX, no one will dare do this. I believe DX11 will have physics built in; by then ATI/NV will add that feature into their GPUs and AGEIA will go the way of the dodo bird.


RE: My Thoughts
By Egglick on 11/23/2007 1:15:32 AM , Rating: 2
I'm not sure that a GPU and Physics processor would need to do a whole lot of communicating. Maybe for the purpose of sharing memory and the PCIe x16 slot, but I'm not knowledgeable enough to get into the technical details of this. It could be as simple as adding a Theater200 chip to the AIW boards, or it could be more involved. You'd have to ask someone with a degree in this field. I'm simply suggesting a plausible theory which looks to have many benefits.

Keep in mind that the next gen high-end cards will likely have 1GB or more, so allocating ~128MB to a physics chip should have minimal impact on performance. I also don't think enthusiasts interested in high-end cards will mind a little extra length if the benefit is large enough.

quote:
Then when you finally have a finished gpu/ppu package what are you left with? An expensive gpu with useless ppu that no game uses.
Most games don't make use of physics processors because there isn't a large enough install base to make it worth their effort. It's an obvious chicken vs egg argument. One of the biggest benefits to doing something like this would be to get market penetration for physics processors. It could be just what we need to get things moving.


RE: My Thoughts
By semo on 11/22/2007 6:19:14 AM , Rating: 1
A dedicated physics chip is best for physics, just as a GPU is for graphics. Even the FLOP machines that today's GPUs have become can't yield nearly as good physics as an AGEIA card.

A CPU core or two has no chance.


RE: My Thoughts
By FNG on 11/22/2007 2:54:55 PM , Rating: 2
I agree wholeheartedly and I think AMD and Intel do as well. I thought they were talking about including more specialized instructions to handle video, etc... in the CPU instruction set. Couldn't physics be handled through the addition of instructions and some CPU optimization?
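For a sense of what "more specialized instructions" could mean in practice, here is a minimal sketch using the SSE intrinsics current CPUs already expose, stepping four particle positions per instruction instead of one. The data layout and function are made up for illustration, not taken from any shipping physics engine.

    #include <xmmintrin.h>   // SSE intrinsics

    // Advance n positions by velocity * dt, four floats at a time.
    void integrate(float* pos, const float* vel, float dt, int n) {
        __m128 vdt = _mm_set1_ps(dt);
        int i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 p = _mm_loadu_ps(pos + i);
            __m128 v = _mm_loadu_ps(vel + i);
            p = _mm_add_ps(p, _mm_mul_ps(v, vdt));   // p += v * dt
            _mm_storeu_ps(pos + i, p);
        }
        for (; i < n; ++i)   // scalar tail for any leftovers
            pos[i] += vel[i] * dt;
    }

    int main() {
        float pos[8] = {0};
        float vel[8] = {1, 1, 1, 1, 1, 1, 1, 1};
        integrate(pos, vel, 1.0f / 60.0f, 8);
    }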


RE: My Thoughts
By StevoLincolnite on 11/22/2007 11:16:09 PM , Rating: 2
Or... how about Intel, AMD and NVIDIA make their on-board graphics solutions able to handle the physics when you put in a dedicated card? You wouldn't have to spend money on something extra then.


RE: My Thoughts
By murphyslabrat on 11/27/2007 3:28:41 PM , Rating: 3
Oh, so $300 for the CPU is affordable to the average customer? I am sorry, the average customer gets a POS Celeron and uses onboard graphics. The average customer goes nowhere near quad-core.

If you are talking about the average enthusiast customer, then no. Adding a $50 premium to your video card is much more affordable than doubling the cost of your CPU.


RE: My Thoughts
By NullSubroutine on 11/22/2007 5:53:05 AM , Rating: 3
I believe your best bet is to have the newer R700 designs (which are supposedly now 45nm 72mm cores) with one of the cores being an AGEIA PPU. I think having 3-4 R700 cores + 1 AGEIA PPU would work out pretty well.


RE: My Thoughts
By Dabruuzer on 11/22/2007 9:41:02 AM , Rating: 2
There are many interesting possible outcomes to these physics developments. Like the one you suggested, they can lead to some exciting products. However (IMO), until there is a unifying SDK that all developers can use to implement physics, it is going to be another "choose sides" issue. Why have AGEIA hardware support if your favorite current/upcoming game only supports Havok? And so on... Now, with DX11 apparently providing that, maybe we will be a little safer in our hardware choices and their compatibility in the coming years.

Of course, I think the whole area of physics engines/hardware can eventually have a huge impact on games and how much immersion and enjoyment we can get out of them. So here's to advancement! ;)


Intel!
By Slaimus on 11/21/2007 7:06:36 PM , Rating: 2
Intel owns Havok now. That is the main reason AMD's support is being pulled.




RE: Intel!
By Haltech on 11/21/2007 8:01:41 PM , Rating: 2
definition of Developer Relations Chief: Diss your competition while providing support to your allies :)


RE: Intel!
By Lonyo on 11/21/2007 9:09:27 PM , Rating: 2
Is that why Intel still has Crossfire support with Intel chipset motherboards?


RE: Intel!
By vortex222 on 11/21/2007 9:48:40 PM , Rating: 2
It's because NVIDIA does not support Intel (or other brands, for that matter) in SLI.

If not for AMD GPUs, who would buy an Intel motherboard for multi-GPU systems?

Conversely, if AMD did the same thing that NVIDIA does, they could kiss most of their GPU sales goodbye and we would see an upswing in NVIDIA MB and GPU sales.


RE: Intel!
By Strunf on 11/23/2007 6:49:21 AM , Rating: 2
Hmm, there are plenty of people buying Intel CrossFire-capable boards who will never use CrossFire, because the board offers other options that make it worthwhile.


What exactly is Havok FX?
By i4mt3hwin on 11/21/2007 6:23:14 PM , Rating: 2
I always thought Havok FX utilized SM3.0 to do physics that isn't essential to gameplay: things like smoke, water and effects. I never thought it was actually used to do calculations for objects. I also thought that CUDA allowed programmers to write C-based physics engines that could be used in unison with a game, so that you could devote 1/4 of the stream processors to the physics calculations and 3/4 to the rendering of the game.

So what exactly is going on? And current 680i boards already have 3 PCI-E slots, so it almost seems like GPU physics has been dead from the start? I also guess Hellgate: London had its Havok FX support cut... http://hellgate.incgamers.com/forums/showthread.ph...
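To make the CUDA point concrete: a "C-based physics engine" kernel really is just ordinary code run across however many stream processors the scheduler hands it. The sketch below integrates a big particle system on the GPU; the kernel, sizes and launch configuration are purely illustrative, not taken from any shipping engine.

    #include <cuda_runtime.h>
    #include <vector>

    // One thread per particle: the same math a CPU loop would do, in parallel.
    __global__ void integrate(float3* pos, const float3* vel, float dt, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        pos[i].x += vel[i].x * dt;
        pos[i].y += vel[i].y * dt;
        pos[i].z += vel[i].z * dt;
    }

    int main() {
        const int n = 1 << 20;                       // ~1 million particles
        std::vector<float3> h_pos(n), h_vel(n);
        float3 *d_pos, *d_vel;
        cudaMalloc((void**)&d_pos, n * sizeof(float3));
        cudaMalloc((void**)&d_vel, n * sizeof(float3));
        cudaMemcpy(d_pos, h_pos.data(), n * sizeof(float3), cudaMemcpyHostToDevice);
        cudaMemcpy(d_vel, h_vel.data(), n * sizeof(float3), cudaMemcpyHostToDevice);

        integrate<<<(n + 255) / 256, 256>>>(d_pos, d_vel, 1.0f / 60.0f, n);
        cudaDeviceSynchronize();

        cudaFree(d_pos);
        cudaFree(d_vel);
    }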





RE: What exactly is Havok FX?
By Anosh on 11/21/2007 6:48:41 PM , Rating: 2
I dunno about the SM3.0 part, but it does sound like NVIDIA and AMD are pulling their support for Havok since Intel went and pulled the rug right out from under them.


RE: What exactly is Havok FX?
By Lonyo on 11/21/2007 8:08:59 PM , Rating: 3
AFAIK there were two versions of Havok: the regular CPU-based one, and the one which supported SM3.0-based calculations.
It seems like all the Havok licenses being sold are only for the regular software version, and they aren't giving out support for the SM3.0 version.
Makes sense from an Intel point of view; they have nothing to gain from developing a version of Havok which uses SM3.0, since they don't have any cards which would really work with it.


Best thing Ati and Nvidia could have done.
By MrKaz on 11/22/2007 5:33:00 AM , Rating: 3
This is not a big deal.

Psychics will be supported through Microsoft DirectX 11.

In the end, any advantage that Intel might have from having bought Havok will disappear, since DX11 will dictate the rules, and developers will take the obvious route of using DX11.
Havok could have an advantage if they keep developing for the consoles, where ported games will probably still use their API.
However, I don't know about OpenGL...




By winterspan on 11/23/2007 2:42:16 AM , Rating: 2
"Psychics will be supported Through Microsoft DirectX 11"..
Wow! That's news to me! So will the supporting games be capable of telling you how bad you suck before you even play the game? :)


3rd GPU Physics slot
By kuyaglen on 11/21/2007 7:49:52 PM , Rating: 2
At GeForce LAN 4, NVIDIA said that the 3rd PCIe x16 slot is not going to be for physics but for Tri-SLI.

http://tmc.kuyaglen.com/lans/geforcelan4/gfl4_081....

http://tmc.kuyaglen.com/lans/geforcelan4/gfl4_083....




RE: 3rd GPU Physics slot
By RMSe17 on 11/21/2007 10:10:28 PM , Rating: 2
Good!


Gpu physics wont work
By xNIBx on 11/23/2007 5:49:48 AM , Rating: 3
It is simple, people. Current and future games will always stress the GPU to the fullest. Almost all games are GPU dependent. Almost no games can fully utilise multicore CPUs.

We have tons of CPU power going underutilised while the GPUs are fully utilised. Therefore, it is a lot more logical to have the CPU do the physics, isn't it?

And what is the best physics engine/API for CPUs atm? Havok's. Intel struck gold with this move. They automatically get almost the entire physics market, and they can optimise it for their CPUs.




RE: Gpu physics wont work
By goku on 11/23/2007 7:05:14 AM , Rating: 1
The reason for this is that going from 70% to 100% CPU utilization gives only a minimal increase in potential game interactivity, not to mention there are quite a few systems with malware and other crap running in the background sucking up CPU power, which devs have to account for. A non-optimized system with programs running in the background will affect available CPU and memory resources, but it won't affect GPU resources; the only concern would be outdated drivers, and even that is a non-issue for most people. Therefore, spending more time on fully utilizing the GPU means that two similarly configured systems won't show large differences in performance depending on whether or not a given machine is "clean".

A PPU is another one of those task-specific resources: since running Vista or F@H or some virus/malware application in the background generally won't affect its performance, optimizing for the PPU means you can expect similar performance across various configurations that have this part in common.


By kilkennycat on 11/23/2007 1:46:50 PM , Rating: 2
Extract from a recent interview with Cevat Yerli (CEO, Crytek) and nVidia's Roy Taylor:

quote:
Crysis uses a developed physics system. There are attempts to calculate physics systems with the GPU. Are Crytek and NVIDIA going that way?


quote:
NVIDIA_Roy: Let me answer generally and then specifically


quote:
Generally we believe that the GPU can stand by itself as a powerful processor more than capable of accelerating advanced physics for today's and future games. The GPU lends itself well to scaleable, violent or destructable physics. What we need is an industry standard API that developers and the community can get behind, that isn't proprietary. Ideally the developer can then select the GPU or other processor as they see fit. We don't have one today, and this is something we are looking into.


quote:
Specifically, with regard to CryEngine 2, we are in discussions with the team about this but can't add more right now


Crysis uses an in-house developed physics engine, and nVidia has been very closely tied in with Crytek for a long time, providing advanced hardware and close technical support.

nVidia is actively working on the successor family to the 8xxx GPUs. This new family is intended to fully support both GPU and GPGPU applications with the same silicon - it is expected to include full double-precision data paths plus other GPGPU-oriented enhancements. PCIe 2.0 provides enhanced data bandwidth between the GPU(s) and the central processor, which may help better share (for example) bulk-physics calculations between the GPU and the CPU core(s). Particle-physics effects will no doubt remain the province of the GPU(s).

nVidia is also rapidly evolving the CUDA toolset for GPGPU applications with their current GPUs, and is in the process of merging the CUDA driver with their current graphics driver in preparation for the next gen.

You can bet that Crytek and nVidia will both attempt to stretch whatever new hardware nVidia produces long before its release to the general public. And nVidia is not averse at all to publicly releasing proposed new API specifications without any price tag for usage.
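A rough sketch of what sharing bulk physics between the GPU and a CPU core over PCIe could look like, using only the public CUDA runtime API; the 50/50 split, the kernel and the data are placeholders for illustration, not anything nVidia or Crytek has announced.

    #include <cuda_runtime.h>
    #include <thread>
    #include <vector>

    __global__ void integrate_gpu(float* pos, const float* vel, float dt, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) pos[i] += vel[i] * dt;
    }

    void integrate_cpu(float* pos, const float* vel, float dt, int n) {
        for (int i = 0; i < n; ++i) pos[i] += vel[i] * dt;
    }

    int main() {
        const int n = 1 << 20, n_gpu = n / 2, n_cpu = n - n_gpu;   // arbitrary split
        std::vector<float> pos(n), vel(n, 1.0f);
        float *d_pos, *d_vel;
        cudaMalloc((void**)&d_pos, n_gpu * sizeof(float));
        cudaMalloc((void**)&d_vel, n_gpu * sizeof(float));

        cudaStream_t s;
        cudaStreamCreate(&s);

        // The bulk half travels over PCIe and is integrated on the GPU...
        cudaMemcpyAsync(d_pos, pos.data(), n_gpu * sizeof(float), cudaMemcpyHostToDevice, s);
        cudaMemcpyAsync(d_vel, vel.data(), n_gpu * sizeof(float), cudaMemcpyHostToDevice, s);
        integrate_gpu<<<(n_gpu + 255) / 256, 256, 0, s>>>(d_pos, d_vel, 1.0f / 60.0f, n_gpu);
        cudaMemcpyAsync(pos.data(), d_pos, n_gpu * sizeof(float), cudaMemcpyDeviceToHost, s);

        // ...while a CPU core integrates its own share at the same time.
        std::thread cpu(integrate_cpu, pos.data() + n_gpu, vel.data() + n_gpu,
                        1.0f / 60.0f, n_cpu);

        cudaStreamSynchronize(s);
        cpu.join();

        cudaStreamDestroy(s);
        cudaFree(d_pos);
        cudaFree(d_vel);
    }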




By Sureshot324 on 11/25/2007 12:53:18 PM , Rating: 2
I think PhysX has a good chance of succeeding, because they are essentially offering the software version of their API for free. A game can use the PhysX platform to handle all the in-game physics without the need to have a PhysX card. It just won't work as well as having an actual PhysX card.

A lot of video game developers are gonna choose PhysX over Havok just because PhysX is free and Havok costs money to license. Epic used the PhysX API for UT3, and the Unreal engine is by far the most popular licensed engine out there. Just check the Wikipedia articles on all the major engines and see the lists of games.

Once there is a large base of games out there using the PhysX API, buying a PhysX card is going to become a lot more attractive.


By ShadowZERO on 11/22/2007 1:39:19 AM , Rating: 2
From this article on July 25th:
"Where's The Physics: The State of Hardware Accelerated Physics"
http://www.anandtech.com/showdoc.aspx?i=3048&p=1

quote:
The second reason, and that which has the greater effect, is a slew of technical details that stem from using Havok FX. Paramount to this is what the GPU camp is calling physics is not what the rest of us would call physics with a straight face. As Havok FX was designed, the physics simulations run on the GPU are not retrievable in a practical manner, as such Havok FX is designed to be used to generate "second-order" physics. Such physics are not related to gameplay and are inserted as eye-candy. A good example of this is Ghost Recon: Advanced Warfighter, which we'll ignore was a PhysX powered title for the moment and focus on the fact that it used the PhysX hardware primarily for extra debris.


So according to this, GPU physics via Havok FX can't handle first-order physics, i.e. results the game needs back before the scene is rendered by the GPU. Eye candy only, you say? I think I'll pass on all this and wait for REAL hardware-accelerated physics, such as that shown in this video.

http://www.youtube.com/watch?v=d3TU65KaPXI

Too bad AGEIA's PhysX technology is doing so poorly, probably part of the reason that game was scrapped. Still, I won't likely buy into any kind of hardware physics until I can get my hands on a dedicated PPU with decent market support.
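The "second-order" limitation boils down to readback. Effects physics can stay on the card and feed straight into rendering, but gameplay physics has to come back to the CPU every frame so the game code can react to it. A compact sketch of that round trip follows; the kernel and the "did this debris hit something" test are invented placeholders, not Havok FX or PhysX code.

    #include <cuda_runtime.h>
    #include <vector>

    // Placeholder test, one thread per debris chunk.
    __global__ void test_hits(const float* chunk_y, int* hit, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n && chunk_y[i] < 0.0f) hit[i] = 1;   // chunk fell below the floor
    }

    int main() {
        const int n = 4096;
        std::vector<float> h_y(n, 1.0f);
        std::vector<int>   h_hit(n, 0);
        float* d_y;
        int*   d_hit;
        cudaMalloc((void**)&d_y,   n * sizeof(float));
        cudaMalloc((void**)&d_hit, n * sizeof(int));
        cudaMemcpy(d_y,   h_y.data(),   n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(d_hit, h_hit.data(), n * sizeof(int),   cudaMemcpyHostToDevice);

        test_hits<<<(n + 255) / 256, 256>>>(d_y, d_hit, n);

        // Second-order (eye-candy) physics stops here: results only feed rendering.
        // First-order (gameplay) physics needs this per-frame readback so the CPU
        // game logic can act on it; that is the step Havok FX was not designed for.
        cudaMemcpy(h_hit.data(), d_hit, n * sizeof(int), cudaMemcpyDeviceToHost);

        cudaFree(d_y);
        cudaFree(d_hit);
    }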




By R3MF on 11/22/2007 3:30:16 AM , Rating: 2
yeah baby.




Multi-Core GPU/CPU/PPU for sure
By BSMonitor on 11/27/2007 4:06:30 PM , Rating: 2
The Intel behemoth cannot be stopped.

You know what else is interesting to note? Guess who would love nothing more than to see a multi-core CPU/GPU/PPU from Intel?

Need help?

Could make a hell of a happy "green" iMac with one of those.




Main issues
By jwarrent on 11/27/2007 9:06:43 PM , Rating: 2
I have to agree with the DX11 route as well. It seems there are a few main issues.

- Unless a developer can assume with certainty that all user hardware will support their physics (as on consoles), we will never see hardcore physics truly integrated into gameplay. Can you imagine dying in a multiplayer match from a crumbling building you couldn't see due to your hardware? Graphics are one thing, but physics are... well, physical.

- DX is obviously popular because it's a standard. If DX can support multiple types of hardware for handling physics, developers might not have to worry about it nearly as much. We can then all see that crumbling building smash you to bits.

- With the GPU architecture, physics processing is even more attractive because it can be done in parallel, especially with the latest hardware like the 8800s. I believe that would be a nagging issue for attempting to handle physics on a CPU, no matter how many cores. But any graphically intensive game wants as much of the video card as it can get. It's going to be a long time before that hardware is somehow more than we need.

As a side note, as 3D spices up our OSes more and more, I think there is plenty of room for physics to be introduced, from interactive windows, icons, the trash can, directory and file manipulation, to solitaire. It's all meaningless visual stimulation... but that's a big reason the iPhone works for people: it's fun!




Have you seen the latest
By Builder15 on 11/29/2007 2:35:34 PM , Rating: 2
It looks like AMD might try to buy AGEIA. I think Intel should buy them and lock up the physics.




DX11
By iFX on 11/23/2007 10:14:55 AM , Rating: 1
I'm still on DX9 hardware... Yeesh.




Dedicated
By ethana2 on 11/26/2007 9:11:20 PM , Rating: 1
I don't /want/ physics. I don't /want/ graphics. I want 256 cores that I can do with as I frigging please. I think it's high time we stopped thinking 'CPU' and 'GPU'.

They need to be the Serial Processing Unit and the Parallel Processing Unit. Paradigm shift. If I have 64 small cores running at 200MHz, why does my media player use my 2.8 GHz core? Why does Blender do its ray tracing and baking on my 2.8 GHz core? Why does protein folding use my 2.8 GHz core? Video encoding? Real-time disc decryption? tar.gz compression and decompression? What the heck?

...and why does all that power need a /driver/ just to work? Do you need a /driver/ to use a dual-core CPU? No, you just enable SMP. It should be that simple.




"It looks like the iPhone 4 might be their Vista, and I'm okay with that." -- Microsoft COO Kevin Turner














botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki