
AMD says GPU physics is dead until DirectX 11

Over the last few years, PC gamers have been looking for more than just pretty graphics in their game titles. Not only do gamers want realistic graphics, they want realistic physics as well.

Physics has long been a part of PC and console games to some extent. As games get more complex, the mathematical calculations required to accurately render on-screen effects like smoke and explosions get more complex as well.

GPU makers ATI and NVIDIA both know the value of physics processing, and both companies put forth similar ways to tackle physics for video games. In January 2007, DailyTech reported on leaked specifications from ATI showing exactly what would be required for its asymmetric physics processing. Almost a year before those documents were leaked, DailyTech reported on NVIDIA’s Quantum physics engine.

Things in the world of video game physics heated up when Intel announced in September that it intended to buy Havok, the company whose physics software is widely used by game developers around the world. Xbit Labs reports today that AMD’s Developer Relations Chief, Richard Huddy, says that GPU physics is dead for now.

The reason Huddy says GPU physics is dead is that Havok, now owned by Intel, is said to be releasing Havok FX, the effects engine responsible for computing physics on the GPU, without support. That is assuming Havok doesn’t abandon the Havok FX engine altogether. DirectX 11 is projected to support physics on the GPU, and it may take the release of DirectX 11 before we see GPU physics processing. This should be great news to the ears of AGEIA, which recently announced it would be developing a mobile physics processor.

Exactly how this will affect mainboards that NVIDIA already has in development remains to be seen; the replacement for the NVIDIA 680i mainboard is said to have three PCIe slots. Whether one of those slots was slated for GPU physics is unknown; however, this could be why the 680i replacement was pushed back from its rumored mid-November launch date.



Comments



RE: My Thoughts
By RMSe17 on 11/21/2007 10:08:22 PM , Rating: 5
Oh no, they should not. Having another card, or a non-free add-on to high-end cards, is not beneficial to the consumer. If they merge the technology into all video cards, like HD video decoding, then maybe. Maybe. Probably still no, because it will detract from better video card potential.

Meanwhile we have 4-core CPUs, with 8-core around the corner. And how many of those cores are used in an average game? With the rapid quadrupling of processor power, the decision that makes the most sense is to allocate one core (at least for now, 2 later, etc.) to physics processing. The result is a lot more affordable to the consumer, and the benefits will be a lot more widespread.

I do not think people want to see another $300 hardware component become a required add-on for an amazing game experience. Nor do people want to pay an extra $200 for a video card that has an extra physics chip on it.


RE: My Thoughts
By Spartan Niner on 11/22/2007 3:59:04 AM , Rating: 2
While we're at it, why don't we integrate CPU/GPU/PPU functionality into one product? That isn't too far from AMD's vision of CPU/GPU melding into one. That would be of ultimate benefit to producer AND consumer because it would merge all three technologies together. In the days of multi-core processing we're well within reach of such a feat in the next decade.

For now working on melding the CPU/GPU is a bit risky for AMD, but I earnestly hope that future developments will bear their strategy out and, most importantly, bring them profit (yay competition, yay lower prices!).


RE: My Thoughts
By TSS on 11/22/2007 4:11:56 AM , Rating: 2
Because the use of the PPU has not been proven yet. The few games that support AGEIA's PPU right now only use it to make stuff look better, such as more flying debris. Once it can actually be utilized in a way that makes it a gameplay necessity, then it'll be useful. If it's a contribution, that is, not a detraction.

Besides, a physics processing unit is useless to Joe Average except for games. It could definitely speed up 3ds Max as well (for particles it would be a godsend, really), but once you drop those two... well, I don't need to see the old paperclip start bouncing with "realistic physics"...


RE: My Thoughts
By goku on 11/23/2007 6:39:37 AM , Rating: 2
And what's worse is that GPU physics only adds unnecessary particle effects and doesn't actually change the gameplay in a meaningful way. Until they can get physics that affects gameplay onto the GPU, the AGEIA PhysX card is the only way to get a significant jump in interactivity.


RE: My Thoughts
By Egglick on 11/22/2007 4:54:47 AM , Rating: 4
The majority of the costs involved in producing a physics card come from:

1. Memory
2. PCB
3. Processor
4. Cooling

If you piggyback a physics chip onto a high-end videocard, you eliminate the costs of Memory, PCB, and Cooling, because all of these things already exist on the videocard and can be shared.

The only additional costs involved would be for the Physics processor itself, and the initial reengineering of the above components. I think you're grossly overestimating the costs involved. I don't want to throw out a number, but I think the percentage increase for high-end cards would NOT be substantial. Especially with Ageia wanting to gain market penetration.

As for your arguments against physics processors in general, your points are valid, but I don't know if CPUs are best suited for the task. As we can clearly see with GPUs and graphics, specialized processors can be optimized to be exponentially faster at a specific task than a general-purpose CPU. They don't need to maintain backwards compatibility, and can utilize their own programming language best suited to that particular task. Compare an 8800GTX to software rendering on a CPU.


RE: My Thoughts
By shabby on 11/22/2007 8:29:26 AM , Rating: 3
You make it sound like it's so easy to piggyback the PPU onto a high-end GPU.
First you would need some kind of bridge chip so the PPU and GPU could communicate, and the PCB complexity would go up even further, adding an extra 2 or maybe 4 layers to an already high 8-12 layer board. With high-end GPUs already 9 inches long, putting two extra chips on would add at least another inch, maybe two, in length.

Now add memory chips for the PPU, or let it use the ones the GPU uses. But that would take away memory from the GPU, so you'd have to add more memory onto the PCB: 512MB for the GPU and another 128/256MB for the PPU.

Then when you finally have a finished GPU/PPU package, what are you left with? An expensive GPU with a useless PPU that no game uses.
Unless Microsoft puts physics into DirectX, no one will dare to do this. I believe DX11 will have physics built in; by then ATI/NVIDIA will add that feature into their GPUs and AGEIA will go the way of the dodo.


RE: My Thoughts
By Egglick on 11/23/2007 1:15:32 AM , Rating: 2
I'm not sure that a GPU and Physics processor would need to do a whole lot of communicating. Maybe for the purpose of sharing memory and the PCIe x16 slot, but I'm not knowledgeable enough to get into the technical details of this. It could be as simple as adding a Theater200 chip to the AIW boards, or it could be more involved. You'd have to ask someone with a degree in this field. I'm simply suggesting a plausible theory which looks to have many benefits.

Keep in mind that the next gen high-end cards will likely have 1GB or more, so allocating ~128MB to a physics chip should have minimal impact on performance. I also don't think enthusiasts interested in high-end cards will mind a little extra length if the benefit is large enough.

quote:
Then when you finally have a finished gpu/ppu package what are you left with? An expensive gpu with useless ppu that no game uses.
Most games don't make use of physics processors because there isn't a large enough install base to make it worth their effort. It's an obvious chicken vs egg argument. One of the biggest benefits to doing something like this would be to get market penetration for physics processors. It could be just what we need to get things moving.


RE: My Thoughts
By semo on 11/22/2007 6:19:14 AM , Rating: 1
A dedicated physics chip is best for physics, just as a GPU is for graphics. Even the FLOP machines that GPUs are these days can't yield physics nearly as good as an AGEIA card.

A CPU core or two has no chance.


RE: My Thoughts
By FNG on 11/22/2007 2:54:55 PM , Rating: 2
I agree wholeheartedly and I think AMD and Intel do as well. I thought they were talking about including more specialized instructions to handle video, etc... in the CPU instruction set. Couldn't physics be handled through the addition of instructions and some CPU optimization?


RE: My Thoughts
By StevoLincolnite on 11/22/2007 11:16:09 PM , Rating: 2
Or... how about Intel, AMD, and NVIDIA make their on-board solutions able to handle the physics if you put in a dedicated card? You wouldn't have to spend money on something extra then.


RE: My Thoughts
By murphyslabrat on 11/27/2007 3:28:41 PM , Rating: 3
Oh, so $300 for the CPU is affordable to the average customer? I am sorry, the average customer gets a POS Celeron and uses onboard graphics. The average customer goes nowhere near quad-core.

If you are talking about the average enthusiast customer, then no. Adding a $50 premium to your video card is much more affordable than doubling the cost of your CPU.


"Well, there may be a reason why they call them 'Mac' trucks! Windows machines will not be trucks." -- Microsoft CEO Steve Ballmer














botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki