
The arms race for physics processing has begun

Hot on the heels of NVIDIA announcing its partnership with Havok for GPU-level physics implementations, ATI is saying it too is capable of performing heavy physics computations on its GPUs. There is currently a great deal of focus on how to speed up and implement better physics in games, and, interestingly, AGEIA has been preaching that physics processing belongs on a discrete processor designed to handle just physics and nothing else. This approach is much like the way 3dfx designed its first few successful 3D processors, which did nothing except accelerate 3D.

Both ATI and NVIDIA are using the same method of computing physics on their GPUs -- offload some physics calculations if there is just one GPU, or offload all physics calculations onto one full GPU if there are two (Crossfire or SLI). In both cases the approach is monolithic, meaning that both ATI and NVIDIA prefer to keep everything related to graphics on the GPU. ATI claims that its latest X1900 family has more than enough processing power sitting idle most of the time to handle physics alongside 3D rendering. This is a strong indication that the current state of 3D graphics is far too concerned with frame rate when it should be looking at how best to utilize the chips that ATI and NVIDIA produce.

According to ATI, the ability to process physics exists on both the R520 and R580 architectures. The functionality is enabled via software drivers and can be delivered in various ways. ATI says that it will implement a low-level proprietary API that developers can use to pass physics functions to. The proprietary API allows a game to bypass Direct3D or OpenGL completely and communicate with the hardware, though a developer can still opt to use Direct3D or OpenGL instead.

ATI is also saying that its method for processing physics on the GPU is superior to both AGEIA's and NVIDIA's. According to the company, those who have already purchased any of the X1800 or X1900 series cards can rest assured that their investment will last. Using its proprietary API, ATI is able to offload physics processing to any GPU in a dual-GPU setup, regardless of whether the cards are in Crossfire mode or even from the same family. This way, those who upgrade later can use their existing X1800 or X1900 cards for discrete physics processing while using the newer card for 3D acceleration duties. As of right now, ATI's method appears to combine the benefits of AGEIA's discrete processing with the ability to switch between Crossfire and Crossfire + physics.

Physics processing has only been a hot topic recently, most notably after AGEIA went public with its announcement of the "first PPU." With both ATi and NVIDIA now announcing that they are strong players in physics processing, AGEIA's original intent of "complementing" existing graphics cards is under heavy fire.


ATi is the best at physics?
By DarthPierce on 3/23/2006 6:46:07 PM , Rating: 2
They don't really seem to explain anything about their ability to make the physics processing usable for anything other than graphical effects... which is the same problem nvidia's solution faces, as opposed to Ageia's, where the physics can affect gameplay in addition to graphics.

RE: ATi is the best at physics?
By soydeedo on 3/23/2006 7:08:50 PM , Rating: 5
the link given here takes you to the last page of that article. go one or two pages forward to read more details.

personally i think the fact that you can pair up two cards from different families makes all the difference when compared to nvidia and finally i may regret not buying an sli mobo. i think this approach is probably how it will end up just for simplicity's sake.

RE: ATi is the best at physics?
By soydeedo on 3/23/2006 7:16:13 PM , Rating: 4
or you can check the second question down at this link: e/ati_physics/pa...

RE: ATi is the best at physics?
By uop on 3/25/2006 1:40:24 AM , Rating: 2
ATI claim they can do better physics than Ageia because they can handle more GFLOPS (375 for ATI vs. 100 for Ageia).

ATI claim they can do better physics than Nvidia because their architecture is much more suitable for it - unified shader units and a 3:1 shader/pipeline ratio - which means they have more spare power left than Nvidia.

RE: ATi is the best at physics?
By masher2 on 3/25/2006 2:29:28 AM , Rating: 2
> "ATI claim they can do better physics than ageia because they can handle more GFlops "

I'm sure ATI has more raw horsepower, but there's no way they're going to beat Ageia in performance. A general-purpose processor wastes too much of its power compared to a custom ASIC.

Second, one large part of gaming physics is collision detection, which isn't floating-point intensive but requires massive memory bandwidth. Ageia is claiming 2 Tb/sec of internal bandwidth for PhysX, which is far and above what ATI's R580 can manage.

Memory Bandwidth?
By trabpukcip on 3/25/2006 11:55:35 AM , Rating: 3
Did someone not mention earlier that collision based physics (ie that affects gameplay) requires a ton of memory bandwidth? And that the Ageia PPU has about 2TB of memory bandwidth?

The current x1900s have about 50GB of memory bandwidth right
ie 1/40th of the PPU?

I know which one I would buy; it would last a decent amount of video card generations.

RE: Memory Bandwidth?
By Clauzii on 3/25/2006 9:21:07 PM , Rating: 2
It's actually 2 terabit = 250GB/s (about 5x a GPU's) - BUT for physics ALONE!!!
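The unit conversion behind this correction is easy to check. A quick sketch (the GPU figure assumes the X1900 XTX's 1.55 GHz effective GDDR3 clock on a 256-bit bus; treat both vendors' numbers as marketing claims):

```python
# Convert Ageia's claimed 2 terabit/s internal bandwidth to gigabytes/s
# and compare it with a Radeon X1900 XTX's external memory bandwidth.
ppu_tbit_per_s = 2.0                      # claimed PPU internal bandwidth
ppu_gb_per_s = ppu_tbit_per_s * 1000 / 8  # terabits -> gigabits -> gigabytes

# X1900 XTX: 1.55 GHz effective memory clock * 256-bit bus / 8 bits per byte
gpu_gb_per_s = 1.55 * 256 / 8

print(ppu_gb_per_s)                           # 250.0
print(round(ppu_gb_per_s / gpu_gb_per_s, 1))  # 5.0
```

So the "1/40th" figure from the earlier comment only holds if you read 2 Tb as terabytes; at 2 terabits the claimed gap is closer to 5x.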

RE: Memory Bandwidth?
By dilz on 3/26/2006 3:01:13 AM , Rating: 2
If a PPU is as bandwidth-hungry as you claim, then yes, there's no reason I should wait on buying one, and I shouldn't expect GPUs to ever be up to the task either. Perhaps this is why GPU-based physics processing is being suggested for SLI/Crossfire only? Your information makes the decision to purchase a discrete PPU a "no-brainer." But this is mere conjecture. Bring on the graphs!

RE: Memory Bandwidth?
By masher2 on 3/26/2006 7:16:07 PM , Rating: 1
> "If a PPU is as bandwidth-hungry as you claim, then yes..."

I made the claim, not him. And to clarify, some aspects of gaming physics are highly bandwidth intensive (e.g. collision detection). Other tasks are not.

RE: Memory Bandwidth?
By lemonadesoda on 3/27/2006 10:23:00 AM , Rating: 2
The nice thing about the bandwidth issue is that if ALL the objects are WITHIN the Ageia card's local memory, it isn't requiring bandwidth on the PCI bus.

You just need to load in the map, particle initial coordinates and trajectory vectors, then let the PPU do the rest.

It's like saying that the GPU requires incredible bandwidth... and it DOES... on its internal bus... but does not need so much on the AGP/PCIe x16 bus.

RE: Memory Bandwidth?
By dilz on 3/27/2006 10:35:18 AM , Rating: 2
Thank you for clarifying my murky ideas... :P

RE: Memory Bandwidth?
By masher2 on 3/27/2006 10:41:48 AM , Rating: 2
Exactly so. The actual computation is bandwidth intensive, but the data transfer between the CPU and PPU is not. PCI bus bandwidth is generally sufficient.

This is real trouble for Ageia
By hellcats on 3/24/2006 11:51:53 AM , Rating: 2
A few points:

1) Havok announced they will also support ATI. All they need is SM3.0.
2) Next gen consoles have SM3.0
3) GPUs have > 10x FPU performance than current CPUs
4) NVidia/ATI have much more experience making vector FPU hardware than Ageia
5) Ageia have yet to demonstrate any acceleration by their PPU over the CPU.
6) Havok FX works. Who cares if the GPU wasn't initially designed for physics, if it can do it then it can do it.
7) Ageia's PPU can only be used for physics. A spare or faster GPU can do both graphics and physics. Having additional discrete hardware reduces the chances for full utilization.
8) The installed base of SM3.0 GPUs is much greater than the number of PPUs (which is zero right now I would think). Game developers would rather support the more common platform.

Ageia has based their business on the assumption that you can't accelerate physics with the GPU and therefore there will be a big market for a dedicated "PPU". This assumption has just been shot to pieces. I'd hate to be Ageia's CEO when the VCs call to ask about this Havok FX thing.

RE: This is real trouble for Ageia
By masher2 on 3/24/2006 1:05:55 PM , Rating: 2
> "Physics and graphics processing is fundementally the same"

This really isn't true; there are major differences, even in the subset of "gaming physics" commonly used in most games.

The differences explain why a new, small company like Ageia can provide a PPU that outperforms-- for physics-- the mature GPUs from NVidia and ATI by 500% or more.

RE: This is real trouble for Ageia
By masher2 on 3/24/2006 1:08:47 PM , Rating: 2
Sigh, I replied to the post BELOW this one...this forum software has serious issues...

RE: This is real trouble for Ageia
By Clauzii on 3/24/06, Rating: 0
RE: This is real trouble for Ageia
By masher2 on 3/24/2006 10:47:36 PM , Rating: 2
> "I'd hate to be Ageia's CEO when the VCs call to ask about this Havok FX thing."

All he needs to do is simply point out that Ageia's solution-- besides being 5X or more faster-- is the only one which allows true physics interaction.

The NVidia/Havok approach is just more eye candy.

RE: This is real trouble for Ageia
By Clauzii on 3/25/2006 9:16:25 PM , Rating: 2
It will be funny to read the forums in the future: Realizm vs. Resolution :)

By lemonadesoda on 3/27/2006 10:17:41 AM , Rating: 2
I think your statement "7) Ageia's PPU can only be used for physics. A spare or faster GPU can do both graphics and physics. Having additional discrete hardware reduces the chances for full utilization." is rather a bold statement!

If ATi/NVidia chips were designed as GPUs but can do other stuff through an SDK, then I would bet the Ageia SDK is designed to do WHATEVER you want to do with multiple floating-point co-processors.

It's basically a flipping huge multi-processor array of FPAUs (floating-point arithmetic units).

With the right SDK you can:

1./ Do any math computation... stats software packages move over
2./ Use it for encoding music... Fraunhofer move over
3./ Use it for encoding video... Take a look at ATI's recent effort at speeding up video encoding using their AVIVO (ATI Theatre 550) chip. Encoding 5x faster. I'm sure Ageia can match if not beat that
4./ Encryption algorithms
5./ ZIP/unzip compression algorithms
6./ Photoshop filters
... the list goes on

Try getting out of the box and thinking a little more creatively.

Same BS as Nvidia is claiming.
By the Chase on 3/23/2006 7:07:32 PM , Rating: 3
The timing is impeccable. Both ATI and NVIDIA have been working on physics for a LONG time. Really. And they both thought this just seemed to be a good time to come forward with their solutions. So you just need to buy that 2nd $450 video card and you will have the choice of great graphics (2 cards running the graphics) or OK graphics and ho-hum watered-down physics (1 card doing graphics and 1 doing the physics). Sounds like a great solution. (For ATI's and NVIDIA's pocketbooks, that is.)

RE: Same BS as Nvidia is claiming.
By soydeedo on 3/23/2006 7:10:54 PM , Rating: 3
except not many normal people really use sli configs. the point of this is so that when you upgrade your video card you can still use the old one for physics processing. the only downside is that you might not want to ebay it to trade up like we do now. =P

RE: Same BS as Nvidia is claiming.
By haelduksf on 3/24/2006 8:00:57 AM , Rating: 2
Or, from the sounds of it, you could get a Crossfire mobo for a few bucks extra, pick up an x1800 or 1900 to do your graphics, and an x1300/1600 to do your physics...which will still be a damn sight cheaper than a $249 PCI-board that can only do physics.

RE: Same BS as Nvidia is claiming.
By masher2 on 3/24/2006 8:10:35 AM , Rating: 1
> "you could get a Crossfire mobo for a few bucks extra...and an x1300/1600 to do your physics"

Even forgetting the cost of a new motherboard, the best graphics cards are only going to have perhaps 1/5 the physics horsepower of Ageia's ASIC. A low end graphics card is going to come in even lower. And let's not forget the Nvidia solution lacks interactive physics...and the ATI solution is, at present, not even at the "vaporware" level.

RE: Same BS as Nvidia is claiming.
By Griswold on 3/24/06, Rating: 0
RE: Same BS as Nvidia is claiming.
By masher2 on 3/24/06, Rating: 0
By TheNeonCowboy on 3/24/2006 7:17:12 PM , Rating: 2

>NVidia is claiming a 5-10X speedup, Ageia a 30-100X speedup.
>Both numbers seem reasonable, given the differences
>between the two approaches.
>My "claim" was a simple matter of dividing one by the other.

Funny, ATI claims to be the fastest and they're only claiming about a 30% speedup... Remember, an ATI X1K GPU has more raw processing power. An X1800XT has 2.5X the power of a 3.0 GHz P4, whereas a 7800 GTX has less power than a 3.0 GHz P4, according to the Folding@Home GPU client...

I highly doubt you're going to see anything near what Ageia is pimping unless you have a severely underpowered PC with an OLD video card...

>First of all, ATI's approach is, as I said not even
>vaporware yet. It's a "me too" annoucement that smells
>like they were caught wholly off-guards by NVidia.

Ageia announced their PPU 1st, closely followed by ATI, who mentioned their cards have this capability with the X1K launch, and loosely followed by NVIDIA, last to the table. Get your facts straight... NVIDIA did, however, release their "details" a few days earlier. Usually it's a game of poker to see who shows their hand 1st and then the others follow... but don't assume to know what goes on behind closed doors. FYI, ATI announced theirs many months ago and that's what counts... so if anyone's doing a "me too" it's NVIDIA.

>NVidia's is certainly vaporware, but given they've
>partnered with an established physics-simulation vendor,
>they've at least started working on a solution.

That's what happens when you're Monday morning quarterbacking, trying to catch up...

>Ageia, however, has working silicon-- and boards. And a
>short-list of games that support them. You can't buy
>product yet, true, but anyone who attempts to deny they
>are MUCH further along the curve than either ATI or NVidia
>is just fooling themselves

Ageia has had working silicon for a long time... their card was even supposed to have launched last November, but it was delayed...

Not many people can afford or will be willing to buy both a $600 video card and a $250 PPU. Instead, about 95% will just buy the powerful video card. So it seems Ageia has really hurt themselves by taking too long to get to market.

As for who's ahead, that's yet to be seen. Seriously, that comes down to the end result and performance vs. cost.

By Egglick on 3/24/2006 6:22:32 AM , Rating: 2
Using its proprietary API, ATI is able to offload physics processing to any GPU in a dual-GPU setup, regardless of whether the cards are in Crossfire mode or even from the same family.

Meaning that if you wanted to, you could get an X1600 Pro to go along with your X1900XT, and the X1600 Pro would do only physics processing. This would be a good idea at only $110, so if ATI were smart, this is what they would be pushing. Certainly a lot better than $275 for the Ageia.

RE: X1600
By the Chase on 3/24/2006 3:21:20 PM , Rating: 2
It would be, but you also have to add at least $100 and probably more like $150 for the Crossfire motherboard unless you already own one. So the cost goes back up to $260. Will the X1600Pro run physics as well as the Ageia card? I highly doubt it.

RE: X1600
By melgross on 3/24/2006 4:21:55 PM , Rating: 2
I think that it would be a good thing for all three to battle it out. And yes, let the games companies decide which APIs they will support. It's more likely that they will have a better feel for what will work best, and support that one. Game players will also decide which one they prefer. That way, the market will decide which is best - the preferred method of settling standards that are still evolving. As always, it will result in a few years of frustration. But so what? It's the long-term results that matter.

I agree with another poster's idea that 720 x 480 24FPS video is far more realistic than ANY video game on the market today.

I will take that a step (or two) further and say that even a second-generation VHS copy, running at 200 x 480, with all of its attendant problems, is also far more realistic than the highest-rez video games. Even black and white is.

Hi rez and high frame rates are a waste of processing power. There is no point in having a game run at 1600 x 1200 and 100 FPS if it still looks crappy, which they all do.

What is needed to make things realistic is the use of light-source ray tracing and two-way physics processing.

Both of those methods call for vast processing power. That power can be supplied at lower rez's and lower frame rates.

If we stuck to a max of 1024 x 768 (approx.) at 75FPS, it could be done. The obsession with higher rez's and frame rates makes it very difficult for game companies and video chip makers to concentrate on this, though.

It's really too bad. I've been saying this for years.

RE: X1600
By Clauzii on 3/24/2006 9:37:10 PM , Rating: 2
U R Not Alone :)

RE: X1600
By masher2 on 3/24/2006 10:51:44 PM , Rating: 3
> "I agree with another posters idea that 720 x 480 24FPS video is far more realistic than ANY video game on the market today...The obsession with higher rez's and framerates makes it very difficult for game companies and video chip makers to concentrate on [realism] though....

I was that poster. And you make a very valid point. The framerate and resolution obsession is hurting gaming now, not helping it.

For a game developer, high-quality physics and intelligent AI mean scaling down the resolution... and that's the kiss of death from legions of braindead gamers who are more interested in benchmark values than realistic games.

Physics Co-processors on Graphics boards
By ninjit on 3/23/2006 8:48:16 PM , Rating: 2
If dedicated physics processing becomes the norm for video games (the way HW T&L is now), I hope companies begin to make integrated boards with both GPUs and PPUs on them. I don't like the idea of having to get another card to stick in a system.

RE: Physics Co-processors on Graphics boards
By Devil Bunny on 3/23/2006 9:37:24 PM , Rating: 2
I would much rather have two separate cards because of bandwidth; there's only so much of it on a PCIe x16 slot, and currently the gaming world is trying to suck every last bit out of the slot for a better framerate. You'd be better off just buying a PCI card for your PPU and leaving your PCIe x16 to your GPU.

RE: Physics Co-processors on Graphics boards
By akugami on 3/24/2006 12:58:52 AM , Rating: 2
You DO realize that the bandwidth on AGP 8x has not been fully utilized, and that PCIe 8x, much less 16x, has yet to be fully utilized. Unfortunately there aren't any AGP versions of today's top cards, but putting video cards in AGP 4x and AGP 8x mode offers very similar performance with the same card. As far as PCIe goes, there's still a ton of bandwidth left to play with.

By sxr7171 on 3/24/2006 1:01:44 PM , Rating: 2
Thank you. It's about time that people realize that all this dual 16x stuff is complete BS.

RE: Physics Co-processors on Graphics boards
By TheNeonCowboy on 3/24/06, Rating: 0
By Clauzii on 3/24/2006 8:11:27 PM , Rating: 2
AGP x8 is 2048 megabytes/sec

By dilz on 3/24/2006 6:20:41 PM , Rating: 2
I think GPU makers believe they can handle physics processing because it will give the GPU something else to do without necessarily taxing memory bandwidth that much - assuming that memory bandwidth is the more dominant of GPU-based bottlenecks. Daily Tech readers have mentioned that physics processing would not likely be memory-intensive.

A separate item to upgrade would have distinct positive and negative aspects, many of which have already been mentioned.

The most compelling reason for a discrete solution is in the standardization of an API. On a grand scale, monopolies have given us standardization in the forms of: Windows, DX, Soundblaster, USB, etc.

The fragmentation created by SLI and Crossfire creates a niche for Ageia to place their product perfectly:

"Our solution is platform-independent and will produce consistently better results while leaving your GPU(s) to their own specific tasks."

If GPU manufacturers want to take on the responsibility of physics processing, they will have to agree on a multi-card interface.

That said, I'd love a free boost in gaming from even a single PPU-enabled GPU (dual-core, anyone?).

RE: Memory
By Clauzii on 3/24/2006 9:42:17 PM , Rating: 2
In 2005 they stated Linux too :)

RE: Memory
By dilz on 3/25/2006 1:30:45 AM , Rating: 2
I guess I don't understand what you mean by "Linux" in this case.

RE: Memory
By Clauzii on 3/25/2006 9:17:23 PM , Rating: 2
That the API will be out for Linux too...

RE: Memory
By Clauzii on 3/25/2006 9:18:36 PM , Rating: 2
The post was meant to be a comment, not a reply btw. :)

I dont think the theory is bad
By Plasmoid on 3/23/2006 8:54:34 PM , Rating: 4
At first, I admit, I thought this was nothing but a gimmick.

But the more I think about it, the more it makes sense. The cards being brought out by Ageia are not too dissimilar from graphics cards at the very low levels. In theory at least, a graphics card would be much better at physics calculations than a standard CPU.

In addition, SLI and Crossfire simply haven't taken off; the improvements aren't there. If they can come up with a system where you buy SLI/Crossfire and the computer intelligently decides whether it's better to do physics calculations to boost performance or SLI/Crossfire rendering to boost performance, you have a pretty good system for removing the CPU bottleneck when necessary.

Also, I saw the price tag on those Ageia units... and realised a 2nd graphics card in the mid-range works out to much the same. So if this does a nice job extending the uses of SLI/Crossfire it's pretty good; it certainly adds another choice.

But there is a slight problem. How many developers out there are going to go "OK, so we have DirectX for the graphics and compatibility looks good at first glance, now let's do the physics for the nvidia API, the ATI API and the Ageia API"?
If there were some DirectX or OpenGL equivalent - one single code path to suit all hardware - it would be fine, but there doesn't seem to be. Havok already have their SDK pushed out to developers and it's in games, so next-gen titles just use the SDK. Ageia have their own... so developers have to choose between one or the other. The ATI solution is a bit odd... it's probably going to be Havok by the sounds of it, but unless Havok have been planning this carefully it might be a choice between nvidia and ATI on this front too.

There are just too many things for a developer to meet. At the end of the day I reckon they will embrace multi-threaded code using dual/quad-core CPUs as the standard and maybe consider a push towards one of those options far in the future.

But OpenPL, anyone... if DirectX and some sort of open standard came along offering an API to handle all this physics without any vested interests, things could be different.

By PrinceGaz on 3/23/2006 8:56:20 PM , Rating: 2
eeek, you read my mind about OpenPL :)

RE: I dont think the theory is bad
By Jep4444 on 3/23/2006 9:06:52 PM , Rating: 2
Ageia will likely get the short end of the stick if nVidia's and ATI's physics processing is as good; it'll at least eliminate one standard

By kilkennycat on 3/23/2006 7:27:31 PM , Rating: 1
With the announcements from Ageia and NVidia, ATi had to come up with something more than their mumblings a few months ago about physics processing...


"ATI is also saying that its method for processing physics on the GPU is superior to both AGEIA's and NVIDIA's."

Not nVidia's -- Havok FX on nVidia hardware. See the Havok website. Havok intend their FX software to port to any Shader3.0 GPU. ( Only the X1xxx series from ATi need apply. Both the 6xxx and 7xxx GPUs from nVidia are Sh3.0 compatible.)

Sour grapes from Ati ?? Don't they need a physics algorithm partner such as Havok or Ageia anyway, so that there is compatibility of the hardware-based offering with the physics software-SDK being offered by these companies to game developers ?

Also, I do not understand claims that the Havok solution cannot influence physics responses tightly coupled to gameplay -- the PCIe bus is fully bidirectional (and AMD's HyperTransport certainly is). Thus if one graphics card is totally dedicated to physics computations, it becomes Havok's hardware answer to the Ageia offering, but on PCIe instead of the useless find-a-spare-PCI-slot-on-a-PCIe-motherboard current offering from Ageia.

By lemonadesoda on 3/23/2006 7:49:55 PM , Rating: 4
Since MOST PCIe mainboards ALSO have PCI "legacy" slots, then

MORE USELESS: find-a-spare-PCIe16-slot when you upgrade your GPU. (Mainstream market motherboards have single PCIe16. Full stop).

By Griswold on 3/24/2006 12:39:54 PM , Rating: 2
You didn't understand how Havok's physics engine works, did you? They specifically said any SM3-capable card will run with their technology. So ATI is not left out in any way - they just felt the need to have their own proprietary solution besides general support for HavokFX.

Everything is just graphical effects
By Lotus SE on 3/23/2006 8:18:40 PM , Rating: 2
They don't really seem to explain anything about their ability to make the physics processing use the information for anything other than graphical effects...

Isn't that what a game is? A graphical representation of a physical world.

Everything in a game is graphical trickery, unless you are using forcefeedback (and sound).

If a physics processor could add the ability to simulate water or snow in a way much more realistic than they do today... That would be worth something.

It's all really just about immersion.

Take a look at these two videos and tell me that this wouldn't be a nice addition. Especially if the programmers didn't have to program every minute detail, and were able to let the physics engine do it all.

By masher2 on 3/24/2006 7:22:05 AM , Rating: 3
> "Isn't that what a game is? A graphical representation of a physical world."

You miss the point. If the physics API is unidirectional, then the results cannot influence gameplay. Meaning you can only calculate the position and movement of objects that DON'T affect gameplay.

Want to know what happens when your player slides across an oily floor and collides with a few barrels and crates? Sorry, no can do.
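The one-way vs. two-way distinction above can be sketched in a few lines: with effects-only ("one-way") physics the simulation output is consumed only by the renderer, while gameplay ("two-way") physics must hand collision results back to the game logic. Everything below is a hypothetical illustration of that difference, not any vendor's actual API:

```python
class EffectsPhysics:
    """One-way: debris positions go straight to the renderer; game logic
    never reads them back, so they cannot affect the outcome of play."""
    def step(self, particles, dt):
        for p in particles:
            # simple gravity integration for eye-candy particles
            p["vel"] = (p["vel"][0], p["vel"][1] - 9.8 * dt)
            p["pos"] = (p["pos"][0] + p["vel"][0] * dt,
                        p["pos"][1] + p["vel"][1] * dt)
        return particles  # consumed only by the renderer

class GameplayPhysics:
    """Two-way: collision results are returned to the game logic, which
    can then change health, position, or the course of the game."""
    def step(self, player, barrels, dt):
        hits = [b for b in barrels if abs(b - player["pos"]) < 1.0]
        for _ in hits:
            player["health"] -= 10  # physics result feeds back into gameplay
        return player, hits
```

The oily-floor example above needs the second kind: the collision list must cross back from the physics processor to the CPU, which is exactly what a unidirectional API can't do.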

By Wwhat on 3/24/2006 12:38:35 PM , Rating: 2
WOW, now that's nice - and rendered in one pass, it says. Of course not at 60+fps, I assume, but this is still a good example of how water should look and what we dream of when we read the claims from Ageia and competitors.
I wonder whether, if you combine the Ageia API with the GPU physics to spruce it up - and a very skilled coder, of course - you'll end up with something coming close to this kind of graphics.

Only so many pipes to play with...
By The Blue Moose on 3/23/2006 8:41:55 PM , Rating: 1
I don't really see the point behind all this "physics on the GPU" stuff.

Let's not forget how GPUs are organized. They are a collection of independent pixel processors with a shared memory. So, every pipeline you devote to physics is one LESS pipeline processing your pixels. That is why you bought a $400 card (or 2) in the first place, is it not?

The structure of GPUs makes them suitable for a variety of highly parallel tasks, a concept typically referred to as GPGPU. When I'm not playing a game, I wouldn't mind my GPU helping me out with some video encoding, or ray tracing, etc. But in a gaming situation, I want all of my pipelines pumping pixels.

Besides, with AMD talking to ClearSpeed, we could be seeing CPU's with a HUGE boost in FP ability in 1.5-2 years. It will probably take almost that long for there to be more than a few games with support for Ageia's card anyway. So, about the time PPU's actually become useful, we won't need them anymore.

Though I suppose programmers will always find some use for all of that FP power. GPPPU anyone?

By Targon on 3/24/2006 12:19:26 AM , Rating: 2
There is a difference between the NVIDIA and ATI approaches to multiple video cards in the same system.

NVIDIA requires that you use two of the same video card when you use SLI, and even then, the drivers MUST support the application/game in order to properly work.

ATI allows different video cards to work together in Crossfire, as long as they are fairly recent(X1xxx series or above). So you can have an X1800 and an X1300 working together to accelerate an image. Drivers only need to support Crossfire in the first place, but don't require application specific code. Note that at the high end, you DO need at least one master card at the moment, but that requirement will go away in time as Crossfire evolves.

The advantage to the ATI method is that when you upgrade your video card to the latest and greatest, you have the option to use your old card for the physics. With NVIDIA, since you need to have two of the same card to even work together, unless NVIDIA figures out how to link together two different generation cards via their SLI implementation, it may not be possible. So you can't have a geforce 8800(or whatever they call it) combined with a geforce 7800 currently in SLI, and who knows if the physics would let you do it.

On a single GPU machine, having a seperate card to do physics would be better, but nothing would stop you from tossing a low-end video card in as a secondary if it's able to handle the physics demands of the application.

RE: Only so many pipes to play with...
By masher2 on 3/24/2006 7:25:01 AM , Rating: 2
> "So, every pipeline you devote to physics is one LESS pipeline processing your pixels. That is why you bought a $400 card (or 2) in the first place is it not?"

No. I think most people bought a graphics card to enhance their gaming experience, not simply to 'pump pixels'.

A Hollywood movie running at 720x480 @24 fps is considerably more realistic than any game at 1600x1200 @100 fps. If people can give up some raw pixel power for more realism in game physics, that's a tradeoff any reasonable person would be willing to make.

By Clauzii on 3/24/2006 8:09:33 PM , Rating: 2
I think you said this before :)

I still totally agree!!

We need OpenPL
By PrinceGaz on 3/23/2006 8:54:54 PM , Rating: 3
The problem with Ageia, nVidia, and ATI each developing solutions which allow physics calculations to be offloaded from the CPU is that they are incompatible with each other and will require game developers to include a separate code path for each, along with a "software mode" which does the work on the CPU. Those code paths have to be individually developed and tested, which will significantly increase the workload of the development team. It reminds me of the bad old days when games had to individually include support for Rendition cards, PowerVR cards, Voodoo cards etc., and patches for problems with the different versions were commonplace. We don't want to go back there.

What is needed is a unifying interface, which I'll call OpenPL (along the lines of OpenGL and OpenAL). Unless something like OpenPL happens which allows any physics-hardware available to be utilised through a common-interface, it is probably doomed to failure or will be restricted to a subset of games on a subset of hardware.
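As a sketch of what such a common interface could look like (everything here is hypothetical - no "OpenPL" actually exists): the game codes against one abstract interface, each vendor ships a backend, and a CPU path serves as the guaranteed software-mode fallback.

```python
from abc import ABC, abstractmethod

class PhysicsBackend(ABC):
    """Hypothetical 'OpenPL'-style interface the game codes against once."""
    @abstractmethod
    def simulate(self, bodies, dt):
        """Advance all rigid bodies by dt seconds and return them."""

class CPUFallback(PhysicsBackend):
    """The software mode: always available, no special hardware needed."""
    def simulate(self, bodies, dt):
        for b in bodies:
            b["vel"][1] -= 9.8 * dt          # gravity
            b["pos"][0] += b["vel"][0] * dt  # integrate position
            b["pos"][1] += b["vel"][1] * dt
        return bodies

def pick_backend(available):
    """Prefer dedicated hardware, fall back to the CPU. The game's code is
    identical either way, which is the whole point of a common API."""
    for name in ("ppu", "gpu"):
        if name in available:
            return available[name]
    return CPUFallback()
```

With a layer like this, the per-vendor code paths live in the backends, not in every game, the same way Direct3D and OpenGL absorbed the Rendition/PowerVR/Voodoo mess.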

RE: We need OpenPL
By shabby on 3/23/2006 10:13:03 PM , Rating: 2
ATI's and NVIDIA's solution is based on SM3; all AGEIA has to do is write an SM3 driver and it'll work with the Havok API.
ATI/NVIDIA, on the other hand, can't do that, since the PhysX API is proprietary.

RE: We need OpenPL
By Clauzii on 3/24/2006 9:01:36 PM , Rating: 2
From AGEIA's site:

PhysX API™
- Complex rigid body object physics system
- PhysX FX smart particle system
- Volumetric fluid creation and simulation
- Cloth and clothing authoring and playback
- Advanced character control
- Ray-cast vehicle dynamics
- Support for arbitrary number of physics scenes with fine user control of threads

PhysX Runtime™
- Completely integrated solver - fluids, particles, cloth and rigid bodies interact correctly
- Exploits parallelism at every level from the core solver to scene management
- Supports the advanced debugging capability of the PhysX VRD
- Hardware abstraction eliminates the need to understand the internals of next-gen hardware
- Optimized for single and multi-core PC, Xbox 360, Playstation 3 and the PhysX processor
- Hardware acceleration on the PC development environment enables true cross-platform author once, deploy everywhere functionality

There You go..

catch up
By poohbear on 3/23/2006 10:16:06 PM , Rating: 2
man, why's ATI always 1-2 steps behind NVIDIA!? First it was SM3.0, then dual video cards, and now physics engines. Seems like they're always playing catch-up. :/

RE: catch up
By Fenixgoon on 3/24/2006 1:31:25 AM , Rating: 2
Dual cards (as in a GTx2, not SLI'd GTs) are the $$. Same with SLI. What NVIDIA did was produce a superior product, plain and simple. ATI returned with the X1900 for the high end, but for mid-range it still looks like NVIDIA is king.

RE: catch up
By Jep4444 on 3/24/2006 10:04:16 AM , Rating: 2
I'd argue that ATI is ahead of NVIDIA when it comes to physics; their implementation is looking more advanced and complete, so even if it comes out later, it'll be worth it if it performs better (no evidence to suggest either will come out first, though).

NVIDIA even acknowledged the idea of running different types of video cards with one of them processing physics, yet it hasn't shown any signs of being able to do so.

well of course..
By fliguy84 on 3/24/2006 3:39:30 AM , Rating: 2
Well, of course ATI can do it too. NVIDIA can do SLI, so ATI does Crossfire. NVIDIA can do PureVideo, ATI can do Avivo. NVIDIA can do The Way It's Meant to be Played, ATI can do Get in the Game. And the list goes on... (dual-slot coolers, AMD mobo chipsets, etc.) :)

RE: well of course..
By CyNics on 3/24/2006 5:01:47 AM , Rating: 2
ATI, please: show us the goods, and show them now.
Don't just talk big and launch it six months later with a new core (e.g. Crossfire: big stories, and they launched twice).

RE: well of course..
By Clauzii on 3/24/2006 8:33:08 PM , Rating: 2
ATI also has the high shader count that leaves room for some physics calculations, and when ATI switches to 65nm, they might be ready with, like, 64 or 96...

At the moment, physics processing is at its dawn - in hardware, anyway.
AGEIA has spent ~4 years developing hardware, software, etc. for physics ALONE. I don't know the clock rate of the card (~600MHz would be my bet), but on their site AGEIA shows an example using "at least 24 parallel calculating units, each with 6kB local storage..." so something will be in there :)

ATI and NVIDIA, on the other hand, are taking a halfway solution by reusing their current GPU technologies, which may or may not contain anything physics-specific; as long as physics isn't the only thing going through the shaders, they WILL present a slower solution (NVIDIA at the moment anyway, considering its 24 shaders to ATI's 48).

With AGEIA, I also like the idea of getting it on PCI, since most machines still have AGP and PCI. And the fact that it will be released for the PCIe bus later this year makes AGEIA's approach VERY tempting to me, including the possibility of doing general FPU work on it for rendering animation, etc.

Bright tomorrow, Bright yesterday, Happy today.

What a joke
By MrHanson on 3/23/2006 8:35:07 PM , Rating: 3
Just what we need: more fragmentation in the PC gaming industry. Do you seriously think PC game developers are going to spend the time and money to develop games with support for all three solutions (AGEIA, ATI, and Havok/NVIDIA)? The claims from ATI and NVIDIA are ridiculous. I would rather use my expensive video card for what it was made for: rendering 3D graphics at ultra-high resolutions. I think AGEIA's is the best and most economical solution.

Error in news post
By Trisped on 3/28/2006 5:07:10 PM , Rating: 3
ATI has been talking about physics computations done through its video cards, rather than the AGEIA solution, since before it released its X1K cards. Yet the news post claims:
Hot on the heels of NVIDIA announcing its partnership with Havok for GPU-level physics implementations, ATI is saying it too is capable of performing heavy physics computations on its GPUs.

The way it is presented makes it look like ATI is trying to play catch-up to NVIDIA, when in fact it is the other way around.

By PrinceGaz on 3/23/2006 8:48:06 PM , Rating: 2
The X1000 series of cards from ATI all have three times as many pixel-shader units as ROPs, which seems like overkill, at least for current-generation games, but those pixel-shader units could certainly be put to good use on physics calculations. Perhaps they designed the X1000 series with this sort of thing in mind.
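The kind of work those surplus pixel-shader units do is exactly the data-parallel arithmetic physics needs: the same small calculation applied independently to many elements. A rough illustration, with plain Python standing in for shader code and one "pixel" per particle (the integrator and numbers here are just a generic textbook example, not anything from ATI's drivers):

```python
# Illustrative only: Euler integration applied uniformly across many particles,
# the same independent per-element math a pixel shader runs once per pixel.
GRAVITY = -9.81  # m/s^2

def integrate(positions, velocities, dt):
    """One simulation step for every particle. Each element is computed
    independently of the others, so the loops map naturally onto a GPU's
    parallel shader units instead of running serially on the CPU."""
    new_velocities = [v + GRAVITY * dt for v in velocities]
    new_positions = [p + v * dt for p, v in zip(positions, new_velocities)]
    return new_positions, new_velocities
```

Because no particle's update depends on any other particle's result within a step, a 48-shader part can in principle process 48 such elements at once, which is why idle shader capacity is attractive for physics.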

By osalcido on 3/23/2006 9:48:25 PM , Rating: 2
Seems to me that AGEIA has the advantage here as they have built their PPU from the ground up to do physics processing

ATi and Nvidia however are merely tacking it onto their existing technology

By jconan on 3/24/2006 2:05:18 AM , Rating: 2
I guess it'll speed up the physics in the Revolution or the 360 if it's supported and native to the console?

A bit conflicted
By cubakreash on 3/24/2006 12:04:54 PM , Rating: 2
At first, I felt that AGEIA's approach was the right one, but now I'm not so sure. Physics and graphics processing are fundamentally similar. If I'm not mistaken, one of the main things that makes a graphics card "3D" is its ability to calculate the positioning of an object before rendering it. It should go without saying that doing physics this way takes away processing power that could be used to enhance the graphics quality on screen.

The latest trend in processing has obviously been parallelism to speed up various functions. Using that as a starting point, it seems to me that ATI and NVIDIA could be moving toward a dual-core GPU with a third core acting as a PPU. In this setup, the GPU cores can offload physics processing to the PPU and concentrate on pumping out pixels. Since all cores would be integrated on the same die, communication between them should suffer less than if the PPU were placed on its own PCB. Once all the calculations have been completed, the data is combined to finalize the displayed image.

Of course, this is just my theory, but considering the purpose of physics processing and my understanding of 3D rendering, I feel it only makes sense for ATI and NVIDIA to be heading in this direction.

X1Ks can do good math
By firewolfsm on 3/24/2006 4:10:17 PM , Rating: 2
I heard something about people using X1K cards to do math calculations; this makes sense.
