


No games are being announced that support ATI's method, says AGEIA

Earlier this week at Computex, DailyTech reported that ATI officially announced its solution for physics processing. Called Triple Play, ATI's solution relies on three Radeon X1K series cards: two operate in CrossFire mode while a third is configured for physics processing. The Triple Play solution, says ATI, uses the raw gigaflop performance of the Radeon X1K series to process physics, but users are concerned about the approach. The prospect of buying three ATI boards struck many as questionable, since costs quickly escalate.  A system with two Radeons can still use one for physics calculations, but it is no longer dubbed Triple Play.

FiringSquad this week reported a response from AGEIA which attempts to explain the lack of value in ATI's solution. According to AGEIA, measuring the performance of physics processing by simply looking at the number of gigaflops in a GPU is analogous to saying that "the more wheels I have on my car, the faster I will go." AGEIA's vice president of marketing, Michael Steele, told FiringSquad:
  • Graphics processors are designed for graphics. Physics is an entirely different environment. Why would you sacrifice graphics performance for questionable physics? You’ll be hard pressed to find game developers who don’t want to use all the graphics power they can get, thus leaving very little for anything else in that chip.
  • “Boundless Gaming” is actually enabled by AGEIA’s Gaming Power Triangle in which the PhysX processor adds true physics to the mix instead of leaving it to a repurposed graphics processor.
AGEIA further says that developers are announcing more and more games that support its PhysX product, while no one is announcing support for ATI's method. Steele also mentioned that while he's glad that ATI has agreed that physics is important, ATI is delivering a "questionable" solution to physics processing.

Steele also emphasized that PhysX is available now while ATI's solution is not.


Comments

Dubious statement...
By Rolphus on 6/9/2006 6:18:19 AM , Rating: 2
"AGEIA further says that developers are announcing more and more games that support its PhysX product, while no one is announcing support for ATI's method."

Right, so no-one's using Havok. I know HavokFX is a new iteration of the tech and all that, but I really can't see that people are less likely to use something as well-established as Havok in favour of Ageia's solution (whose name I've temporarily forgotten).




RE: Dubious statement...
By Strunf on 6/9/2006 6:30:15 AM , Rating: 2
Agreed, more so since nVIDIA is also working with Havok.


RE: Dubious statement...
By Xeeros on 6/9/2006 6:41:27 AM , Rating: 2
Honestly, I think Ageia had problems to begin with:

A) No customizable options yet.
B) Performance hit on the CPU.
C) A PCI card that costs $299... the PCI bus is not exactly future-proof, and for $300 the card should at least still be good for another two years, which it won't be if motherboards don't keep PCI slots that long.

So let's see a card that sits on the PCI-E bus instead; and of course an X1K series ATI card can be loads cheaper if needed as a budget solution. Side note: most newer mobos I've been looking at have fewer and fewer PCI slots, and when deciding between an X-Fi sound card and physics, I think most would go with the X-Fi.


RE: Dubious statement...
By AndreasM on 6/9/2006 8:21:15 AM , Rating: 3
quote:
Right, so no-one's using Havok. I know HavokFX is a new iteration of the tech and all that, but I really can't see that people are less likely to use something as well-established as Havok in favour of Ageia's solution (whose name I've temporarily forgotten).


1. HavokFX costs a lot of money to license. PhysX is free, so even hobbyists have access to it (see the sketch after this list).

2. HavokFX burdens your GPU, causing lower framerates in some cases. This doesn't apply if one has an extra GPU just for physics, but in that case you would have been better off selling your old GPU and buying a PPU instead.

3. As your post stated, game support. PhysX is out there now, and people have been programming for it for over a year. HavokFX hasn't been released yet, so it'll be a long wait for games that support it.

4. HavokFX is effects physics, i.e. eye-candy. In other words, HavokFX is just more of the same. I for one am looking forward to when games really start utilising PhysX for real gameplay-altering physics.
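As a footnote to point 1, here is roughly what getting started looks like: a minimal rigid-body scene using the 2.x-era AGEIA PhysX SDK. This is a sketch, assuming the 2.x headers; exact names and defaults vary between SDK releases.

    #include "NxPhysics.h"

    int main() {
        // Create the SDK object (real code would pass an error stream).
        NxPhysicsSDK* sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);
        if (!sdk) return 1;  // e.g. runtime not installed

        // A scene with ordinary gravity; simulation runs on the CPU,
        // and a PPU is used automatically when one is present.
        NxSceneDesc sceneDesc;
        sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
        NxScene* scene = sdk->createScene(sceneDesc);

        // A dynamic sphere dropped from 5 m above the origin.
        NxSphereShapeDesc sphere;
        sphere.radius = 0.5f;
        NxBodyDesc body;
        NxActorDesc actor;
        actor.shapes.pushBack(&sphere);
        actor.body = &body;
        actor.density = 10.0f;
        actor.globalPose.t = NxVec3(0.0f, 5.0f, 0.0f);
        scene->createActor(actor);

        // Per frame: kick off a step, then collect the results.
        scene->simulate(1.0f / 60.0f);
        scene->flushStream();
        scene->fetchResults(NX_RIGID_BODY_FINISHED, true);

        sdk->releaseScene(*scene);
        NxReleasePhysicsSDK(sdk);
        return 0;
    }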


RE: Dubious statement...
By Strunf on 6/9/2006 8:56:25 AM , Rating: 2
1. And? ATI NEVER said only Havok would be able to use their technology; actually it's the other way around, they say ANY game developer may use it... be it hobbyist or not.

2. The same is true of the Ageia PPU... and the Ageia PPU still costs $299, which is a lot more than the average graphics card.

3. ATI says that it's not that hard to make a game do some of the physics work with the help of the graphics card; they even did some in-lab tests with Counter-Strike: Source (if I'm not mistaken).


RE: Dubious statement...
By AndreasM on 6/9/2006 11:32:24 AM , Rating: 3
quote:
1. And? ATI NEVER said only Havok would be able to use their technology; actually it's the other way around, they say ANY game developer may use it... be it hobbyist or not.

I was actually responding to the OP about why someone would choose PhysX over HavokFX. But as far as I know, the SDK ATI is planning to release (in addition to supporting HFX) will be restricted to Radeons, which would make it pretty much useless.

quote:
2. The same is true of the Ageia PPU... and the Ageia PPU still costs $299, which is a lot more than the average graphics card.

While it's true that more physics objects will increase the GPU load by making it draw more stuff, with HFX the GPU needs to both draw the new physics objects and calculate their physics. Obviously this makes HFX the heavier burden. At the moment the PPU is a bit expensive, but that's the new-tech premium; an average GPU doesn't carry one, so with time the price difference will shrink.

P.S. If you're talking about Ghost Recon Advanced Warfighter, don't. It has a shitty implementation where they use Havok for all the physics and PhysX for some extra effects.

quote:
3. ATI says that it's not that hard to make a game do some of the physics work with the help of the graphics card; they even did some in-lab tests with Counter-Strike: Source (if I'm not mistaken).

With access to the source code of CS:S, porting it to PhysX would be a minor task (assuming Havok is as easy to use as PhysX), and porting it from Havok to HavokFX would probably be even easier. OTOH, adding GPU-assisted physics acceleration to CS:S without the source code sounds surreal, lab or not. ;)


RE: Dubious statement...
By mendocinosummit on 6/9/2006 11:36:35 AM , Rating: 2
Don't forget that a PPU is very new tech, and that adds $$$. I bet we will see a $75 to $100 price drop in the next year and a half, along with a PCIe x4 version.


RE: Dubious statement...
By Strunf on 6/9/2006 1:22:18 PM , Rating: 2
If nVIDIA and ATI are both working with Havok, chances are that their SDKs will be similar, and DirectX 10 will probably set some standards. ATI also showed some benchmarks with nVIDIA cards, so they have tested their solution on nVIDIA cards as well.

With HavokFX one card takes care of the physics and the other of the graphics, so the burden is almost exactly the same as using one graphics card + PPU.

Shitty implementation or not, the facts speak for themselves.

I can't find where I read that about CS: Source, so forget what I said. However, I'm pretty sure Valve is doing some homework when it comes to physics; not just Valve but id, Crytek and all the others that DIDN'T say they were going to support Ageia. And Crytek already showed a new demo with some advanced physics without mentioning Ageia…


RE: Dubious statement...
By Trisped on 6/12/2006 2:59:51 PM , Rating: 2
Why do people keep saying ATI is working with Havok? I don't remember reading anything that even implied that. I have also searched ATI.COM and found that the ONLY mention of Havok on the whole site (including press releases) was for a piece of modeling software.


RE: Dubious statement...
By MrKaz on 6/9/2006 11:13:04 AM , Rating: 2
quote:
2. HavokFX burdens your GPU, causing lower framerates in some cases. This doesn't apply if one has an extra GPU just for physics, but in that case you would have been better off selling your old GPU and buying a PPU instead.

And Ageia's solution improves frame rates? Yeah, right...


Sigh...
By AndreasM on 6/9/2006 11:39:41 AM , Rating: 2
quote:
And Ageia's solution improves frame rates? Yeah, right...


In case this is about GRAW (again):

quote:
Havok Physics (on the CPU) is used for all game-play physics in both the multiplayer and single-player PC versions of the game. All persistent collidable objects in the game are simulated using Havok software technology running on the CPU.


quote:
AGEIA PhysX had to be layered on top of Havok to extend the physics effects beyond that which could be achieved with CPU only. Imagine what you’ll see in tomorrow’s games in which all resources can be dedicated to PhysX without the hinderance of a software physics engine that runs on general purpose hardware.


http://www.firingsquad.com/features/ageia_physx_re...


RE: Sigh...
By Goty on 6/9/2006 3:00:56 PM , Rating: 2
So is everyone forgetting about the Cell Factor issues where the game runs just as well (or should I say just as badly?) with or without a PPU?

As far as I'm concerned, the AGEIA chip is currently nothing more than a waste of money for no performance gain in the best case, and a fairly large performance decrease for very little improvement in other areas.


RE: Sigh...
By AndreasM on 6/9/2006 4:26:08 PM , Rating: 2
quote:
So is everyone forgetting about the Cell Factor issues where the game runs just as well (or should I say just as badly?) with or without a PPU?


Do you know if this is with the new version of Cellfactor? Ageia probably pulled the previous version for a good reason.

http://www.ageia.com/physx_in_action/cellfactor.ht...

quote:
As far as I'm concerned, the AGEIA chip is currently nothing more than a waste of money for no performance gain in the best case, and a fairly large performance decrease for very little improvement in other areas.


I agree, there is no point in buying one now (though it could be fun to just mess around with the SDK). But I feel confident this will change when UT2k7 comes out. :)


RE: Dubious statement...
By Fenixgoon on 6/9/2006 1:15:48 PM , Rating: 2
Did you see how badly the GRAW framerates dropped once a PhysX card was added? Anandtech's own benchmarks show how horrendous a framerate hit GRAW takes with a PhysX card. I'd rather have an extra 20+ fps than more realistic physics.


physics
By Targon on 6/9/2006 7:37:11 AM , Rating: 3
Considering that physics doesn't offer ANYTHING that can't be done by the CPU these days, I think this is just a case of a company in a "developing area" being afraid that it won't survive the competition.

In order for a physics processing card to do well, it will need to:

1) Improve overall gameplay without causing a negative impact on performance.

2) Be affordable.

3) Get used in enough game titles for there to be a desire on the part of the potential customer base to drive purchases.


Now, when 3D acceleration first came out, it made it so games looked better, and software mode looked ugly in comparison because game companies didn't even try to provide that level of graphics in software. That made an accelerated game obviously better with a 3D card than without. It stood out, and the improved graphics that 3D acceleration brought also improved the framerates as well.

Take the new physics processors we are seeing today. We haven't seen a faster game experience, or one that looks better while keeping framerates at the same level. This means that until we see a real reason to spend the money, not just "it could do this," there is ZERO reason for a customer to buy a physics accelerator today.

There was a period where 3Dfx saw some competition from other companies, but either the implementation wasn't as good, or the other companies just couldn't find developers willing to support their version of 3D acceleration. In either case, 3D eventually became a standard feature on video cards, and 3Dfx sank and eventually got scooped up by NVIDIA.

Physics processing isn't an area where any one company has gained a truly dominant position, since the only game I've heard about ran slower with a PPU than without it. It's in the same category as "four-wheel steering" in cars: every now and then a car company will try to re-introduce the concept, it will fail miserably, and then it fades away. Even if it's a good idea, if it doesn't get support or BUYER interest, it won't go anywhere. If you bundle it with something else that people want, the technology has a better chance of taking off.




RE: physics
By RMSe17 on 6/9/2006 8:09:17 AM , Rating: 2
I agree with Targon here. Given the strength of current processors and the move to dual core, I think it would be much more productive for game developers to focus on multithreading their games, maybe use one of the cores for physics calculations, etc. And maybe optimize some game code. I remember back in the day when games were made well, written in a mix of assembler and C.
Sometimes I wonder if there are chunks of games written in Java these days, or whether the code was just pumped out without any optimizations.

Of course it is easier (and smarter?) to write fat code, and then get users to shell out more money on extra hardware.

Personally I hope that these physics acceleration cards don't take off, because if they do, soon enough there will hardly be a game that won't use one.
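As a rough illustration of the "one core for physics" idea: a render loop on one thread and a fixed-rate physics tick on another. Modern C++ syntax, and every function here is a placeholder, not anything from a real engine.

    #include <atomic>
    #include <chrono>
    #include <thread>

    std::atomic<bool> running{true};

    // Placeholder for a real integrator: advance all bodies by dt seconds.
    void stepPhysics(float dt) { /* integrate, detect and resolve contacts */ }

    // Placeholder for the renderer: draw the current state.
    void renderFrame() { /* submit draw calls */ }

    void physicsLoop() {
        auto next = std::chrono::steady_clock::now();
        while (running) {
            stepPhysics(1.0f / 60.0f);            // fixed 60 Hz tick
            next += std::chrono::milliseconds(16);
            std::this_thread::sleep_until(next);  // keep a steady cadence
        }
    }

    int main() {
        std::thread physics(physicsLoop);  // second core simulates...
        for (int frame = 0; frame < 600; ++frame)
            renderFrame();                 // ...while the first one draws
        running = false;
        physics.join();
    }

A real engine would also need a synchronized handoff of simulation state between the two threads, which is where most of the difficulty lives.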


RE: physics
By MadMax on 6/9/2006 8:21:05 AM , Rating: 3
The PhysX PPU *DOES* offer physics processing far beyond what the CPU can offer. Even a dual- or quad-core CPU doesn't have the parallel processing power of a dedicated PPU. And this processing power does improve the gaming experience.

And I'm tired of people complaining about lower frame rates. Frame rates drop because the visual effects improve: there are more objects to draw onscreen.

Take the game FEAR for example. FEAR has some fantastic graphical effects that improve the visuals of the game, and definitely improve the in-game experience. But these visuals lower the frame-rates of the game because they further tax the video card.

Now, would you "cry foul" and complain to Monolith (the developer) that they decreased the frame rate by adding all these nice visuals to the game? Would you play the game with the effects turned off?

The answer is NO. Physics processing adds more objects and visual flair to games, so of course the frame rate will drop! And physics promises to add so much more. The real problem is that no developer wants to go out on a limb and release a game that integrates extensive physics throughout, not just in the visuals. It will take time for this market to take off, but it WILL.

MadMax


RE: physics
By hwhacker on 6/9/2006 8:48:16 AM , Rating: 2
I agree that Targon made some excellent points.

The thing about the PhysX PPU is not whether it can calculate physics better than any other current solution; it's the fact that enabling it MOST times drops the frame rate so badly that previously playable resolutions become unplayable with the options on, with little actual benefit in the games we've seen so far. It's not worth it on multiple, multiple levels.

Now, could it change with better utilization in upcoming games? Sure. Could drivers be improved? Sure. Could Ageia produce a better product down the line as this tech progresses, and could this just be the teething stage for hardware physics (particularly Ageia's APIs)? Sure.

Like what was said, though: the fact that BOTH Nvidia and ATi are working with Havok... I'd be a little scared if I were Ageia, "who's going to buy us out" scared. All of their points are rather useless and in some spots ridiculous, as I'm sure we'll see Havok-optimized games shortly, perhaps with playable frame rates and better effects before Ageia's, thanks to the higher processing ability! On top of that, the fact that Nvidia and especially ATi are adding physics support to their current cards, for use once a next-gen part handles graphics, gives them a default audience; they already have a foot in the door.

I could pick apart every ridiculous Ageia comment, but I like the "Who would sacrifice horsepower for physics...?" one. Do they not understand that ATi's plan is for you to use your OLDER card for physics when you upgrade to a NEWER one that wouldn't be CrossFire-compatible with it? I.e., an X1K card when grabbing an R600, or an X1600-X1800 when using an X1900.

All of their comments sound deceiving, CYA, or just plain ignorant. I know this sounds like I'm trying to bash them, but they come across as uninformed and borderline idiotic. I expect them to be chased out of the scene, if not eaten, rather quickly. I do give them credit for pushing technology forward, though; they deserve that.


RE: physics
By Strunf on 6/9/2006 8:59:40 AM , Rating: 2
Scared of being bought? Nah, that's probably their goal...


RE: physics
By walmartshopper on 6/9/2006 4:12:18 PM , Rating: 2
Maybe somebody else can expand on this, but it seems like physics cards should be able to work in conjunction with the new instancing feature of Shader Model 3.0, which according to Microsoft "allows many varied objects to be drawn with only a single command." As we've established, more physics means rendering more objects for the graphics card and therefore lower fps. But in some cases, such as explosions, when you are dealing with a bunch of similar objects, couldn't instancing be used to draw the extra objects without slowing down performance much? Would this be left to game developers? Or could Ageia implement this in their API?

Maybe I'm wrong, but this seems like a fairly simple solution to the performance problem.


RE: physics
By AndreasM on 6/9/2006 4:46:17 PM , Rating: 2
quote:
Maybe somebody else can expand on this, but it seems like physics cards should be able to work in conjunction with the new instancing feature of Shader Model 3.0, which according to Microsoft "allows many varied objects to be drawn with only a single command." As we've established, more physics means rendering more objects for the graphics card and therefore lower fps. But in some cases, such as explosions, when you are dealing with a bunch of similar objects, couldn't instancing be used to draw the extra objects without slowing down performance much? Would this be left to game developers? Or could Ageia implement this in their API?


Instancing would help some, but the objects would still need to be run through the shaders, which are becoming the bottleneck in GPUs. Using it is up to game developers, though.


RE: physics
By freon on 6/9/2006 10:01:01 AM , Rating: 2
It has already been proven that the extra graphics load is NOT what causes the frame-rate drop. Ageia is the only one who tried to use that BS excuse, and they have since tried to make driver updates and optimizations to get rid of the performance issue.


RE: physics
By Targon on 6/10/2006 7:40:07 AM , Rating: 2
3Dfx was on the leading edge of 3D acceleration, so it DID start out better. The fact that the other 3D accelerators were worse doesn't change that.

The fact that the 3Dfx Voodoo chip improved framerates is a key reason why 3D acceleration became a standard feature: most people will NOT accept very poor framerates just for eye candy. Going from 100fps to 60fps would be acceptable, but not from 60fps to 20fps.

If the current physics processors really improved gameplay, that would mean more than just eye candy. Better visuals (so you can see details like switches and controls from a distance), for example, are something people might pay for; better explosions alone are not. Being able to take down a wall with an RPG and climb on top of the rubble for a tactical advantage, or to advance through the game, might be useful, but again, if the frame rate drops to 5fps, it won't do well.

Anti-aliasing is a similar type of thing: if turning it on drops your framerates too much, it's NOT something you will leave on during normal play. It has taken a LONG time for graphics cards to become powerful enough to allow AA during normal gameplay on mid-level video cards, but we are finally at that point.



RE: physics
By tygrus on 6/11/2006 9:07:39 PM , Rating: 2
The current PhysX drivers and game implementations still use the CPU. New drivers and game editions will make better use of the PPU's resources and balance them against frame rates. Look at the performance and quality changes newer drivers brought for nVidia and ATI GPUs. The GPU makers may have to increase geometry processing power to show the advantages of greater detail and interaction.


RE: physics
By ElFenix on 6/9/2006 9:47:47 AM , Rating: 2
"Now, when 3D acceleration first came out, it made it so games looked better, and software mode looked ugly in comparison because game companies didn't even try to provide that level of graphics in software. That made an accelerated game obviously better with a 3D card than without. It stood out, and the improved graphics that 3D acceleration brought also improved the framerates as well. "
Actually, framerates didn't improve to start with. Matrox Mistake and ViRGE, anyone?


RE: physics
By TheDoc9 on 6/9/2006 10:30:15 AM , Rating: 2
I remember the Mistake! I had one! What a classic POS card. Compared to a 3dfx card of the day, it was a joke.


RE: physics
By jkostans on 6/9/2006 10:54:08 PM , Rating: 2
I had an onboard S3 ViRGE... what a joke. The ViRGE was dubbed a "3D decelerator." I think it's fair to compare the Ageia PhysX to the S3 ViRGE in terms of usability. I'll stick with "software" rendering until something worthwhile comes out.


2 not 3
By tapa on 6/9/2006 8:50:57 AM , Rating: 2
You don't need 3 cards to have ATI physics. You only need two, one of which could just as well be an X1600 or a cheaper 80nm refresh. But if you want CrossFire + physics, you naturally need 3 cards.




RE: 2 not 3
By HDBanger on 6/9/2006 9:27:07 AM , Rating: 2
"ATI's announces the latest perk of CrossFire"
"Two's company, three's a crowd -- that is a saying around these parts. ATI is looking to make "three" the magic number when it comes to physics on desktop PCs."

Nope, you need 3 ATI cards with two running in CrossFire. Nowhere does it say it will work with just 2 cards..


RE: 2 not 3
By HDBanger on 6/9/2006 9:30:37 AM , Rating: 2
Even says right in this article above..

"ATI's solution depends on three Radeon X1K series cards"


RE: 2 not 3
By KristopherKubicki (blog) on 6/9/2006 9:33:32 AM , Rating: 2
Well ATI's TriplePlay requires three cards. I do think he is correct in saying you can use one card for graphics and one for physics if you want, but the material for this hasn't surfaced yet.


RE: 2 not 3
By Marlowe on 6/9/2006 9:39:09 AM , Rating: 2
I've read elsewhere that this is true.. no need for 3 cards..


RE: 2 not 3
By Marlowe on 6/9/2006 9:45:33 AM , Rating: 2
Like from the ATI press release..
http://ir.ati.com/phoenix.zhtml?c=105421&p=irol-ne...
quote:
CrossFire offers gamers a choice of physics configurations rather than being locked into symmetrical setups. This flexible architecture allows asymmetrical configurations as unlike cards can be used for physics processing in both 1+1 and 2+1 setups where one or two graphics cards are used for game rendering, while another card is used for physics. This open architecture accommodates all gamers, whether they want to use a high-end graphics card for physics, or a mainstream card.


RE: 2 not 3
By Marlowe on 6/9/2006 10:00:27 AM , Rating: 2
Also, as far as I know, Microsoft is working on an API for physics acceleration, like they've done for 3D... so PhysX/Havok may be competitors at the moment, but I bet it'll end up the same way as with OpenGL and DirectX.


RE: 2 not 3
By AndreasM on 6/9/2006 4:39:00 PM , Rating: 2
quote:
Also, as far as I know, Microsoft is working on an API for physics acceleration, like they've done for 3D... so PhysX/Havok may be competitors at the moment, but I bet it'll end up the same way as with OpenGL and DirectX.


They are:
http://members.microsoft.com/careers/search/detail...

This part is especially interesting:

quote:
You will be a member of the core engine team who will be primarily responsible for working closely with our Direct3D team, helping to define, develop and map optimized simulation and collision algorithms onto data structures that are optimized for the GPU.


I think it's possible that DirectPhysics will support CPU, GPU and PPU acceleration. This would be the best solution IMO. Ageia has stated in interviews that they are prepared to support a physics API if Microsoft comes up with one; this would pretty much kill off Havok, though, as they would become redundant.
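No DirectPhysics API ever surfaced publicly, so the following is a purely hypothetical sketch of the "one interface, three kinds of acceleration" idea being speculated about here; every name in it is invented.

    #include <memory>

    // Hypothetical: a common interface with interchangeable backends.
    struct PhysicsBackend {
        virtual ~PhysicsBackend() = default;
        virtual void step(float dt) = 0;  // advance the simulation by dt
    };

    struct CpuBackend : PhysicsBackend { void step(float) override { /* scalar/SSE path */ } };
    struct GpuBackend : PhysicsBackend { void step(float) override { /* shader-based path */ } };
    struct PpuBackend : PhysicsBackend { void step(float) override { /* dedicated-card path */ } };

    // Pick the best backend the machine offers, falling back to the CPU.
    std::unique_ptr<PhysicsBackend> pickBackend(bool hasPpu, bool hasSpareGpu) {
        if (hasPpu)      return std::make_unique<PpuBackend>();
        if (hasSpareGpu) return std::make_unique<GpuBackend>();
        return std::make_unique<CpuBackend>();
    }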


RE: 2 not 3
By gersson on 6/9/2006 10:09:50 AM , Rating: 2
Yep, you can use 1 card for video and 1 card for physics, or 2 for video and 1 for physics.

In my rig, I'd just have to add an X1600 for $130 -- way cheaper than the Ageia card.

Physics would be mostly visual, but then again, who wants to pay for something they can hardly notice?


Ati's solution
By mushi799 on 6/9/2006 3:12:58 PM , Rating: 2
...i'll have to buy three cards. LOL, uhmm no thanks




RE: Ati's solution
By tigen on 6/9/2006 5:18:36 PM , Rating: 2
lol no, you can use two if you want. lots of dense people on this board...


RE: Ati's solution
By tigen on 6/9/2006 5:23:20 PM , Rating: 2
Actually, this DailyTech article is still pretty misleading, with all the talk about 3 cards and being "forced" to get them. Why would people be upset about being ABLE to use 3 cards?


RE: Ati's solution
By mushi799 on 6/9/2006 7:39:04 PM , Rating: 2
No dude, you need 3 cards for TriplePlay, hence the "triple" in "TriplePlay."


RE: Ati's solution
By mushi799 on 6/9/2006 7:44:06 PM , Rating: 2
Nvm, I see where the two came from. I don't know why DailyTech is still posting the 3-card requirement.


3 cards?
By farlander on 6/9/2006 10:50:36 AM , Rating: 2
I thought it would work in either a 2- or 3-card setup: one for graphics and one for physics, or two for graphics and one for physics. No?




RE: 3 cards?
By KristopherKubicki (blog) on 6/9/2006 11:04:25 AM , Rating: 2
It can be used in two card configurations, but it is no longer named TriplePlay. I have updated the article to reflect this.


RE: 3 cards?
By z3R0C00L on 6/9/2006 12:16:59 PM , Rating: 2
ATi's x1K physics solution.

Single X1K (graphics plus lower-precision physics on one card).

CrossFire X1K (one card for graphics, the other for physics, with load balancing between graphics and physics on the second card; high-precision physics; both cards the same model).

Dual X1K (one card for graphics, the other for physics; they can be different makes/models; X1600 or higher required).

Triple Play (two graphics cards in CrossFire for graphics, one graphics card for physics; the CrossFire pair must be the same model; the physics card can be a different model or make, but must be an X1600 or higher).

This is ATi's stance on physics. Another thing worth noting is that ATi will have several different API approaches: ATi's own API is being worked on WITH Microsoft (so nVIDIA will eventually have to adopt it), and above that ATi is, like nVIDIA, supporting Havok's engine.

In other words, ATi has a TRULY winning combination; it seems they've covered all their bases. Might I add that the X1300 and X1600 are being released for PCI Express x4 slots? (HIS reference boards; the X1600-or-higher rule of course still applies.) This means more current motherboards can already support TriplePlay, from a technical perspective. The sketch below encodes these pairing rules.
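A small sketch encoding those pairing rules exactly as stated in this post (the poster's summary, not confirmed ATI requirements; the model ladder is simplified):

    #include <string>

    // Simplified model ladder; declaration order gives the ranking.
    enum class Tier { X1300, X1600, X1800, X1900 };

    struct Card {
        std::string model;  // e.g. "Radeon X1600 XT"
        Tier tier;
    };

    // The "X1600 or higher" rule for the physics card.
    bool okForPhysics(const Card& c) { return c.tier >= Tier::X1600; }

    // Dual: any graphics card plus any X1600-or-higher physics card.
    bool validDual(const Card& /*gfx*/, const Card& phys) {
        return okForPhysics(phys);
    }

    // Triple Play: the CrossFire pair must match; the physics card may
    // be any make or model at X1600 or above.
    bool validTriplePlay(const Card& cf1, const Card& cf2, const Card& phys) {
        return cf1.model == cf2.model && okForPhysics(phys);
    }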


RE: 3 cards?
By Strunf on 6/9/2006 1:39:33 PM , Rating: 2
A PCI-E x16 card can be used in a PCI-E x4 slot as long as the slot isn't "closed." It's quite possible for motherboard makers to fit a PCI-E x16 slot even when not all the lanes are wired; they did this very often at the beginning of SLI, using a PCI-E x16 slot with only x8 lanes connected. If you have a board with a PCI-E x4 slot, you just need to cut a bit of plastic so the x16 card can fit, and it should work (unless a special driver is needed, but I doubt it).

Anyway, I'm really looking forward to the ATI solution, more so than the nVIDIA one, which seems a lot more elitist…


Three cards
By Hare on 6/9/2006 3:22:03 PM , Rating: 2
Why is everyone moaning that you need three ATI cards? First of all, Ageia recommends SLI just so you get decent FPS when there's a lot of extra junk flying around the screen, so the situation is no better there. And who cares whether it's a GPU or an Ageia PPU producing the physics? No one. Except that in ATI's case you could recycle an old GPU as the physics processor; I see that as a benefit.

I do believe, however, that this is just hype that will slowly go away, since there are no games and the PPU costs too much. This product won't be a success. Besides, with the trend toward multicore CPUs you can pretty easily assign more physics tasks to the other, mostly idle core. Granted, it's not as efficient as a purpose-designed PPU, but seriously, who's going to pay for another $300 card? Maybe 1 in 1000 gamers?

I believe pretty strongly that the separate PPU is going to be merged into GPUs.




RE: Three cards
By z3R0C00L on 6/9/2006 3:46:45 PM , Rating: 2
Technically, a PPU is a GPU... in the ATi X1K sense.
DX10-level VPUs will be every bit as feature-packed as AGEIA's PPU (if not more so). R520/R580 VPUs are already on a level playing field with AGEIA's PPU.
You see, Microsoft is moving towards making VPUs/GPUs into multipurpose processors (this is the idea behind DX10 and its unified shaders).


RE: Three cards
By mushi799 on 6/9/2006 7:41:49 PM , Rating: 2
People said the same thing when 3D games started. The original Voodoo cards had a hard time starting out because there were no killer apps until the release of GLQuake and eventually Quake 2.

We'll have to wait for AGEIA's killer app; it might be UT2007.


RE: Three cards
By Hare on 6/10/2006 8:38:27 AM , Rating: 2
I don't think the analogy is valid. Voodoo cards were not that expensive, and there were already 3D-accelerated games before the Voodoo. Now you need an SLI setup to actually render the extra particles with decent fps. I'm not saying that PPUs will never be popular; I'm just saying that they will likely merge with graphics cards, that the cost/benefit of physics cards is currently far too poor, and that the benefits interest only a niche market.


ATI Lied About Performance
By AggressorPrime on 6/9/2006 1:27:10 PM , Rating: 2
Just so everyone knows, ATI claims their GPUs are 10x faster than Ageia's PPU. This is a lie, as I will show with the following links:

ATI saying that their X1900XTX can do "five million sphere-to-sphere collisions per second" and that Ageia's PPU can only do "about half a million sphere-to-sphere collisions per second"
http://www.xbitlabs.com/news/multimedia/display/20...

BFG/Asus saying Ageia's PPU can do "Sphere-Sphere Collisions/sec.: 530 Million Max"
http://www.bfgtech.com/physx/index.htm
http://www.asus.com/news_show.aspx?id=3317

ATI's lie is 1060x off the truth.
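Spelling out the arithmetic behind that figure, taking each side's published numbers at face value:

    ATI's claimed gap:                    5,000,000 / 500,000   = 10x in ATI's favor
    Vendor spec vs. ATI's PPU figure:     530,000,000 / 500,000 = 1,060x

So if the BFG/Asus spec sheets are accurate, ATI understated the PPU's throughput by three orders of magnitude.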




By epsilonparadox on 6/9/2006 1:40:23 PM , Rating: 2
So you'd rather believe the manufacturers of the PPU card over the GPU maker? Until proper testing equipment and software are available to test these parameters, both sets of numbers are useless.


RE: ATI Lied About Performance
By z3R0C00L on 6/9/2006 3:43:07 PM , Rating: 2
Easy response...
AGEIA is using theoretical numbers; ATi is using numbers from a real-world test.

As we know, results vary depending on the conditions and complexity of the tests.

I'll wait for tests, but I'm more inclined to believe ATi over AGEIA.


Reminds me of...
By cochy on 6/9/2006 5:33:30 PM , Rating: 2
Glide vs. Direct3D

We all know who ended up ahead in that game. If ATI and Nvidia both support one API and Ageia is alone developing another, it doesn't look good for the little guy, unless they come out with something special that BLOWS everything else away. But if their first product is any indication, that won't happen. They'd better get moving, because the big guys are on their trail.




RE: Reminds me of...
By Iscabis on 6/10/2006 2:05:47 PM , Rating: 2
True...

Right now, the PPU degrades performance. Why would anyone in their right mind pay $300 to degrade performance?!

I was excited about the PPU, but the first benchmarks totally shattered all expectations for the product.

So much hype for such lackluster results.


RE: Reminds me of...
By Lakku on 6/10/2006 6:14:24 PM , Rating: 2
1) Before I get to Iscabis's post, I have to say I agree with a couple of things. First off, until the PPU gets to around $200 or below, or gets more than one killer title, it will remain unpopular.

2) IMO, a PCI Express version would be beneficial, due to its bi-directional nature. This seems like a good idea when you need to update the world and move data constantly.

3) NO game out now was built from the GROUND UP to use the PPU; it was tacked on later. GRAW uses Havok as its physics API, and I doubt they took the time to implement the PPU correctly, so it is NOT a good test. CellFactor is a poor test as well, simply because it is unpolished graphically and a lot of the performance issues are due to visual complexity rather than physics complexity. It looks poorly coded, so the new beta should be the one used for comparison. Either way, someone needs to do an "official" test of PPU versus non-PPU performance in CellFactor. People who have tried vary in their reports, but many claim there is a 10 to 15fps drop without a PPU. Point being, everyone says it degrades performance, but it has not yet been properly utilized.

4) Without the PPU, CellFactor has no cloth or fluid physics. These have the potential to add even more immersion. No software solution has done them effectively, so some kind of hardware is going to be needed. GPU or PPU? Who knows, but I know I damn sure don't want to waste any GPU power on physics. ATi has a neat idea, but I will remain skeptical.

The PPU has known titles coming that will use it, so reserve judgment until then, because GRAW is NOT a good example. If they really do have dozens of titles coming that use it, they will be a year or two ahead of everyone else with many titles to their name. I don't own one and won't until I see more, or until a PCI-E version comes out.


A year from now, AMD 4x4
By jonobp1 on 6/9/2006 2:13:05 PM , Rating: 2
Right now I see this as a battle for standards: Ageia versus the Havok approach, with support coming from the people trying to make money. In a year, DX10 may set some standards for physics in games just like for everything else. The real winner will be the first company to make an affordable co-processor for the upcoming AMD boards. Personally, many people I work with can't wait to have a dedicated math co-processor alongside dual- or quad-core Opterons. I see video cards as something for graphics, but I think a $150 physics co-processor would sit very well with gamers. I'm sure a great deal of use could be gotten from such a co-processor beyond gaming, too. Besides, PCI bus vs. HyperTransport... there's much more life in the latter.




RE: A year from now, AMD 4x4
By peternelson on 6/9/2006 2:49:30 PM , Rating: 2

Can't wait? Why wait? Buy it now. There are Xilinx Virtex-4 accelerators that plug into Socket 940 Opteron boards. The only factor to put you off is the pricing: $4500+. You could buy a standalone Xilinx part cheaper than that.


By Trisped on 6/12/2006 2:42:49 PM , Rating: 2
quote:
Graphics processors are designed for graphics. Physics is an entirely different environment. Why would you sacrifice graphics performance for questionable physics? You’ll be hard pressed to find game developers who don’t want to use all the graphics power they can get, thus leaving very little for anything else in that chip.
“Boundless Gaming” is actually enabled by AGEIA’s Gaming Power Triangle in which the PhysX processor adds true physics to the mix instead of leaving it to a repurposed graphics processor.
AGEIA further says that developers are announcing more and more games that support its PhysX product, while no one is announcing support for ATI's method. Steele also mentioned that while he's glad that ATI has agreed that physics is important, ATI is delivering a "questionable" solution to physics processing.

Steele also emphasized that PhysX is available now while ATI's solution is not.

Yes, graphics processors are designed for graphics, but they are processors at heart, and physics is a basic process. The hardware environments are so similar that it really isn't worth worrying about the distinctions at this point in the game.
If you look at GPU usage charts per frame, there is a large amount of unused processor time, both because the card must have enough power to render a complex scene in about the same time as a simple one, and because some computations require the results of previous ones, which means a few other jobs can be pipelined in at the same time without losing performance. Besides, having an extra graphics card just for physics means lots of power just sitting there.

As for "Boundless Gaming," I wouldn't call anything that reduces games to unplayable frame rates "boundless." Yes, game makers are going to push the graphics as far as they can, but they also have to make games run on old computers, on computers without physics processors, and on 30-inch monitors. For that reason you can manually adjust things like AA, AF, texture sizes, field of view, and resolution. Some will max out their visuals while others will max out their physics and limit the visuals.

As for availability, sure, ATI's isn't out yet and no games are said to support it, but there are no games that support the PhysX at a playable frame rate either.

They are just talking out of their butt because they realize how crappy their product is and how many people think/want/hope ATI's will be better. I really wanted Ageia to do well, but the facts just don't line up.




By crazymike on 6/22/2006 3:56:37 PM , Rating: 2
At this moment it is clear that $300 is a waste of money for a PPU, and even a drop of $100-150 would not do much to change things, especially when an X1600 costs less than $150.

Even in the future I don't see this changing, especially with quad-core CPUs.

This Triple Play offer from ATI did make me think of one thing: why not add the possibility of CrossFire with different GPUs? That way we could add an X1600 XT to an X1800 XT for extra performance, because an X1800 XT alone would probably be overkill for most people.


number of wheels
By Strunf on 6/9/2006 6:33:28 AM , Rating: 2
The analogy with the number of wheels is rather dumb, since all cars have 4 wheels (or most of the time they do). IMO, comparing it to the horsepower of a car would be more telling, but then people would realize that while more horsepower doesn't always mean a faster car, it very often does.




By Strunf on 6/9/2006 6:39:01 AM , Rating: 2
Think about it: gamers change graphics cards very often, and with ATI we just got a new reason to keep the old card. This means you're actually getting a physics processor without paying for it. Unless you start a new system from scratch, but then you can find many cards far cheaper than the Ageia PPU.

And AMD also speaks of adding a PPU to their new architecture…




By TheDoc9 on 6/9/2006 10:39:14 AM , Rating: 2
Why on earth would I want to buy a GRAPHICS card that I can't use for graphics?

Not to mention the hidden cost of a motherboard with at least 3 PCI-E slots, and a chipset that supports it.

Why buy a low-grade X1K? Why not get 3 top-of-the-line X1Ks?

I know I won't be buying 3 ATI cards, and the only way I'll buy Ageia is if the price drops to $100. Now, if Ageia manages to get their physics chip integrated onto an ATI or Nvidia card...





By peternelson on 6/9/2006 2:43:59 PM , Rating: 2
An appropriate response from Ageia would be to announce:

a) a PCI Express version of their product *NOW*;

b) the ability to use a pair of Ageia boards in tandem to scale physics performance further;

c) the ability to use their PhysX PPU to accelerate other things, like math libraries and serious simulation applications.

Responding with mere TALK dismissing the competition is not that impressive.

Fortunately for them, ATI does *NOT* own the graphics market outright, so their competition is divided, although NVIDIA is also leading the way with non-graphics acceleration on its GPUs.





Do we really need a PPU?
By Le Québécois on 6/10/2006 3:39:21 PM , Rating: 2
I mean, I know that at some point we might need them, but look at a game like FEAR for example. I "think" all the physics and AI in this game are CPU bound, and yet there's next to no improvement in fps (like 5%) between an Athlon 64 3000+ and an FX-60, whatever CPU you use, and I'm not talking about crazy 1600 resolutions but even 640x480, where the CPU is supposed to be the limiting factor.

Either Monolith has done a pretty impressive job with the Havok engine and their AI, or I missed something along the way that makes even a 7800 GTX 512 SLI setup the limiting factor at 640x480.

My point is that to me, all this PPU talk is about programmers getting lazy and not trying to optimize their code to the max so that we can USE our current CPUs to their maximum potential.




"Nowadays you can buy a CPU cheaper than the CPU fan." -- Unnamed AMD executive

Related Articles













botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki