

107 comment(s) - last by Wwhat.. on Mar 30 at 5:25 AM

AGEIA's new PhysX cards will be available starting in May

AGEIA is entering new territory with its new Physics Processing Unit (PPU). The company is hoping to do for the world of physics what 3dfx did for 3D graphics 10 years ago. The new add-in board will be available from leading board manufacturers, including ASUS and BFG, beginning in May of 2006. AGEIA is also touting design wins from the leading enthusiast-level PC manufacturers in the industry. The company claims:

“AGEIA is charting new territory by bringing dedicated physics hardware to market that delivers the real-time physics gameplay that gamers and developers alike have been clamoring for,” said Manju Hegde, CEO at AGEIA. “With the PhysX accelerator board in these new PCs from Dell, Alienware and Falcon Northwest, gamers now have future-proof systems for a fast-growing library of great games that exploit their power.”

The initial cards will be available in PCI format with 128MB of onboard GDDR3 memory. Hopefully, PCIe versions are planned for the future, seeing as how PCI is on its way out.

AGEIA is confident that its PhysX processor will enhance the user experience in the following situations:

  • Explosions with dust, debris and shrapnel that cause collateral damage
  • Characters with joints, convexes and other complex geometry that enable realistic motion
  • Spectacular new weapons with unpredictable effects
  • Lush foliage that bends and sways when brushed against by the player or other characters or objects
  • Dense smoke and fog that ooze naturally around moving objects
  • Fluids that ebb and flow, drip, or spray naturally with physical characteristics dependent on their viscosity
  • Cloth that drapes, flows, tears and billows depending on where it is placed and the environment

This latest announcement means that AGEIA will be competing head to head with NVIDIA and Havok in the realm of physics processing. AGEIA is going with a dedicated card while NVIDIA and Havok are offloading some of the physics calculations to a single GPU in an SLI configuration. Each comes with its own set of advantages and disadvantages so it will be interesting to see which solution (if any) becomes the norm for future gaming systems.

In a recent announcement from AGEIA, the company claimed NCSoft will be one of the largest supporters of PhysX, with support for the engine in City of Villains and other upcoming titles.

For more information on AGEIA's new PhysX processors, you can read previews from Hot Hardware and PC Perspective.


Comments

chicken/egg
By nrb on 3/23/2006 6:41:23 AM , Rating: 2
I think this device may suffer from the same kind of chicken/egg problem that once threatened 3D graphics - only worse.

The effects one can produce using a physics processor are very impressive - but they're so demanding that a computer without a physics processor can't come anywhere near reproducing them in software.

This creates a problem: if developers make games that will run on a non-boosted PC, there will be no benefit to be gained on those games by adding a physics processor, so no one will buy one; but if they write games that take full advantage of a physics processor, the games won't run on anything else - and, because no one has one yet, no one will buy the game. Thus, developers will be reluctant to make games which actually take advantage of the physics chip.

The situation with 3D graphics was a little different, in that it was (comparatively) easy to make "hybrid" games, i.e. those that could run in software but ran better with 3D hardware. If the software version ran at 320x240 resolution with no filtering, no mip-mapping, and no perspective correction on textures, it was still quite playable; and there was an immediate benefit to upgrading to a 3D card, because now you could run at 640x480 with bilinear filtering, mip-mapping, and perspective correction.

But having two different versions of the same game with entirely different physics behaviours is going to be much trickier to do well.




RE: chicken/egg
By ComatoseDelirium on 3/23/2006 7:15:44 AM , Rating: 2
Doubt it. Everyone who is willing to buy SLI (especially with the BS SLI-PPU idea nVidia came up with) will get this for $300, and it will be considered an enthusiast product. All games have physics, so it will likely be implemented across the entire game, with some kind of low, medium, high, and ultra-high settings, and four kinds of low, middle, high, and unnecessarily high card variants. And I'm sure with the backing of both ATI and nVidia, some kind of PPU will be here by the end of the year. Although a PPU/video card hybrid seems like a good idea, making it for PCI shows they have thought about what steps they need to take to make it mainstream. What we will need: PPU benchmarking software, for our e-penises.


RE: chicken/egg
By nrb on 3/23/2006 8:23:52 AM , Rating: 2
quote:
Doubt it. Everyone who is willing to buy SLI (especially with the BS SLI-PPU idea nVidia came up with) will get this for $300, and it will be considered an enthusiast product.
Even if no software actually uses it?

quote:
All games have physics, it will likely be implemented over the entire game, and have some kind of low, medium, high, and ultra high settings, with 4 kinds of low, middle, high, and unneccessarily high card variants.
That was my point, though: how could the same game usefully have "high physics" and "low physics"? What sort of in-game behaviour would change as a result of the physics calculations being done differently? Imagine a game like an FPS: will the way the weapons work (e.g. shrapnel damage) actually change depending on which physics system you're using? Will fire and water behave differently? Will people and vehicles move and fall differently? Four different physics systems effectively means four different games with four different sets of game rules. How can the developers ensure that the gameplay is balanced equally well between all four of them without spending four times as long doing it?


RE: chicken/egg
By masher2 (blog) on 3/23/2006 8:37:35 AM , Rating: 2
It's an excellent point, and given more weight when you consider interactive physics in online multiplayer games. If one person has an Ageia card and the other doesn't, how can gameplay be equalized between them if their two equal actions result in two different outcomes?



RE: chicken/egg
By nrb on 3/23/2006 11:00:32 AM , Rating: 2
quote:
when you consider interactive physics in online multiplayer games. If one person has an Ageia card and the other doesn't, how can gameplay be equalized between them, if their two equal actions result in two different outcomes?
That particular problem is fairly easy to solve, actually: you have a single computer that acts as the game server and models the state of the game world. The players' PCs tell the server what controls each player is pressing (thus causing the state of the game world in the server to be updated) and also do the job of converting the server's state into graphics and sound.

Thus, all calculations pertaining to the state of the game world (including physics) are done in just one place: on the server. That would either have a physics chip or it wouldn't.

However, that does mean that players who do have physics chips on their PCs wouldn't be able to use the advanced physics system in a multiplayer game unless the server did too. Similarly, players used to playing the game without the physics chip would have to learn a whole new way of playing to play online on a physics-enabled server.
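For illustration, the server-authoritative setup described in this post might look something like the sketch below. This is a hypothetical toy model, not any real engine's code: clients send only their control inputs, one machine runs the physics step, and everyone receives the same authoritative world state, so it no longer matters which client owns a physics chip.

```python
class PhysicsBackend:
    """Software fallback; a PPU-backed version would subclass this."""
    def step(self, world, inputs, dt):
        for name, (pos, vel) in world.items():
            ax = inputs.get(name, 0.0)      # player input -> acceleration
            vel += ax * dt
            pos += vel * dt
            world[name] = (pos, vel)
        return world

class GameServer:
    def __init__(self, backend):
        self.backend = backend
        self.world = {}                      # entity -> (position, velocity)
        self.pending = {}                    # entity -> latest input

    def receive_input(self, player, accel):
        self.pending[player] = accel         # clients only send controls

    def tick(self, dt=1.0 / 60):
        # All physics runs here; every client gets the same state.
        self.world = self.backend.step(self.world, self.pending, dt)
        return dict(self.world)              # snapshot broadcast to clients

server = GameServer(PhysicsBackend())
server.world = {"p1": (0.0, 0.0)}
server.receive_input("p1", 10.0)
snapshot = server.tick()
```

Whether the server steps its world with a PPU or in software is invisible to the clients, which is exactly why the poster notes that the chip would only matter on the server side.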


RE: chicken/egg
By masher2 (blog) on 3/23/2006 11:49:16 AM , Rating: 2
That solution removes the impetus for buying the chip entirely, however. If the server controls the physics entirely, then why would clients (players) ever buy one? It would never be used.

If you move to a model where one client can act as a server, at least for calculating in-game physics, you can solve the problem... but that would require a fairly extensive bit of recoding for most games.


RE: chicken/egg
By lemonadesoda on 3/23/2006 5:22:52 PM , Rating: 2
Not really, why?

>> Most hardcore gamers want the extra FPS at the cost of candy. Hence, they turn off "maximum effects".

With this solution, the performance penalty of the candy could be very small.

(This answer assumes we are talking candy, and not change-of-world-environment effects).



RE: chicken/egg
By fishbits on 3/23/2006 1:11:36 PM , Rating: 2
The behavior doesn't have to change in order for the card to provide a benefit. Your CPU can handle both graphics and audio processing, but offloading them to dedicated units doesn't wreck the outcomes in an FPS. On the system with them, the graphics/audio are nicer and/or the framerate is higher.

So some ways to use the card in an FPS would be higher-poly environments, better-looking weapon effects, more shrapnel and giblets flying around more realistically, better quality cloth/hair/liquid/smoke animations, etc., without changing gameplay. The guy on the other end sees "standard"-looking (or no) giblets, weapon effects, etc. When folks see the video quality improvement from much better GPUs, they want this. When folks see the video/environmental quality improvements a well-utilized PPU can provide, they'll want it too. And soon enough they'll be adopted widely enough that we'd look at it the same way we would if someone decided not to get a sound card or 3D accelerator. "Your loss, homey." It'll be part of the standard.
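One hypothetical way to structure what this post describes (the split and the names are made up for illustration): gameplay-relevant physics always runs identically on every machine, while the PPU only ever adds cosmetic particles that never feed back into game state.

```python
def simulate_frame(gameplay_objects, cosmetic_emitters, has_ppu):
    # Gameplay physics: identical on every machine (stand-in for a real sim).
    events = [obj.upper() for obj in gameplay_objects]

    # Cosmetic physics: extra debris only where hardware allows it.
    debris = []
    if has_ppu:
        for emitter in cosmetic_emitters:
            debris.extend(f"{emitter}_particle_{i}" for i in range(100))
    return events, debris

events_a, debris_a = simulate_frame(["crate"], ["explosion"], has_ppu=True)
events_b, debris_b = simulate_frame(["crate"], ["explosion"], has_ppu=False)
assert events_a == events_b          # same outcomes for both players
assert len(debris_a) > len(debris_b)  # PPU owner just sees more eye-candy
```

The key design choice is that the `debris` list is write-only from the gameplay simulation's point of view, so both players resolve the same firefight the same way.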


RE: chicken/egg
By masher2 (blog) on 3/23/2006 1:14:15 PM , Rating: 2
You're missing the point entirely. Obviously using a PPU to enable more eye-candy is trivial. The issue is using it to enable interaction, with gameplay-affecting consequences.


RE: chicken/egg
By fishbits on 3/23/2006 1:35:21 PM , Rating: 2
The original post implies that, because of the difficulty of implementing physics, the card won't sell / won't be supported by developers / won't sell... etc. While I agree that it will be something to overcome, the card can sell like hotcakes even before then because of the boost it will give to the candy portion of gaming. After that, soon(ish), developers can count on gamers having dedicated physics support, the same way they count on them having sound and graphics acceleration today. Thus the mentioned barrier will not be insurmountable to adoption.


RE: chicken/egg
By kilkennycat on 3/23/2006 9:39:28 PM , Rating: 2
Agreed... Bethesda's Oblivion with Havok software physics does just fine on my dual-core system. The physics is very impressive - quite sufficiently impressive that the thought of dedicated hardware for marginally more physics realism drops out of the bottom of my PC shopping list. The Havok cooperation with nVidia is very interesting, as it means that as a game player I could trade off physics against enhanced graphics using the extra video card -- no doubt an addition to the configuration options in any supported game.


RE: chicken/egg
By Clauzii on 3/23/2006 9:58:45 PM , Rating: 2
Dedicated physics will be faster...


What's the Benefit?
By TomZ on 3/22/2006 9:02:36 PM , Rating: 2
Pardon my ignorance, but what's the benefit? Can't the main CPU do these calculations? With the intense video rendering offloaded to the GPU, the main CPU is left with doing...?




RE: What's the Benefit?
By Clauzii on 3/22/2006 9:08:46 PM , Rating: 2
The Ageia can be more than 100 times faster than current CPUs out there. That's why :)


RE: What's the Benefit?
By masher2 (blog) on 3/22/2006 9:27:26 PM , Rating: 2
100X is a bit of a stretch. 30X the power is a more reasonable estimate.

With graphics offloaded to a gpu and physics to a ppu, there might actually be a ghost of a chance of getting decent AI in upcoming games.


RE: What's the Benefit?
By Clauzii on 3/22/2006 9:32:44 PM , Rating: 2
Sure? I think I read over 100 somewhere on Ageias site?


RE: What's the Benefit?
By masher2 (blog) on 3/22/2006 10:26:54 PM , Rating: 2
I wouldn't doubt it. It might even be true for certain specific tasks. But in general, they're claiming realtime dynamics for 32K simultaneous objects, which is about 30X more than what a fast Intel or AMD CPU can do today.


RE: What's the Benefit?
By Clauzii on 3/22/2006 10:31:03 PM , Rating: 2
Okay - pretty awesome anyway..


RE: What's the Benefit?
By MykC on 3/24/2006 12:56:27 PM , Rating: 2
When do you think we will see cards dedicated to AI?


RE: What's the Benefit?
By masher2 (blog) on 3/24/2006 2:51:42 PM , Rating: 2
Never. But I think we'll see cores dedicated to AI within 10 years - about the time high-end CPUs will be available with 60-100 cores on them.


RE: What's the Benefit?
By DangerIsGo on 3/22/2006 9:19:25 PM , Rating: 1
It allows the CPU to skip these complicated tasks, which hopefully allows better FPS and frees up the CPU for other work. I will get one when they come out and when games take advantage of it... until then, I'm saving my money for SLI :p I'd rather have SLI for gfx and a PPU for physics than spend another $300-$500 on a physics card and waste a PCIe slot.


RE: What's the Benefit?
By Clauzii on 3/22/2006 9:26:34 PM , Rating: 3
On Ageia's website, there is a list of current and coming games:

Available From Now Throughout Spring 2006
Tom Clancy's Ghost Recon Advanced Warfighter
Rise of Nations: Rise of Legends
Bet on Soldier: Blood Sport
CellFactor
City of Villains
Gunship Apocalypse

Coming Soon
Unreal Tournament 2007
Sacred II
Loki
Dogtag
Fallen Earth
Crazy Machines 2
Arena Online
Infernal
Warhammer MMORPG
Eye of the Storm
KARMA
Vanguard: Saga of Heroes
Alpha Prime


disappointed
By firewolfsm on 3/22/2006 10:14:08 PM , Rating: 2
The flamethrower in Bet on Soldier actually looks pretty fake.




RE: disappointed
By Clauzii on 3/22/2006 10:16:04 PM , Rating: 2
That's probably the game programmers' fault - not Ageia's..


Nice
By RussianSensation on 3/22/2006 10:35:45 PM , Rating: 1
I think this is a step in the right direction. I would pay for a physics processor as long as game physics improved dramatically over a dual-core CPU, which adds absolutely nothing in gaming (except for those who perform serious background tasks while gaming). Of course it would be nice to have faster performance in office tasks and when multitasking, but when dual core can't bring any benefit in games even when multitasking (it can at best match single core without multitasking), this starts to seem a lot more interesting, since it can actually change the way we play games.


RE: Nice
By Clauzii on 3/22/2006 10:44:58 PM , Rating: 2
I also think that the 5-10 GFLOPS an extra core would provide is not comparable to what a dedicated super-threaded processor can come up with. Not even quad-core could do it :)


RE: Nice
By Clauzii on 3/22/2006 10:49:41 PM , Rating: 3
Btw, it also looks like it would be possible to use it as a general FPU?

I mean - then there would be no need to increase that in a CPU. That space could be used for more cache or other units the CPU would benefit from.


RE: Nice
By Furen on 3/22/2006 11:11:55 PM , Rating: 2
I think the PCI interface would be too much of a bottleneck to use this as an FPU.


RE: Nice
By Clauzii on 3/22/2006 11:44:06 PM , Rating: 2
True, to a certain degree:

If the Ageia's 128MB is usable as general storage for calculations, I can see a lot of PCs on SETI and all kinds of cluster calculations. I could almost see the Ageia Super-Network evolve out of that.


RE: Nice
By Clauzii on 3/22/2006 11:53:03 PM , Rating: 2
And for offline rendering in 3D animations, of course.

3DStudioMAX on steroids :)


RE: Nice
By Clauzii on 3/23/2006 12:15:41 AM , Rating: 2
And: the game demos look pretty OK - so it looks like it works in reality...


RE: Nice
By goku on 3/23/2006 9:11:12 AM , Rating: 2
Doubt it - it's not like they're dealing with textures here, they're dealing with physics calculations. Anybody heard of SETI or Folding@home? Those, like physics calculations (protein folding is similar to a physics calculation), are high-processing, low-bandwidth operations - operations so small they fit in the L1 and L2 caches but take a lot of computing power and time. I'm not worried about the PCI bus being a bottleneck for this kind of stuff, because it mainly needs high computational power; it's not like a video card working with graphics.

Even with video cards it still worked out: you could get by with a PCI video card almost all the way up to 2001, when it became a real bottleneck. They're not dealing with textures, they're dealing with small, computation-heavy workloads that, if anything, need a low-latency bus and not a lot of bandwidth... I wouldn't be surprised if they could get by with the ISA bus...
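As a rough sanity check on the bandwidth question (the byte counts and update rate here are illustrative assumptions, not Ageia's figures): if the card simulates on-board, per frame it mainly has to return each object's updated position and orientation, which can be estimated against classic 32-bit/33 MHz PCI's ~133 MB/s peak.

```python
objects = 32_768                 # the 32K simultaneous objects cited above
bytes_per_object = 12 + 16       # assumed: 3-float position + 4-float quaternion
frames_per_second = 60
pci_bandwidth = 133 * 10**6      # classic 33 MHz / 32-bit PCI peak, bytes/s

traffic = objects * bytes_per_object * frames_per_second
utilization = traffic / pci_bandwidth
assert utilization < 1.0         # fits, though it's no longer negligible
print(f"{traffic / 1e6:.0f} MB/s of result traffic, "
      f"{utilization:.0%} of PCI's theoretical peak")
```

Under these assumptions the result traffic is a substantial fraction of the bus but still within it, which is roughly consistent with both this post and the PCIe comments elsewhere in the thread.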


my cpu has enough power thanks
By inthell on 3/23/2006 8:45:06 AM , Rating: 2
My OCed AMD 3500 has enough power already - I don't need this Ageia card. Or do I? Seriously, I need to see benchmarks before I even think of buying this thing.





By armagedon on 3/23/2006 9:01:27 AM , Rating: 2
With all the illogical reasoning behind the PC market: "if you build it, they will buy it." Just because you can show it off in your sig.


By masher2 (blog) on 3/23/2006 9:17:43 AM , Rating: 2
Lol, what do you expect to benchmark? This card isn't there to improve frame rates; it's there to improve game realism, interaction, and mechanics.

The whole quest for increased resolution and frame rates in games has gotten quite silly. Your average Hollywood movie running at 720x480 @ 24 FPS looks a hell of a lot more realistic than the best games. Why? In large part, it's due to the physics and interactions you see, even if you don't realize it.

But in the quest for higher pixel counts and frame rates, we've forgotten that. A game at 1600x1200 with 8xAA at 90 FPS is razor-sharp and silky-smooth. But it's still a joke compared to an average video feed.


RE: my cpu has enough power thanks
By goku on 3/23/2006 9:18:06 AM , Rating: 2
You could have a 10GHz AMD 64 and this card would still beat it at physics processing. It's highly specialized, just like a 3D video card: software rendering just doesn't cut it for video, and neither does it for physics, because a general-purpose processor isn't optimized for it at all...
What I'm really anxious for is for developers to start adding patches to current games like Far Cry so that I can take advantage of the physics processing power of the card.
I wouldn't be surprised if the system requirements for Far Cry dropped down to 600MHz as long as you've got a physics card and they make a patch for it..


Standards
By Cincybeck on 3/23/2006 9:47:15 AM , Rating: 1
Kudos to Ageia for putting this out there. However you look at it, it's a step forward akin to what 3dfx did for GFX. You also have to realize this is the first generation - an idea more so than a practical solution. There needs to be a standard, though, such as a physics API (OpenPX, anyone? =D) that takes advantage of whatever hardware you do have, whether that be extra power from your GFX card, an idle second core, or a PPU.
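The abstraction layer this post asks for could be sketched as below. Everything here is hypothetical - "OpenPX" is the poster's joke name and the hardware probes are stubs - but it shows the shape of an API that routes physics work to whatever is available, in a fixed preference order.

```python
def detect_hardware():
    # A real implementation would query drivers; stubbed here for illustration.
    return {"ppu": False, "spare_gpu": False, "idle_core": True}

def choose_backend(hw):
    # Preference order: dedicated PPU, then spare GPU shaders, then a
    # second CPU core, then plain single-threaded CPU as the floor.
    if hw.get("ppu"):
        return "ppu"
    if hw.get("spare_gpu"):
        return "gpu"
    if hw.get("idle_core"):
        return "cpu_threaded"
    return "cpu"

backend = choose_backend(detect_hardware())
assert backend == "cpu_threaded"   # on this stubbed dual-core, PPU-less box
```

A game coded against such an API would get "less physics" on weaker backends without shipping four different builds - which is essentially what Ageia claims its own SDK does with its CPU fallback (see the later comment in this thread).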


RE: Standards
By Cincybeck on 3/23/2006 9:58:20 AM , Rating: 1
I also wanted to add: the point about how the scaling is going to work is a very interesting one. For those who don't understand, take this for example. The game developer makes use of the physics card by allowing every door, wall, and barrier of any sort to be destroyed. So in this situation you can use an explosive to put a hole in any wall, anywhere on the wall, not just preassigned locations. What happens when you don't have a PPU and the power to process how the brick, wood, glass, or whatever will react to some C4 going off next to it?


RE: Standards
By masher2 (blog) on 3/23/2006 10:36:45 AM , Rating: 2
> " What happens when you don't have a PPU and the power to process how the brick, wood, glass, or whatever will react to some C4 going off next to it?"

Then you get the preprogrammed hole. Which leads to some interesting questions about how this whole scheme would work in a multiplayer game. Player A has a PPU and Player B doesn't...so they'd generate different results depending on which one actually took the action?

Sounds messy...


RE: Standards
By Cincybeck on 3/23/2006 11:00:51 AM , Rating: 1
That's what I was trying to get at - thanks for adding on =D


RE: Standards
By MMilitia on 3/23/2006 11:41:12 AM , Rating: 2
Well, I think what we'll find is that the games will work the same way, i.e. you will still be able to make a hole in a wall; the difference is that if you have a PPU, a load more debris will fall out of said hole.
When it comes to large-scale buildings falling down and whatnot, there will probably be a 'safety net' for non-PPU systems where only a certain number of objects will have accurate physics modeling, and the rest will follow the tangent of one of these objects instead of having one calculated for them.

Anyway, I hope to build a whole new high-end system soon, and I'm seriously considering getting one of these cards. Although support will be flaky at first, I will enjoy informing my geek friends that I have high-end audio, video and physics hardware.
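The 'safety net' idea above can be sketched roughly like this (all names and numbers are hypothetical): only the first N debris pieces get fully simulated trajectories, and the rest reuse one of those trajectories with a small offset, so the player still sees a full debris cloud on a cheap system.

```python
import random

def spawn_debris(count, budget, simulate):
    random.seed(0)                       # deterministic for the example
    full = [simulate(i) for i in range(min(count, budget))]
    cheap = []
    for i in range(budget, count):
        template = random.choice(full)   # borrow a computed trajectory
        jitter = (i % 7 + 1) * 0.01      # small offset so copies aren't obvious
        cheap.append([(x + jitter, y) for x, y in template])
    return full + cheap

# Stand-in "real" simulation: a tiny ballistic arc sampled at three steps.
def ballistic(seed):
    vx, vy = 1.0 + 0.1 * seed, 2.0
    return [(vx * t, vy * t - 4.9 * t * t) for t in (0.0, 0.1, 0.2)]

pieces = spawn_debris(count=50, budget=8, simulate=ballistic)
assert len(pieces) == 50                 # player still sees all 50 pieces
```

A PPU system would simply raise `budget` to `count`; a non-PPU system pays for only eight real trajectories while showing fifty.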


RE: Standards
By masher2 (blog) on 3/23/2006 11:46:32 AM , Rating: 2
> " the difference is if you have a PPU a load more debeis will fall out of said hole. "

That falls into the "eye candy" category, not the interaction category. To enable interaction, you HAVE to influence gameplay. By definition.


RE: Standards
By Wwhat on 3/23/2006 11:14:33 AM , Rating: 2
Ageia actually says that their API is designed to use the CPU instead if there's no PPU card - you'll just get less physics - so that's been thought of.
It does not try physics on the GFX card, though.


At last!
By Clauzii on 3/22/2006 8:56:46 PM , Rating: 2
I think a lot of (silent) people are looking forward to getting this, myself included sometime in summer. Physics has finally come to life :)

As ATI is already up to 48 shaders at 600+ MHz, their next major release (when they change to the 65nm process) might have 64 or 96 (like Ageia). Because it's going to be unified shaders, one (with 64) could choose to use 32 for pixels and 32 for physics. Maybe even scalable in the driver.

Although nVidia's aren't bad graphics cards either, they have to think about more pixel shaders in their GPUs soon, as both ATI and nVidia would benefit from the Ageia solution.
I don't count the other GPU makers, since they are not playing major parts in this market.

I should also think that to really use all the power in an Ageia, it's probably best to have one of the newer GPUs with a good fillrate (it might depend on the game type), since all the extra "water particles", debris, exploding pieces, etc. still have to be drawn on the screen(s).




RE: At last!
By IvanAndreevich on 3/22/06, Rating: 0
RE: At last!
By Clauzii on 3/22/2006 9:12:44 PM , Rating: 2
What do you mean?


RE: At last!
By Clauzii on 3/22/2006 9:14:30 PM , Rating: 1
I am aware, btw, that the Ageia is not a shader card :)


RE: At last!
By oTAL on 3/22/2006 10:58:43 PM , Rating: 2
It's an add-in card... it's not for graphics but for physics, so while it's not a competitor (well... for now... not really...) it's more of a complementary board... The CPU will do general calculations, the GPU will do graphics, and the PPU (Ageia) will do physics... Get it? It's a whole new card...


RE: At last!
By Clauzii on 3/22/2006 11:04:20 PM , Rating: 2
I know - so your point was?


RE: At last!
By Clauzii on 3/22/2006 11:08:00 PM , Rating: 2
First: EDIT FUNCTION :)

Second: Sorry for making a bit of a dizzy first post, but I was trying to make a comparison with the current GPUs out there...


RE: At last!
By Clauzii on 3/22/2006 10:02:29 PM , Rating: 2
"What's more, ATI is taking a broader approach to gaming physics by opening up the hardware to developers, just like they promised they would during the R520 launch. They are working on a Data Parallel Processing Architecture Abstraction layer that will allow developers to utilize the hardware without having to go through the Direct3D or OpenGL APIs. This software is being given away for free to developers and API developers to use as they see fit. And while physics is the focus for now, this architecture abstraction will be open to all other types of GPGPU work too."

..


Watch these demos...
By shabby on 3/22/2006 9:47:10 PM , Rating: 2
http://physx.ageia.com/footage.html

The GRAW and CellFactor demos look pretty cool.




RE: Watch these demos...
By Clauzii on 3/22/2006 10:07:24 PM , Rating: 2
And the price is $299 MSRP!!


RE: Watch these demos...
By Wwhat on 3/22/2006 10:48:30 PM , Rating: 2
I hate to be a cynic, but Ageia is claiming lots of impressive-sounding enhancements, yet what do we see on the demo screens? Mostly just more boxes standing around and flying off.
And frankly, just seeing the demo two times makes you bored with it. I know it's the developers who can't think of anything better, and not the fault of Ageia per se, but perhaps they should add some fancier examples to the SDK for them?
I do like the one from Ghost Recon, though, although I have the feeling that some of these effects could have been done the old way too - I've seen games with fancy explosions that threw debris aplenty.
As for the game with the flamethrower, well, that's also nothing new and has been seen for years now on hardware without a PPU.




RE: Watch these demos...
By Clauzii on 3/22/2006 10:57:40 PM , Rating: 2
The games I'd like to see would be: waterskiing, Formula 1 with extreme damage modeling, MotorStorm, deep-sea games in general (so we can get better-looking aqua-something), MotorStorm, air duels with foggy skies and big-city bombardment (not that I like the bombardment thing, but physics will be cool for 500 bombshells hitting the ground), MotorStorm... I could go on, but will not :)

Btw, did I mention MotorStorm??? :) That game (IGN, PS3, prerendered?) will be FAT. IF it can look like it does at IGN :)



RE: Watch these demos...
By Wwhat on 3/23/2006 11:01:14 AM , Rating: 2
OK you got me excited again :]
I only hope that what sounds good on paper (liquids etc.) will look as good as we imagine.

And I hope that someone manages to delve deeper into the secrets of the card so it can be used for other calculations too - say, raytracing or video encoding or something. Although it's not optimised for that, of course, it might still improve speed and add value to the card.


RE: Watch these demos...
By Clauzii on 3/23/2006 9:27:31 PM , Rating: 2
And even though I might be a PCI-version buyer at first, the PCIe version will give much higher bandwidth. I really think the card is going to be a killer... :)


No more than one version a year
By TecHNooB on 3/22/2006 11:11:09 PM , Rating: 3
As long as there's not a bajillion versions of this card, I'm all for it. I don't want to have to deliberate over which PPU to get.




RE: No more than one version a year
By Furen on 3/23/2006 12:00:43 AM , Rating: 2
I'm more interested in the price. The card only has 128MB of GDDR3, so it shouldn't be much of a problem to keep these at or under $150. Ageia needs to get support for these added to the Havok engine in order to make the vast majority of games out there support its physics engine. The problem, as I see it, is that Ageia probably won't have any competition, so if these things take off we may get royally screwed on price drops, etc.


RE: No more than one version a year
By Clauzii on 3/23/2006 12:03:53 AM , Rating: 2
With that calculation power, would $299 surprise you? I can't find a 150GHz CPU at the moment for even 10 times that... :)


RE: No more than one version a year
By Furen on 3/23/2006 12:56:19 AM , Rating: 3
The "calculation power" itself is impressive, but I'm more interested in the actual benefits we'll get from it. I know that accurate physics requires a high degree of precision, but in-game physics has been becoming more accurate as time passes because CPUs themselves are becoming more powerful. I'd rather stick to the incremental improvements we've been seeing than have to buy yet another 300-dollar part.


By AndreasM on 3/23/2006 8:46:30 AM , Rating: 2
Well, you still have to buy new, faster CPUs if you want them to become more powerful. I don't know about you, but I would rather buy a $300 card and get a huge boost in physics processing capability than spend years and hundreds of dollars on CPUs for incremental improvements. Even if only a handful of good games support it, it would still be worth it. Ageia's SDK ending up like Glide doesn't matter either; writing support for DirectPhysics in their drivers shouldn't be an issue.

http://members.microsoft.com/careers/search/detail...


It looks cool but...
By segagenesis on 3/23/2006 10:32:51 AM , Rating: 3
Not another $300 add-in card just for flying objects in a game. I'll pass.




RE: It looks cool but...
By Cincybeck on 3/23/2006 10:53:08 AM , Rating: 1
Excuse me, everyone, for posting so much, but I'm bored as hell. It's not just flying objects, it's realistic physics - basically, how physical forces interact with physical substances. Whee, I'm having fun with this... Need more coffee! Maybe I should double-major in physics and computer science and become a virtual physics programmer. =D


RE: It looks cool but...
By nrb on 3/23/2006 11:05:42 AM , Rating: 2
Some obvious applications if you can do complex physics calculations:

- Accurately modelling the way an unconscious body falls down a flight of stairs.

- Lifelike simulation of cloth, hair, etc.

- Authentic simulation of liquids (water, lava) and also vapours (such as smoke or steam). Authentic ripples and breaking waves.

- Accurately calculated heat-haze and turbulence.

- Realistic ricochets of bullets off hard surfaces.

- Convincing modelling of objects breaking into pieces, such as a demolished wall.

- Objects rolling, bouncing and sliding.


Your one sale won't hurt
By littlebitstrouds on 3/23/2006 11:05:50 AM , Rating: 2
Read up... it's prolly gonna be in the $100-150 range. While you might still pass, I bet most gamers will be getting these.

On a side note, while I want this PPU to be kick-ass... I don't think I can remember a more hyped product for gaming in a long time... If this thing bombs, look out - people are going to be piiiised.


RE: Your one sale won't hurt
By segagenesis on 3/23/2006 12:02:04 PM , Rating: 2
quote:
If this thing bombs look out, people are going to be piiiised.

Yeah, this reeks of the old hype around the Kyro tile-based rendering cards. Remember those? Or the Bitboys "revolutionary" video card that never materialized. Granted, this one is real, but still... being first with new technology almost always burns you.

I still dislike the fact that we have yet another component to add to a PC to make it "better", compared to just spending a one-time sum on a console. Segregation between the haves and have-nots yet again?


RE: Your one sale won't hurt
By masher2 (blog) on 3/23/2006 12:05:37 PM , Rating: 2
> "Segregation between the haves and have-nots yet again? "

That usually only bothers the have-nots.

Personally, I think it's a silly objection. People are already spending $1000 or more on SLI solutions that provide only tiny incremental benefits in certain games. This has the potential to provide far more impact at a significantly lower cost.

What's wrong with that?


By kilkennycat on 3/23/2006 2:48:22 AM , Rating: 2
SLI motherboard, dual 7800GTX (or GT), X-Fi audio.
Or
Crossfire dual X1900XT(X), X-Fi audio.

Tell me exactly which slot this card occupies in my desktop PC without totally destroying the ventilation of one of my video cards - in the case of the nVidia products? The PCI Ageia board literally won't fit at all with the above Crossfire combo (dual-slot-width ATi products) if a PCI audio card is also present.

Only those with money to spend on very high-end PC toys will be attracted to this product, so which current high-end functionality are they prepared to trade off to free a slot for this card -- one of the SLI/Crossfire video cards, or the high-end audio card?

Catch 22. Doomed product.




By R3MF on 3/23/2006 3:45:43 AM , Rating: 1
with the arrival of first X-fi (APU), then Ageia (PPU), followed by Unified Shader graphics (GPU), along with multicore Processors (CPU), we're really entering a golden age of computing akin to a 21st century version of the amiga revolution.

dedicated hardware rocks.

when i can get a mATX M/B to fit in a Silverstone SUGO-E case with:
16x unified shader GPU
4x X-fi APU
4x Ageia PPU

then i will be a happy man!


By FoxFour on 3/23/2006 11:05:25 AM , Rating: 2
Heh. I have this problem with a SINGLE 7900GTX (not here yet, but it should be by the time I could buy one of these AGEIA cards) and a SB X-Fi on an Asus A8N32-SLI Deluxe motherboard.

Either I put the graphics card in the lowest SLI X16 slot and stack the two PCI cards directly on top of it, or I put the graphics card in the top SLI slot, one PCI card at the bottom of the board, and the other squashed right up against the intake of the vid card.

Nasty.


And as you said, with SLI it's basically impossible. I think SLI users are pretty much FORCED to use the NVidia solution if they want to offload physics processing.

Forget price, forget performance, forget everything else. If it plain won't fit, you can't use it.



By lemonadesoda on 3/23/2006 5:18:18 PM , Rating: 2
Disagree.

You are basing your assumption on a number of factors that may well change:

1./ That it is NECESSARY to have an SLI/Crossfire combo as TWO separate cards

>> This will change. The reason you are buying TWO cards today is because the margins are bigger for the GPU and card manufacturers. With a card redesign, you can stick an SLI pair on ONE CARD

2./ That a high-end GPU requires 2 slots.

>> Lower volts, more efficient silicon design, lower heat, or other thermal methods for cooling (think out of the box, literally) mean a one-slot solution and plenty of space for other boards

3./ That in the future, to get good performance, you NEED TWO GPU's.

>> Why so? A single P4 is much faster than a dual P2, right? All we need is an order-of-magnitude increase in performance, like 3-10x on a single card, and SLI/Crossfire is a dead duck

4./ That there IS A TRADE-OFF of ageia over a second GPU.

>> Until we see this thing in action, this may be a mistaken assumption. It could be an absolute no-brainer.



By kilkennycat on 3/23/2006 9:31:11 PM , Rating: 2
Er, counted the number of PCI slots on a typical PCIe motherboard recently? Either 3 or 2, at least one of which is unusable in a dual-video-card configuration.

Anyway, you are hypothesising about the future. By the time the future comes, the PCI version of the Ageia will be gathering dust. And if Ageia does not bring out a PCIe version post-haste with 1/2 the power consumption, nVidia and/or ATi video cards operating as dedicated physics processors (with Havok's cooperation) in SLI/Crossfire/Intel 975X motherboards will have gobbled up Ageia's potential market.


By Clauzii on 3/23/2006 9:50:21 PM , Rating: 2
Half the power consumption? What are You talking about - it uses ~20 watts in the PCI version - hence the extra power connector. Might not be a must for the PCIe version, as the PCIe bus can deliver more power :)


To each their own of course...
By hwhacker on 3/23/2006 11:23:52 AM , Rating: 2
[quote]
> "$250-300 for a 128mb pci (not -e) card. It may be impressive, but that's massively inflated...even though it's 130nm"

I'm sorry, but this is really silly. You don't judge silicon by these factors, you judge by its performance. [/quote]

I think most do, actually, as a larger process adds unnecessary cost for everyone, on top of all the other detrimental effects of a larger die over a smaller one... Especially when I think most agree this thing ain't worth its price for what it does, but could be at 90nm (or smaller) and perhaps $150 or less, which is a very possible transition at only 125M transistors... Just seems like it was left in the oven too long waiting for software support. This is not to say improvements won't be made in the future as/if it takes off.

Products like this could compete with add-ons, co-processors (as AMD may do), or additions to NV/ATi's dies for physics calculations and a corresponding API (like nVidia has done with Havok)... Of course the key is finding an API that is universally accepted and engineered/designed around, or getting developers to develop for them all. I don't know about you, but I hate having multiple standards... and I figure developers will agree and eventually code for what most people have. That's why I think in its current form it seems weak. It's cool, but not worth it when we'll have other products that we already use incorporating the same thing eventually... hopefully they'll be incorporated better, accepted, and designed for using their APIs. If not, I hope AGEIA's next product is a little more polished and knows its market.

Like I alluded to with my flame suit, I don't expect this to be a popular opinion, as this stuff looks cool, but at the moment it could be done better, although granted I'm not an engineer. Personally, I'll wait for the near future from Nvidia, ATi, and AMD and/or to see whether this thing is accepted in its current form or is killed by competition. If it survives as a stand-alone product, I hope to see it in a more polished state.




By masher2 (blog) on 3/23/2006 11:44:43 AM , Rating: 2
> "as a larger process requires unnecessary cost to everyone on top of all the other detrimental effects "

Err, no, it doesn't work like this at all. A smaller process has a vastly MORE expensive fixed cost. That cost is only recovered on extremely high-volume runs of large die sizes.

We are still producing LOTS of silicon on the old 250nm process. For small-volume runs and/or small transistor counts, it's far more economical than a more advanced node.


RE: To each their own of course...
By Cincybeck on 3/23/2006 11:50:33 AM , Rating: 2
No flames here... I agree they need to get the price point right, and someone needs to pave the way for a universal standard. Right now I like Ageia's solution the best, though. Nvidia's, I think, will be lacking because of the price of SLI, and it will probably not be on par with the performance of Ageia because their GPUs are not specialized PPUs. Someone correct me if I'm wrong.


RE: To each their own of course...
By tenguman on 3/23/2006 12:03:04 PM , Rating: 2
I'm all for making my PC gaming experience better. I've done the SLI, the Dual Core AMD processor, just got a Creative X-Fi card, and now looking to purchase this PPU but, HOW THE HELL DO I GET THIS THING TO FIT ON MY BOARD?!!? DAMNIT!!!!!


By Darth Pingu on 3/23/2006 1:13:45 PM , Rating: 2
I would simply be happy if, to get things going, game companies would patch their existing games to offload the physics processing to the PPU. They can work on improving stuff later, once the technology has been made commonplace.

As far as balancing the physics between players goes... isn't that already done? I mean, most games already have the ability to change the physics levels, don't they? So this has probably been taken into account in some way already.

Personally I have been looking forward to the release of a physics chip for a while now, but when I first heard about the chip I was afraid of how it would be implemented, and I still am. In an ideal world the physics chip would be constructed in a similar manner to a Transmeta chip, so new instructions can be uploaded directly to the chip. My biggest concern is that every year or 2 a new instruction set comes out, forcing you to buy a new card that incidentally won't function correctly with older games.

Long story short, the concept of physics calculations being offloaded to a specialized chip is a great one. The implementation has the potential to scare me greatly.


I hope this product can develop market share...
By dilz on 3/23/2006 3:56:04 PM , Rating: 3
Anyone who has been around long enough can recognize the "ebb and flow" of enthusiast hardware:

3dfx and "SLI" disappeared after they went out of business and their patents were bought by nVidia. Many of us believed it would not come back, and that we would be stuck with AGP "forever." Thankfully, SLI has returned, albeit under a different marketing moniker. The end result is the same, however.

en.wikipedia.org/wiki/Scan-Line_Interleave/

Aureal and their outstanding "A3D/Vortex" audio processing that was supposed to revolutioize sound in gaming - only to be bogged down by Creative Labs' litigation. They were driven out of business and their patents were bought by Creative. We have yet (I believe) to see Creative do anything with their A3D IP and are left with iteration after iteration of EAX^X reverb - a technologically inferior solution. Your choices now are between VIA's Envy and EAX^X, since Soundstorm has apparently vanished.

en.wikipedia.org/wiki/A3D

(HTML problem occurs when I try to post the real link)

According to the Wikipedia article (which I just read), it appears that Creative is finally using A3D in the latest versions of EAX. Good for them and us! (finally...)

Now we have a newcomer, a video co-processor some might say - perhaps the missing link between our current gaming siutation, and "cinemagraphic" gaming that has been promised for years. There is a chance that this technology will also be squelched in favor of a more market-friendly solution, but I hope it succeeds.

Best of luck to AGEIA!

(Be kind - first post!)




RE: I hope this product can develop market share...
By dilz on 3/23/2006 4:00:32 PM , Rating: 2
I'm a spelling nazi, but can't edit...

revolutioNize/siTUation

That is all.


RE: I hope this product can develop market share...
By Clauzii on 3/23/2006 10:12:15 PM , Rating: 2
I like Your first post :)

And one has to think that it's been in development for ~four years, which has to be paid for. If it really kicks in, I think we'll see prices at ~$150-199 by Christmas time.


By dilz on 3/23/2006 11:37:30 PM , Rating: 3
Someone above talked about the X-Fi... I suppose that Creative is using the A3D technology in the product. I'll do some more research. I'm still waiting for discrete GPU chips that plug directly into the mobo. I guess when this happens, computers will be so much like "appliances" that much of the thrill of upgrading will be gone.

There is still the "chicken and egg" debate. With any luck, patches will be made available for existing games. If it does take off, I imagine that AMD/Intel will be somewhat peeved. The pressure will certainly be off the CPU, except for the rare Azureus/DivX-encoding/3D-benchmarking/SuperPI/F.E.A.R.-playing super-mega-hardcore user.

Seriously though, people have mentioned some good points. GPU manufacturers are clamoring to get a product to market, market pricing is almost NEVER lower than predicted, and $300 is pretty steep for an unproven new technology regardless of its awe-factor.

I got burned with my A3D card. Spending $100 on a sound card in 1998 was unheard of. While I'd love to run out and buy one of these cards, I won't be satisfied until it meets some *undefined* criteria/threshold in my mind (support in X number of games, X% improvement over a system without it, etc.).


What else can this pretty baby do?
By lemonadesoda on 3/25/2006 3:41:49 PM , Rating: 3
Although it is described as a "physics processor", that term reflects its main focus; the chip is not limited to that function alone. The SDK will allow the implementation of code for other purposes.

Basically, the PPU is a massively parallel math co-processor (and its key strength is floating-point calcs, not integer like most single- and dual-core CPUs).

If you are sitting on the fence about this card, just consider what else this pretty little baby can do for you:

1./ ZIP/RAR up to 100x faster (you are now limited by HDD performance)
2./ Encode MP3 and DivX up to 100x faster (you are now limited by HDD performance)
3./ Secure encryption on the fly
4./ Photoshop filters up to 100x faster (you are now limited by RAM and HDD performance, not CPU)
5./ If you are an office worker, speed up those EXCEL financial modelling calcs so the spreadsheet becomes "instantaneous" again.
6./ Crack passwords 100x quicker
7./ Complex CAD files rendered 100x faster

(please forgive the common use of 100x. This is purely indicative and may be higher or lower depending on how successfully the PPU can be used for these applications. No matter what the final figure, it WILL BE AT LEAST ONE ORDER OF MAGNITUDE FASTER, i.e. 10x or more. That would need, according to Moore's Law, until 2012+ before a single-chip CPU could achieve that sort of performance)

i.e. there is no way you could get that kind of speedup via a dual- or quad-core CPU for the next 5 or 6 years (unless Intel/AMD reintroduce the math co-processor again).

Except for points 6 and 7 above, which are of no interest to me, I would buy this card without any hesitation.... ONCE THE SOFTWARE IS THERE.

>> Moral of the story: Ageia needs to get some software out into the market ASAP so this dialog of assumptions and guesswork can be replaced with cold facts.
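For what it's worth, the Moore's Law arithmetic behind that "2012+" claim checks out roughly. A hedged back-of-envelope sketch (the 18-month doubling period is the usual rule-of-thumb assumption, not a hard figure):

```python
import math

# Rough check: how long until CPUs gain a 10x ("one order of
# magnitude") speedup, assuming throughput doubles every 18 months?
speedup_needed = 10.0
doubling_period_years = 1.5

doublings = math.log2(speedup_needed)        # ~3.3 doublings needed
years = doublings * doubling_period_years    # ~5 years
print(round(years, 1))                       # -> 5.0
```

So from 2006 that lands around 2011-2012, in the same ballpark as the 5-or-6-year estimate above.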




By Clauzii on 3/25/2006 9:11:17 PM , Rating: 2
Hopefully soon..


RE: What else can this pretty baby do?
By dilz on 3/26/2006 2:43:31 AM , Rating: 2
While I'm sure the power of a so-called "physics processor" could no doubt be harnessed for other uses, this particular product has been marketed for a specific purpose that has not yet gained popularity.

Accordingly, most of the first-generation applications I would expect this product to support would show off its ability to process physics in gaming. A user base will develop only by forcing widespread adoption through fundamental differences in game programming and execution that can be exploited solely with such a solution.

Since these users will be making an initial investment/upgrade rather than incremental ones, it is important that as many people as possible purchase said product for a particular use rather than a variety of uses. By appealing to the lowest common denominator, it should follow that the greatest number of people would step in to buy the first iteration of this product, rather than creating a market of late adopters who would wait for a revision that could improve performance in the seven other tasks you have listed.

I'm not saying that an upgrade would be needed in order for improvement to be realized, but like you've mentioned, since there is no software yet, there is a chance that the market could split based on the performance offered now or later.

That said, I would love a boost to my distributed.net scores as well as any rendering, which would no doubt be possible under the conditions you describe. Hopefully our rampant discussion of possible uses as consumers will be quickly put to rest.


By Wwhat on 3/30/2006 5:25:21 AM , Rating: 2
Seeing that they keep the details secret and expose functions only through their API and SDK, I wonder if anything will ever come of alternative uses. It wouldn't be the first time a good hardware product was ruined by a lack of proper support and openness :/


Any details?
By saratoga on 3/23/2006 12:57:11 AM , Rating: 2
Those previews have virtually no information about the chip. They're just summarizing the marketing info.

Saying it's "30x" faster than some random CPU at some random task isn't very interesting. What kind of task? Working on what type of data? And at what throughput?

Basically I'm wondering what sort of code these chips can actually accelerate.




RE: Any details?
By Clauzii on 3/23/2006 1:03:27 AM , Rating: 2
Physics calculations


RE: Any details?
By obeseotron on 3/23/2006 2:31:28 AM , Rating: 2
You can go to their site or Google it, but basically it's lots and lots of FPU operations -- immense amounts of parallel, simple FPU stuff, as I understand it. It is very specialized, so it can do this one type of thing very quickly. Even the biggest Ageia booster has to realize that a PPU is not a viable long-term product. They are eventually going to be bought out by nVidia or ATI. The idea is great, but in the long term physics will either be done on the GPU or on a math co-processor in some future heterogeneous multi-core CPU (like Cell, but more practical).
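To illustrate the kind of workload meant by "parallel, simple FPU stuff" -- a generic sketch, not AGEIA's actual API -- consider an integration step over many bodies. Each body's update depends only on its own state, so a PPU/GPU can compute thousands of them at once:

```python
def euler_step(pos, vel, acc, dt):
    """One explicit-Euler integration step over many bodies.

    Every update is independent of the others, which is exactly
    what makes this trivially parallel; here we just loop.
    """
    new_vel = [v + a * dt for v, a in zip(vel, acc)]
    new_pos = [p + v * dt for p, v in zip(pos, new_vel)]
    return new_pos, new_vel

# two bodies falling under gravity
pos, vel = [0.0, 10.0], [0.0, 0.0]
acc = [-9.8, -9.8]
pos, vel = euler_step(pos, vel, acc, 0.1)
```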


RE: Any details?
By Clauzii on 3/23/2006 9:39:25 PM , Rating: 2
And maybe not. This thing is programmable, so it can be fitted to a great many different tasks that demand high-powered calculation.


*Flame suit on*
By hwhacker on 3/23/2006 7:56:35 AM , Rating: 2
This product is pretty cool, but I don't know if I like the marketing behind it. Sure it's nice that they give away their SDK (unlike Havok) but it shows in the price of the card...$250-300 for a 128mb pci (not -e) card. It may be impressive, but that's massively inflated...even though it's 130nm (expensively outdated). I suggest either working with someone to shrink the die to help lower production cost allowing for a cheaper product, or charging for the SDK to again, lower the price of the initial product (Although software supporting it may become more expensive). The current hardware/price just looks odd for a new product.

It could catch on, but with both ATi/Nvidia working hard on physics (with load balancing being key, unlike Nvidia's current solution) and multi-core GPU designs... I can't imagine it will be long before we have some kind of physics co-processor or on-die functions on their regular gfx products that equal this, or a gfx socket option (if that ever comes to pass). If AGEIA could do it at 130nm/125M transistors with 128mb of slow GDDR3 ram using the PCI bus, I'm sure somehow Nvidia/ATi will figure out a way to fit it on dual-core (dual socket?) 65nm parts (or smaller) using 512mb/1gb of GDDR4 (or faster)... cards that will probably be over 500M transistors. I'm sure they can squeeze it in, especially if you put those in Crossfire/SLI.

I guess I don't get this product, at least not at this price. If the card were cheaper, or if AGEIA allowed NVDA/ATYT to buy the rights to develop the hardware on-die or as an add-on, I guess I'd be more supportive... as that seems to me the more logical way to go about it if they want a large market.




RE: *Flame suit on*
By masher2 (blog) on 3/23/2006 8:41:44 AM , Rating: 3
> "$250-300 for a 128mb pci (not -e) card. It may be impressive, but that's massively inflated...even though it's 130nm"

I'm sorry, but this is really silly. You don't judge silicon by these factors, you judge by its performance.

Given that you'd have to purchase a 16-core Xeon server (at a cost of around $100,000) to even approach the performance of this part, I think $300 is certainly reasonable.

The only issue here is software support and real-world benefits from that performance. If those exist, it'll sell. If not, it won't.



RE: *Flame suit on*
By Wwhat on 3/23/2006 11:07:21 AM , Rating: 2
Although the originally shown ASUS card is indeed PCI, the first card AGEIA built was a flip-over design with PCI on one side and PCIe on the other. And seeing that BFG was only recently added to the manufacturer list, who knows -- maybe they will make a PCIe card instead of PCI, seeing that the concept design was already made.



Fun to program?
By exdeath on 3/23/2006 10:24:27 AM , Rating: 2
Wonder if it's as fun to program as the PS2's vector units.




RE: Fun to program?
By Cincybeck on 3/23/2006 10:46:00 AM , Rating: 1
I never programmed vector units, but with this I would say it depends on how realistic you want to be. Ideally you would program each substance (whether it be wood, metal, brick, or flesh =D) in the game with its realistic physical properties down to the molecular level, then program how the game's physical forces interact with and change each substance at that level, while leaving it open to be random in a way that best reflects reality. For instance, no two snowflakes are the same, or let's say no two raindrops ever splatter the same way. =D Now we're talking about some processing power.
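The per-substance idea can be sketched like this (property names and values are purely hypothetical, not from any real physics SDK):

```python
# Hypothetical per-substance property table; a real engine would feed
# something like this to the physics API for collision response.
MATERIALS = {
    "wood":  {"density": 700.0,  "restitution": 0.40},
    "metal": {"density": 7850.0, "restitution": 0.60},
    "flesh": {"density": 1000.0, "restitution": 0.10},
}

def bounce_height(material, drop_height):
    # A dropped object rebounds to roughly e^2 * h, where e is the
    # coefficient of restitution of the surface material.
    e = MATERIALS[material]["restitution"]
    return drop_height * e * e
```

Multiply that lookup-and-compute by tens of thousands of interacting objects per frame and the appeal of dedicated FP hardware is obvious.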


RE: Fun to program?
By masher2 (blog) on 3/23/2006 10:54:40 AM , Rating: 2
I've done a fair amount of physics simulation on massively parallel vector-based SIMD processors -- an architecture fairly similar to Cell.

Nearly always, there are library tools that shield you partially or entirely from dealing with the complexity. In many cases, you can write single-threaded code, which the compiler converts (via loop-unrolling or other techniques) into multithreaded, parallel-processing code.
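A rough illustration of that last point (generic Python, not any specific SIMD toolchain): you write the per-element function once, and the vectorizing layer -- plain `map` stands in for it here -- handles spreading the work across the data:

```python
# You write straightforward single-threaded, per-body code...
def advance(body, dt=0.01):
    x, v = body
    return (x + v * dt, v)   # each update is independent

bodies = [(0.0, 1.0), (5.0, -2.0)]

# ...and the compiler/library (loop unrolling, SIMD lanes, threads)
# maps it over all bodies without the programmer managing parallelism.
bodies = list(map(advance, bodies))
```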


Dual Core?
By tenguman on 3/23/2006 11:11:24 AM , Rating: 2
I thought dual-core processors were supposed to be the "next big thing" for PC gaming? We're already seeing it on consoles (360, PS3), so why can't physics be handled on the second core of an AMD or Intel chip?




RE: Dual Core?
By masher2 (blog) on 3/23/2006 11:17:15 AM , Rating: 2
> "so why can't physics be handled on the second core of a AMD or Intel chip? "

As already stated in the thread-- this customized ASIC is far more powerful than a second core. You'd need 30 or more cores together to approach the physical modelling capabilities of the Ageia chip.


RE: Dual Core?
By Cincybeck on 3/23/2006 11:18:23 AM , Rating: 1
Technically it can, but a specialized PPU such as Ageia's will do it better. They're talking 35,000 on-screen objects at a time. Think about HL2, limited to what, 20-30 objects at a time? I'm just throwing out numbers from what I saw in gameplay; anyone have exact figures? Compared to 35,000 it's a big leap, but only the beginning... if this takes off. Think about how much GPUs have increased in processing power since Voodoo.


It will sell if it's worth it.
By lifeblood on 3/23/2006 2:54:15 PM , Rating: 2
If the perceived benefit of the Ageia card equals or exceeds its cost, it will sell. In the long run Ageia itself may or may not survive, but the PPU probably will in some form or another. Its benefits seem too numerous.

For those worried about fitting the card on your motherboard, the solution is easy. Have a baby. Since the birth of my daughter I'm only allowed to play after she's in bed, and then only with headphones. No need for EAX, surround sound, or anything else that requires a fancy add-in sound card; onboard audio works just fine. The physics card can go where the audio card used to. Problem solved.




RE: It will sell if it's worth it.
By Clauzii on 3/23/2006 10:07:52 PM , Rating: 2
Nice, creative solution to the problem - and fun too ;)


More than one?
By Clauzii on 3/22/2006 11:56:35 PM , Rating: 2
I am wondering if it is possible to run more than one card in a machine at a time?




Shupoopie
By ComatoseDelirium on 3/23/2006 7:05:08 AM , Rating: 2
So.. how exactly do we read specifications on PPU's?

It's reasonable to think there will be settings, and the ability to overclock. The design looks promising; I'm glad it's on PCI.




Needs Havok support
By DigitalFreak on 3/23/2006 3:43:24 PM , Rating: 2
Considering that Havok seems to be the engine of choice among the big sellers (HL2, Oblivion), any PPU solution is going to need Havok support in order for it to take off. The Unreal3 engine appears to be the only big win for Ageia/PhysX.




Save Power!!
By Clauzii on 3/23/2006 9:20:53 PM , Rating: 2
The amount of (albeit specialized) power will be way up there :) How much power do 6 top CPUs use? 4-500 watts? An SLI or Crossfire card, ~80 watts.

Ageia - ~20 watts :)

Question: How many GFLOPS for these types of calculations do current GPUs deliver?




Moderated
By adis on 3/23/06, Rating: -1