(Image: BFG AGEIA PhysX Accelerator)
The physics battle heats up

FPS Labs is reporting that NVIDIA will soon announce its intention to acquire AGEIA. FPS Labs' Stu Grubbs offers no financial terms for the deal, but notes that his confidential source expects it to become official sometime this week.

AMD batted around the idea of purchasing AGEIA in November 2007, but considering that the company is still recovering from its ATI acquisition, that idea was put to rest rather quickly. It should be interesting to see how AMD will respond to the news if the announcement comes this week -- especially considering that AMD has already declared GPU-based physics dead.

A successful acquisition of AGEIA would give NVIDIA the firepower to go up against Intel, which purchased physics software developer Havok in September 2007.

The last time that DailyTech covered AGEIA, its PhysX 100M mobile physics processor was sharing chassis space with dual GeForce 8700M GT graphics cards in Dell's $2,700 XPS M1730 World of Warcraft Edition notebook.

If NVIDIA has its way, NVIDIA GPUs and chipsets may have an even closer synergy with AGEIA hardware in the future.


Updated 2/4/2008
Just moments after this story went live, NVIDIA sent us the official PR announcing its acquisition of AGEIA (further details will come forth on Wednesday):
NVIDIA, the world leader in visual computing technologies and the inventor of the GPU, today announced that it has signed a definitive agreement to acquire AGEIA Technologies, Inc., the industry leader in gaming physics technology. AGEIA's PhysX software is widely adopted with more than 140 PhysX-based games shipping or in development on Sony PlayStation 3, Microsoft Xbox 360, Nintendo Wii and gaming PCs. AGEIA physics software is pervasive with over 10,000 registered and active users of the PhysX SDK.

“The AGEIA team is world class, and is passionate about the same thing we are—creating the most amazing and captivating game experiences,” stated Jen-Hsun Huang, president and CEO of NVIDIA. “By combining the teams that created the world’s most pervasive GPU and physics engine brands, we can now bring GeForce®-accelerated PhysX to hundreds of millions of gamers around the world.”

“NVIDIA is the perfect fit for us. They have the world’s best parallel computing technology and are the thought leaders in GPUs and gaming. We are united by a common culture based on a passion for innovating and driving the consumer experience,” said Manju Hegde, co-founder and CEO of AGEIA.


Comments

Hmm...
By retrospooty on 2/4/2008 5:39:26 PM , Rating: 5
"If NVIDIA has its way, NVIDIA GPUs and chipsets may have an even closer synergy with AGEIA hardware in the future."

If this means better physics are going to be integrated into the same card (and hopefully chip), then that's great. If not, forget it. Not many people will buy it as an extra card, which is the same issue Ageia faces today.




RE: Hmm...
By SirLucius on 2/4/2008 5:58:55 PM , Rating: 3
Agreed. If they just release separate physics cards under the Nvidia name, they'll run into the same exact problem Ageia is facing now. Nobody wants to buy a dedicated physics processor if no games support advanced physics, but no developers will jump on the physics bandwagon if nobody has the hardware to support it.

Hopefully Nvidia will figure out a way to integrate physics into their new GPUs without significantly driving up costs. That way more people will have physics-capable hardware without having to spend more money and take up another PCI slot in their cases. Once people have the hardware, I think we'll see more developers implement physics in their games.


RE: Hmm...
By TomZ on 2/4/08, Rating: 0
RE: Hmm...
By DOSGuy on 2/5/2008 1:46:22 AM , Rating: 4
A dedicated processor will destroy a general-purpose processor in any test. CPUs can go quad-core, and graphics cards can go SLI/CrossFire, but they will always lose out to a dedicated processor. No matter how many cores CPUs end up having, someone will devote an entire die to that task, whether it be hardware decryption for set-top boxes or floating-point math co-processors from ClearSpeed. Dedicated processors also achieve that performance at a lower cost, drawing less energy than it would take to reach the same performance with a bunch of general-purpose processors.

Instead of adding more cores, CPU/GPU companies traditionally make their processors more competitive with dedicated processors by adding dedicated hardware to their general-purpose designs. If physics is important to Intel or AMD, they could add a "physics processing unit" or special instructions designed to speed up that particular task. VIA added dedicated encryption/decryption hardware to its CPUs, and ATI and Nvidia both added hardware video encoding for media content creation to their GPUs. Nvidia can now incorporate PhysX logic into its GPUs if it chooses to. I won't predict what their plan is, but I can assure you that any dedicated chip they produce, or dedicated physics processing unit they add to their GPUs or chipsets, will blow away any general-purpose processor at the same task, no matter how many cores it has.
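
[Editor's note: to make the "special instructions" idea concrete, here is a minimal sketch, not from the comment itself. x86 SSE intrinsics let a general-purpose CPU process four floats per instruction, which is the same idea as a dedicated PPU, in miniature.]

#include <xmmintrin.h>  // SSE intrinsics, available on any modern x86 compiler

// Integrate particle positions (pos += vel * dt), four floats at a time.
// A dedicated PPU bakes this kind of wide parallel math into silicon;
// SSE is the CPU's small step in the same direction.
void integrate(float* pos, const float* vel, float dt, int n) {
    __m128 vdt = _mm_set1_ps(dt);               // broadcast dt into all 4 lanes
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 p = _mm_loadu_ps(pos + i);       // load 4 positions
        __m128 v = _mm_loadu_ps(vel + i);       // load 4 velocities
        p = _mm_add_ps(p, _mm_mul_ps(v, vdt));  // pos += vel * dt, 4-wide
        _mm_storeu_ps(pos + i, p);              // store 4 results
    }
    for (; i < n; ++i)                          // scalar tail for leftovers
        pos[i] += vel[i] * dt;
}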


RE: Hmm...
By kelmon on 2/5/2008 3:16:22 AM , Rating: 2
Relatively unimportant. Sure, a dedicated processor will be faster than a general-purpose one, but I already have 2 processors in my laptop and desktops have as many as 8 cores now, so I'd rather make better use of existing resources than introduce something that will almost never be used.


RE: Hmm...
By mindless1 on 2/6/2008 11:23:49 AM , Rating: 2
If that were feasible they'd do it, but they have found there is not enough real-time processing power in the CPU because it was not designed for this. Since it is a feature only gamers would make much use of, it makes more sense for the cost to be borne by video card buyers rather than everyone, as it would be if it were integral to the CPU design. I suppose we could instead argue about whether there should be more forks in processor offerings, as there are in video card offerings, but that seems too much of a niche market for a CPU manufacturer, while it is exactly what a video card manufacturer thrives on.


RE: Hmm...
By Visual on 2/5/2008 5:08:55 AM , Rating: 2
But the problem is, how dedicated is this thing, really?
Look at graphics processors, for example: they are now highly programmable and are more or less becoming general purpose.
So wouldn't it be natural for the physics processor's evolution to go the same way? You'd keep trying to make it more universal and general, and eventually it won't be too different from a CPU or GPU.
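
[Editor's note: a minimal sketch of why that convergence is plausible. Most physics work is a small per-element function applied independently across thousands of objects, exactly the pattern a programmable GPU executes as a shader. The Particle and step names below are invented for illustration.]

#include <vector>
#include <cstddef>

struct Particle { float x, y, z, vx, vy, vz; };

// Per-element "kernel": on a programmable GPU this same body could be
// compiled as a shader and launched once per particle.
inline void step(Particle& p, float dt) {
    p.vy -= 9.81f * dt;   // gravity
    p.x  += p.vx * dt;
    p.y  += p.vy * dt;
    p.z  += p.vz * dt;
}

void simulate(std::vector<Particle>& particles, float dt) {
    // Each iteration is independent of the others, which is what lets the
    // same loop run serially on a CPU or massively parallel on a GPU/PPU.
    for (std::size_t i = 0; i < particles.size(); ++i)
        step(particles[i], dt);
}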


RE: Hmm...
By qwertyz on 2/5/08, Rating: -1
RE: Hmm...
By murphyslabrat on 2/5/2008 10:53:32 AM , Rating: 5
That, my friend, was a very bold statement.


RE: Hmm...
By tastyratz on 2/6/2008 3:22:54 PM , Rating: 2
Not entirely; it was more blunt and simplified than plain bold.
It sounds like what he is saying describes a very realistic line for Nvidia to follow. Any dedicated circuitry increases costs. There is a very strong chance that Nvidia will simply release higher-end GPUs with physics processing, or put AGEIA's chip onboard, making the video card/physics processing unit a single offering. This would likely be dropped from lower-tier products to save on costs.


RE: Hmm...
By BansheeX on 2/5/2008 3:50:44 AM , Rating: 2
quote:
Nobody wants to buy dedicated physics processors if no games support advanced physics. But no developers will jump on the physics bandwagon if nobody has the hardware to support it.


Welcome to the PC model of gaming.


RE: Hmm...
By retrospooty on 2/5/2008 8:35:13 AM , Rating: 2
To an extent, maybe... With a small third-party startup like Ageia using a proprietary hardware/software package, that is true. But OpenGL in the past, and DirectX today, prove that theory wrong.

By a large margin, the best graphics have always been on the PC, and still are today.


RE: Hmm...
By Omega215D on 2/4/2008 5:59:09 PM , Rating: 2
Imagine nVidia integrating it into a chip on the motherboard, like SoundStorm in the southbridge; now that would be pretty cool. Or maybe a dual-core GPU: one core for graphics and the other for physics. However they do it, I hope it becomes a real product before games like Alan Wake. =D


RE: Hmm...
By StevoLincolnite on 2/5/2008 8:23:31 AM , Rating: 2
Or integrating it into the GPU and having it enabled in Hybrid SLI. A boon for laptop users as well, hopefully with no additional cost to the end user.


RE: Hmm...
By AggressorPrime on 2/4/2008 8:02:27 PM , Rating: 2
G92 transistor count: 754 million
PhysX transistor count: 125 million (~17%)

Yes, putting the PhysX PPU on the same package as the G92 would be worthwhile (like Intel's placement of GPUs on future low-end CPUs). All nVidia needs to do is move the PhysX chip from 130nm to 65nm; since die area scales roughly with the square of the feature size, that shrink would cut the PPU to about a quarter of its original area. Then PhysX will automatically be included with every nVidia GPU as another feature that AMD doesn't have.

Integrating the PPU elements into the GPU die may take a little more work, though.

Hopefully this will mean PPUs, or at least the elements that make them up, become more powerful and more popular, leading to much more realistic gameplay physics.


RE: Hmm...
By Polynikes on 2/4/2008 9:05:37 PM , Rating: 1
I'm afraid they'll make the PPU standard on all enthusiast video cards, driving up prices, when most people won't want one on a video card. Very few games use PhysX as it is.


RE: Hmm...
By retrospooty on 2/4/2008 11:11:21 PM , Rating: 4
" Very few games use PhysX as it is."

Chicken and egg... Very few games use it, because very few people bought it. Developers lose time and money programming features that are used by a small percentage of the market. This is why games arent released on MAC's. The hardware has always been similar and capable, but its just too small to bother with and still make a profit. If it were standard on Nvidia chips, developers would use it to death.


RE: Hmm...
By roadrun777 on 2/4/2008 11:28:30 PM , Rating: 2
I think you're making a lot of assumptions here.
I suspect the reasoning behind these decisions has to do with the programming interface and the parallel-cores problem.

If you could virtualize several of the standard physics functions and tie that into the shader, you could create a programming kit that would let programmers concentrate on the game experience rather than the heavy math involved in writing complex shader code, for effects that could be achieved with a single line of code using a physics processor.
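
[Editor's note: such kits exist in spirit already. Physics SDKs expose high-level scene and body calls that hide the solver, whether it runs on the CPU, a shader, or a PPU. Below is a hypothetical sketch; the PhysicsScene and createRigidBody names are invented for illustration and are not AGEIA's actual API.]

#include <memory>
#include <vector>

struct Vec3 { float x, y, z; };

struct RigidBody {
    Vec3 position{}, velocity{};
    float mass = 1.0f;
};

class PhysicsScene {
    std::vector<std::shared_ptr<RigidBody>> bodies;
    Vec3 gravity{0.0f, -9.81f, 0.0f};
public:
    // The "single line of code" for the game programmer; where the math
    // actually runs (CPU, shader, or dedicated hardware) is hidden here.
    std::shared_ptr<RigidBody> createRigidBody(Vec3 pos, float mass) {
        auto b = std::make_shared<RigidBody>();
        b->position = pos;
        b->mass = mass;
        bodies.push_back(b);
        return b;
    }
    void simulate(float dt) {   // advance the whole scene one timestep
        for (auto& b : bodies) {
            b->velocity.y += gravity.y * dt;
            b->position.x += b->velocity.x * dt;
            b->position.y += b->velocity.y * dt;
            b->position.z += b->velocity.z * dt;
        }
    }
};

[A game would call createRigidBody once per object and simulate once per frame; the heavy math never appears in game code.]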


RE: Hmm...
By roadrun777 on 2/4/2008 11:37:26 PM , Rating: 2
Oh, and as far as it costing more to manufacture and driving up the price: I don't see that happening. The actual cost to produce these components is minuscule compared to the cost of doing business. If you just count manufacturing costs and materials, that $400 board costs around $25 to make (including labor). Once you add in licensing, research and development, and the standard bureaucratic pork, then, and only then, do you get these inflated costs to manufacture.


RE: Hmm...
By Polynikes on 2/5/2008 4:22:16 PM , Rating: 1
quote:
The actual cost to produce these components is minuscule to the cost of doing business.

So why do many Apple products carry such a price premium when comparable products are much cheaper?

It doesn't matter how much it costs Nvidia, it matters how much it costs me.


RE: Hmm...
By mindless1 on 2/6/2008 11:33:09 AM , Rating: 2
Then you'd be wrong. Increasing die size and power consumption is a primary differentiator between inexpensive and expensive video cards. All the additional R&D, including developing the software/drivers, will have inherent costs as well.

All these ideas about manufacturing and other business costs don't change the fact that one video card ends up costing several times what another does. "Driving up the price" is a relative concept, though, depending on who's deciding it's not worth their money.


RE: Hmm...
By TimberJon on 2/6/2008 3:12:44 PM , Rating: 2
Not to mention it would simplify and reduce the cost of production for motherboard manufacturers. Perhaps a little more engineering on the architectural side, but as far as an additional slot and its routing goes, that's just one less component.


"Mac OS X is like living in a farmhouse in the country with no locks, and Windows is living in a house with bars on the windows in the bad part of town." -- Charlie Miller

Related Articles













botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki