
Folding@Home, with HDR visualizations!

Future PS3 owners can rejoice: Folding@Home will be available for the PS3 console. The Cure@PS3 project puts the PS3's Cell processor to good use folding proteins, which is somewhat in line with Sony's overconfident claim that the PS3 will render the PC useless. Folding@Home performance from the Cell processor is expected to be around 100 gigaflops per PS3 console, topping out at one petaflop with 10,000 PS3s.

In addition to its 100-gigaflop protein folding capability, Cure@PS3 will have enhanced visualization features that take advantage of the HDR and isosurface rendering features of the NVIDIA RSX graphics processor. Molecules can also be navigated in real time using the PS3's motion-sensitive controller, allowing users to view the proteins from different angles.

With the PS3's high price pushing away developers, PS3 owners will at least have a way to put all those unused processor cycles to use.



RE: XBox 360?
By ZeeStorm on 8/24/2006 1:09:00 PM , Rating: 0
At least with this, Sony can expect people to keep the PS3 on (possibly) overnight, and reassure people that the system won't get too hot (unlike the X360, which caught a house on fire and/or fries itself). Let's just *hope* that Sony thinks of this and doesn't cheap out on keeping this sucker cool.

I'm glad to see the "modders" and "hobbyists" who rebuilt the cooling system in the X360 with water cooling; that's great.

I wonder if the XNA/M$ dev kit will give enough functionality to run Folding@Home. That would stop the bickering over which system is more powerful, since users themselves could see the difference instead of all this "proposed cycle speed" and "processing power" hype between the X360's "PC killer" triple-core processor (made by IBM -- keep that in mind, can we say G6s?) and the "wonder processor" Cell (which can theoretically stack like crazy compared to most processors). I want to see facts from a real-world application that does the same thing on each system, and this is a great chance to do just that.

My last question/comment/statement: is this Folding@Home project for the PS3 native to Sony? Or were they given/bought/sold PS3 dev kits to build this stuff? I do remember Sony talking about natively supported Linux applications, so maybe it just runs from that, and they built the application on the HDD or licensed F@H to press discs. Interesting indeed...

RE: XBox 360?
By Vertigo101 on 8/24/2006 1:18:22 PM , Rating: 2
It's more like a custom G5, and you do realise that the Cell is just one of the 360's cores with those dang SPEs around it, right?

Also, my 360 is on almost constantly, and has never had a problem.

RE: XBox 360?
By ZeeStorm on 8/24/06, Rating: 0
RE: XBox 360?
By exdeath on 8/25/2006 11:27:47 AM , Rating: 2
Well, for one, the "Cell" is having problems... going from 3.2 GHz to 2.7 GHz (?) and eliminating SPUs to improve yields, so the peak power will be short of the claims; typical Sony hype. Wasn't the PS2 1000 times more powerful than a PC? *snicker* I'm sure it was on paper at the time it was announced, but it was dwarfed before it even came out. I could say I'm gonna have a console that does 10,000 TFLOPS and pwns supercomputers, but one small detail I'll leave out or lead you away from is that it won't be out till the year 2090, when that type of hardware will be mainstream or obsolete. But it sounds good TODAY, and you are too high on the hype to realize this.

Second, you are limited in what you can do on the SPEs; they have very limited instruction sets and memory access restrictions as far as I know. The overhead of breaking up tasks to that level -- thread management, data synchronization, etc. -- will be high, especially for games, due to the linear dependencies and the requirement of frame consistency. Especially with two types of CPUs (main CPU and SPEs), the main CPU will have to treat SPE threads as data blocks that are copied to SPE memory before execution, with conversion of data, etc... I'd much rather have 3 REAL CPUs than 7.

Third, there is a big flaw that still exists, as far as I know, involving the main CPU reading from each SPE's 256 KB memory: reads come back at something like 4 MB/sec... and I'm not sure whether the SPEs can write results to main RAM or push GPU packets directly, so how useful are the SPEs if you can't read the results back quickly?

And you don't need Core 2 to smoke a G5; the K8 did that well enough, even besting a dual G5 in some cases. With the Xbox 360 and PS3 it's even more so, because in order to make room for those cores they removed a lot of the advances in CPU technology since the 486. So while you have 3 cores at high clock speeds, they are very simplified fetch-decode-execute pipelines with none of the speculation, prediction, or modern scheduling hardware that makes modern PCs as fast as they are.


I've been researching multithreading for gaming. The problem with threading is frame consistency. Dependent objects will read another object's state at the same time that state is being changed by another thread. At the very least, synchronization is required so we don't read values from two different states before the state change has finished -- for example, getting an object's freshly updated x but the old y. The other problem is that even if individual state changes are locked all-or-nothing, the total state of an object can change between accesses by other objects due to unpredictable thread parallelism. That is, of two cats watching the same bird, one could see it sitting while the other sees it flying in the same frame. As any game programmer knows, this isn't good, as a single frame should represent a fixed snapshot in time. So we end up with so many locks and waits on critical sections to synchronize things that we lose any benefit of thread parallelism.

The problem is similar to screen tearing: the state of the frame buffer changes in the context of a single frame, so the top half is the old frame and the bottom half is the new frame. We can solve the problem by applying the same concept we use with frame buffers to avoid tearing: double buffering any data that must remain unchanged in the context of a single frame!

AI and physics can all be done in parallel micro threads by double-buffering frame state data and pipelining updates to maintain a static frame state between modules in the context of a single frame. In other words, work on frame n+1 in each entity's 'back buffer' while the entity front buffers remain read-only from frame n+0 and feed back into the physics/AI whenever 'current' state is needed. So now, while an object may be updating its back buffer, anything reading 'current' state for the frame gets the unchanging front buffer, which remains constant throughout the frame. In this way, both cats will see the bird in the walking state in that frame and both decide to attack. The bird will see both cats attacking in the next frame and fly away, and both cats will see it flying away at the same time.

This way you can have hundreds of micro threads, each running an AI or physics computation completely independently and updating data in the hidden back buffers while the 'front buffers' remain constant throughout the frame. No syncing, locking, or waiting except between frames when you swap. Just start 100 threads in parallel and call WaitForMultipleObjects() before SwapBuffers().

RE: XBox 360?
By exdeath on 8/25/2006 11:29:41 AM , Rating: 2
"I'd much rather have 3 REAL CPUs than 7."

edit to:

"I'd much rather have 3 REAL CPUs than 7 limited ones"

RE: XBox 360?
By xbdestroya on 8/24/2006 1:38:06 PM , Rating: 2
Sony seemingly developed the software themselves.

A more comprehensive article on the matter:

RE: XBox 360?
By ZeeStorm on 8/24/06, Rating: -1

Copyright 2015 DailyTech LLC.