



AMD's "Torrenza" platform mock-up
AMD opens up the Opteron architecture to other microprocessor R&D companies

Today AMD unveiled what it calls the evolution of enterprise-level computing: Torrenza. The new platform, says AMD, will use next-generation multi-core 64-bit processors that can work alongside specialized co-processors. DailyTech previously reported that AMD was considering working with co-processor design firms such as ClearSpeed to develop and design platforms that could use specialized processors for specific duties alongside the general host processor in a traditional Opteron socket.

With Torrenza, AMD has designed what it calls an open architecture, based on the next wave of Opteron processors, that allows what AMD calls "Accelerators." Using these add-in accelerators, a system will be capable of performing specialized calculations, similar in fashion to the way we use GPUs today.

Because of its flexibility, the HyperTransport protocol allows a multitude of co-processor designs that are already compatible with systems on other platforms. With Torrenza, for example, specialized co-processors can sit directly in an Opteron socket and communicate directly with the entire system. During the conference, Cray Inc. noted that it had worked with AMD to design a system that can contain up to three different co-processors, each dedicated to specialized tasks. All three would communicate harmoniously with the Opteron processors and the system chipset. The open-ended nature of Torrenza will allow companies to design specialized processors that plug in and work with Torrenza-enabled Opteron systems. Although AMD acknowledges that many of these applications can run over PCIe and other connection technologies, Torrenza emphasizes HT-3 and HTX in particular.

AMD representatives said that because of this architecture, Torrenza allows very low-latency communication between the chipset, the main processor and the co-processors. According to both Cray and AMD, applications can be written in a way that recognizes and fully uses all the various processing architectures. Torrenza-aware applications are on the way, said Cray, though the company admitted that developing them was very much "rocket science."
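Neither AMD nor Cray described what a "Torrenza-aware" application actually looks like in code, but the idea of recognizing and using whatever processing architectures are present can be sketched as a probe-and-dispatch pattern. Every name below (probe_accelerators, run_on_cpu, dispatch) is an illustrative invention, not a real AMD or Cray API:

```python
# Hypothetical sketch of a "Torrenza-aware" dispatch path: probe for a
# socketed accelerator, offload to it when present, otherwise fall back
# to the host Opteron. All names here are illustrative, not a real API.

def probe_accelerators():
    """Stand-in for platform enumeration; this sketch finds no coprocessors."""
    return []

def run_on_cpu(data):
    # Generic host-side fallback path (here, a toy square computation).
    return [x * x for x in data]

def dispatch(data):
    accels = probe_accelerators()
    if accels:
        return accels[0].run(data)   # offload to the specialized coprocessor
    return run_on_cpu(data)          # otherwise compute on the host CPU

print(dispatch([1, 2, 3]))  # -> [1, 4, 9]
```

The point of the pattern is that the application keeps working on systems without the accelerator, which is presumably why Cray called writing real versions of this "rocket science": the fast path and the fallback must produce identical results.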





How handy
By Spoonbender on 6/1/2006 6:08:33 PM , Rating: 3
Sounds like what Ageia should have used for their PPU




RE: How handy
By Clauzii on 6/1/2006 6:16:55 PM , Rating: 2
Who says they can't?

This is a good opportunity for Ageia if they play their cards right (npi).



RE: How handy
By Trisped on 6/1/2006 6:58:06 PM , Rating: 2
I don't know very many people who want their $10k+ servers to run a physics co-processor.

If Ageia wanted to run in high-bandwidth, low-latency setups they should have used PCIe rather than standard PCI.
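To put rough numbers on the PCI-versus-PCIe point, here is a back-of-the-envelope comparison using the peak figures from the period's specs (bandwidth only; it says nothing about the latency argument):

```python
# Peak-bandwidth comparison, period-correct (2006) figures.

# Conventional PCI: 32-bit bus at 33.33 MHz, shared among all devices
# on the bus.
pci_mb_s = 32 / 8 * 33.33                  # ~133 MB/s total

# PCIe 1.x: 2.5 GT/s per lane, 8b/10b encoding (8 payload bits per
# 10 transmitted), per direction, per lane -- and not shared.
pcie_x1_mb_s = 2.5e9 * (8 / 10) / 8 / 1e6  # 250 MB/s per direction
pcie_x4_mb_s = 4 * pcie_x1_mb_s            # 1000 MB/s per direction

print(round(pci_mb_s), round(pcie_x1_mb_s), round(pcie_x4_mb_s))
# -> 133 250 1000
```

So even a single PCIe lane nearly doubles what the entire shared PCI bus offers, which is the substance of the complaint about Ageia's choice.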


RE: How handy
By beemercer on 6/1/2006 7:20:51 PM , Rating: 3
Also, I don't think many consumers/gamers will be running multi-socket Opteron boards.


RE: How handy
By wingless on 6/1/2006 8:33:28 PM , Rating: 2
I'm a consumer and I think it would be a great idea for my home PC. We all once added in a co-processor called a GPU, and they're bandwidth hungry. If all our important "co-processors" had a spot on the board and a HyperTransport pipe to the CPU, then the processing power would be insane. AMD has thought this technology through, and we all know they're making a pretty decent decision by coming out with this tech.


RE: How handy
By Hare on 6/2/2006 4:18:41 AM , Rating: 2
Latency would improve, but it's not a real concern when it comes to GPUs. Besides, we can't even saturate AGP or PCIe with current GPUs, so there's no real need to add another socket for direct HyperTransport.


RE: How handy
By Burning Bridges on 6/8/2006 8:10:50 AM , Rating: 2
AGP can and has been saturated :)

and there are reports that Crossfire setups are also becoming bandwidth limited :P


RE: How handy
By peternelson on 6/1/2006 8:44:13 PM , Rating: 2

Yeah, Ageia should have gone PCIe from launch. Or, at a minimum, both PCI and PCIe.

I will not buy one unless/until they bring out a pcie edition.


RE: How handy
By kitchme on 6/1/2006 11:14:05 PM , Rating: 2
I agree. Or at least they should have offered two versions (like video cards) - PCI and PCIe. But I guess to save money and to cover people who still have AGP (still a great deal), they decided to release only PCI.
I also don't understand why there isn't more stuff for all those empty 1x/4x PCIe slots.


RE: How handy
By cnimativ on 6/2/2006 12:12:40 AM , Rating: 1
Probably because most mobo designers are not smart enough to account for the two-slot space that most graphics cards use.

How many times have you seen a PCIe 1x/4x card being blocked by your graphics card's huge heatsink+fan?


RE: How handy
By poohbear on 6/2/2006 5:02:11 AM , Rating: 2
Dude, people still using AGP aren't gonna buy a $300 physics processor. Get real. They used PCI because the bandwidth it provides is enough, simple as that.


RE: How handy
By Viditor on 6/2/2006 3:16:00 PM , Rating: 2
quote:
Don't know very many people that want their $10k+ servers to run a physics co-processor

What about their $10k+ graphics workstation?


RE: How handy
By xdrol on 6/4/2006 6:17:26 AM , Rating: 2
PCIe is anything but low latency.


RE: How handy
By Shenkoa on 6/4/2006 1:45:05 AM , Rating: 2
Faster and faster and faster and faster. I wonder if we will ever hit a wall.


RE: How handy
By tech53 on 6/9/2006 6:11:46 AM , Rating: 2
Yeah. We'll solve issues, hit problems, get stuck for a while, until eventually the mechanical parts near light speed and we hit the event horizon and have to start accounting for what special relativity says about time. That will be a huge issue. When time is running at one rate on the computer's end and a different rate at our end, how do we account for certain things and monitor in real time? We'll have to learn how to use it to our advantage, i.e. do the inverse and speed time up on the computing end to enable massive leaps in technology on our end (what if you could get 100 years of processing done in one second? CPU speed and technology leaps become irrelevant at that point). If you don't understand cosmology, please don't say I'm a quack. Go look at what happens to time when you near a black hole, or what happens when you near light speed. We really do need to at least begin to consider different approaches as options as computing as a science is explored and exploited to its fullest. I'll reiterate: if I could use something like that to change our perspective on computing, hardware tech and developers as we know them would become obsolete, but how far would technology leap?
Yes, this was a bit random, but I'm an astronomer and a tech, so I tend to combine my knowledge of both. Heck, they've already created a micro black hole in a lab (an incredibly bad idea, I might add); we are approaching the time when things like this will be possible. Time travel is impossible. To warp spacetime is. LOL, I'm WAY off topic.


RE: How handy
By outsider on 7/26/2006 6:07:54 PM , Rating: 2
100 years of processing in one second would only get you a burned CPU in a second :)
Your thread reminds me of Prince of Persia: Warrior Within. You go forward in time, and everything is in ruins.
Anyway... although you don't see scientists talking much about hitting the event horizon, they are dealing with problems nowadays too. I am sure your concerns have been thought through more than a million times by genius electronic engineers. It's just not worth investing in right now.
We're not talking about something that could require decades to develop; we're talking about what right now we describe as impossible in our existence.

PS: I would never let you put a black hole in my computer to alter time :D



Copyright 2014 DailyTech LLC.