



[Image: Traditional PCI Express backplane technology]
[Image: New switched and virtualized backplane technology]
[Image: Of course compatible with VT technology too]

At IDF this week, Intel is talking about a new technology that lets systems built on PCI Express take a different approach to attaching hardware devices. Called Multi-Root I/O Virtualized PCI Express, the technology introduces switching into the traditional PCIe pathway.

The technology is being introduced for high-density cluster systems, usually made up of a number of blade servers. Traditionally, each blade would have its own PCIe pathway and its own devices connected to it. If a cluster contained 10 blade servers, each server would have its own gigabit Ethernet device, its own InfiniBand device and its own Fibre Channel device. With this topology, management becomes difficult because each I/O technology is duplicated on, and managed separately for, every blade.

Using the Multi-Root switching technology, each blade server connects to the same PCIe interface, and each blade (a root) receives its own virtualized PCIe hierarchy (virtual plane). Every blade connected to the Multi-Root switch can then share the same pool of I/O devices, and the cluster's I/O can be managed centrally.
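As a rough illustration of the idea (a hypothetical Python sketch, not Intel's implementation; all class and device names are invented), a multi-root switch hands each blade its own virtual hierarchy while all of those hierarchies are backed by one shared pool of physical I/O devices:

    # Minimal sketch of multi-root I/O sharing: each blade (root) gets its own
    # virtual PCIe hierarchy, backed by a shared pool of physical devices
    # attached to the multi-root switch. Illustrative only.

    class PhysicalDevice:
        def __init__(self, name):
            self.name = name  # e.g. "GbE-0", "InfiniBand-0", "FibreChannel-0"

    class VirtualHierarchy:
        """What one blade sees: a device tree that looks private to it."""
        def __init__(self, blade_id):
            self.blade_id = blade_id
            self.virtual_devices = {}  # virtual device name -> physical device

    class MultiRootSwitch:
        def __init__(self, shared_devices):
            self.shared_devices = shared_devices
            self.hierarchies = {}

        def attach_blade(self, blade_id):
            """Create a virtual plane for a new root (blade)."""
            vh = VirtualHierarchy(blade_id)
            # Expose every shared device to this blade as if it were local.
            for dev in self.shared_devices:
                vh.virtual_devices[f"{dev.name}.vf{blade_id}"] = dev
            self.hierarchies[blade_id] = vh
            return vh

    # One GbE, one InfiniBand and one Fibre Channel adapter shared by ten
    # blades, instead of ten of each.
    switch = MultiRootSwitch([PhysicalDevice("GbE-0"),
                              PhysicalDevice("InfiniBand-0"),
                              PhysicalDevice("FibreChannel-0")])
    for blade in range(10):
        switch.attach_blade(blade)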

Intel says one of the best aspects of the new technology is that it requires no software upgrades or modifications to system BIOSes. Operating system and driver layers can also remain unmodified. Virtualized operating systems will be able to take advantage of the new Multi-Root I/O virtualization as well.


Comments

Step Backwards?
By Sahrin on 9/26/2006 1:48:49 PM , Rating: 2
I'm not saying this approach can't work or isn't cost effective, but isn't this exactly the problem with Intel's FSB approach? It's not so much that there isn't enough bandwidth, since you can always increase bit width or operating frequency; it's that you simply have too many devices making too many calls on too few resources. Ten blades, with all their internal components, all using one interconnect to the communication channels? Isn't high-density computing about performance per watt, not price per performance?




RE: Step Backwards?
By Doormat on 9/26/2006 2:01:26 PM , Rating: 2
That's one of the first things I thought of. Imagine 10 blades accessing a single GbE port, or even two. 2 Gb/s for 10 machines? For really low-priority stuff that's OK (a license manager, for example), but for anything important it doesn't seem like a good idea.
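For scale, here is the back-of-the-envelope split the comment is describing, assuming ten blades share two gigabit ports evenly (hypothetical numbers, no protocol overhead):

    # Rough per-blade share if ten blades split two shared gigabit Ethernet
    # ports evenly (ignores protocol overhead and bursty traffic).
    blades = 10
    shared_ports = 2
    port_speed_gbps = 1.0

    per_blade_gbps = shared_ports * port_speed_gbps / blades
    print(f"{per_blade_gbps * 1000:.0f} Mb/s per blade")  # 200 Mb/s per blade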


RE: Step Backwards?
By Bluestealth on 9/26/2006 6:02:50 PM , Rating: 2
Considering that the uplink out of the rack is usually the more limiting factor, not really. Also, with InfiniBand it would be able to get 20 Gb/s per port, which should be enough for almost anything.
I imagine that computer-to-computer traffic within the rack would use the virtualized PCIe I/O.
Some network person correct me if I am wrong :)


RE: Step Backwards?
By ZeeStorm on 9/26/2006 3:00:46 PM , Rating: 1
Isn't this just a bottleneck and going to cause more problems?

This seems like it will end up hurting SLI configurations more than it will ever help. Making something simpler doesn't always make it better.


RE: Step Backwards?
By Phynaz on 9/26/2006 3:07:32 PM , Rating: 5
This has got nothing to do with FSB or anything like it.

This is about having multiple VMs running over the enterprise network.

Think of it this way: you have that nice blade system running 10 VMs, with a separate application on each VM. How the heck do these things talk to the network and to other systems and applications?

This allows each VM to have a unique virtual network interface that is recognized by the rest of the network, i.e. the router sees 10 MACs and 10 IPs, the VMs each think they have their own network card, and all traffic is sent to the correct place.

This does solve a problem with current VM implementations.
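As a rough sketch of what the comment describes (hypothetical Python, not any real hypervisor or driver API), each VM can be handed a virtual NIC with its own MAC address, so the rest of the network sees ten distinct endpoints behind one physical adapter:

    # Illustrative only: give each VM a virtual NIC with a unique MAC so the
    # network sees separate endpoints behind one shared physical adapter.
    def make_virtual_nics(physical_nic, vm_count):
        nics = []
        for i in range(vm_count):
            nics.append({
                "vm": f"vm{i}",
                "backing_device": physical_nic,
                # Locally administered MAC range, one per VM (made-up scheme).
                "mac": f"02:00:00:00:00:{i:02x}",
            })
        return nics

    for nic in make_virtual_nics("GbE-0", 10):
        print(nic["vm"], nic["mac"], "->", nic["backing_device"])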


RE: Step Backwards?
By JeffDM on 9/26/2006 9:58:38 PM , Rating: 2
I really don't see the problem if the PCIe switch operates like a crosspoint switch. The diagram may be an unfortunate mistake; it probably shouldn't have merged the links before they connect to the PCIe switch. PCIe doesn't act like a multi-drop bus the way an FSB might; it is a point-to-point system, which lends itself very well to crosspoint switches. That should mean no contention between a computer and its assigned peripheral.
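To illustrate the crosspoint point in code (a toy model, not how a real PCIe switch is programmed): each blade gets its own dedicated path to its assigned peripheral, so there is no shared bus to contend for.

    # Toy model of a crosspoint switch: one dedicated path per blade/peripheral
    # pair, so traffic from one blade never shares a link with another's.
    class CrosspointSwitch:
        def __init__(self):
            self.paths = {}  # blade -> peripheral

        def connect(self, blade, peripheral):
            if peripheral in self.paths.values():
                raise ValueError(f"{peripheral} already has a dedicated path")
            self.paths[blade] = peripheral

    switch = CrosspointSwitch()
    switch.connect("blade-0", "GbE-0")
    switch.connect("blade-1", "GbE-1")  # independent path, no contention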


Parallel?
By shabodah on 9/26/2006 2:57:45 PM , Rating: 2
As said above, isn't this stepping back from a direct-connect serial system to a parallel one?




RE: Parallel?
By Goolic on 9/26/2006 9:34:13 PM , Rating: 2
I don't think so. From the illustration, I think all blades will still get the same PCIe bandwidth...

Look at it this way: you have 50 blades, each with two chips for 10Gb Ethernet. That's, what, at least $6 per chip (and more on the energy bill). With this system you start with only 25 chips, and as your bandwidth needs grow you buy one more module.

If your application is more CPU-intensive than bandwidth-intensive, you save some 80 bucks and thousands in energy bills ($3 of cooling for every $1 of power) over 8 years of use.

I think the energy savings are the main point. Don't hold me to the math... I never checked it.


No, not backward
By Flunk on 9/26/2006 4:37:10 PM , Rating: 2
This is a good idea because it allows you to share hardware that may not be needed by every node all the time. No one is saying you need to have any less hardware than before, you can still have one network card per blade as before.

But say you want to connect to 3 of the blades at the same time via telnet. This way you just have to connect one Ethernet cable to one port and you can reach all of them, instead of having to wire them all up separately.

Also, this approach could allow for some interesting dynamic load balancing. Say you have 20 blades and 20 gigabit NICs shared between them, and blades 1-10 are not handling any requests at the moment. That gives blades 11-20 twice the bandwidth to fulfill their current requests than they would have under the current arrangement.

Basically, no, this will not be a hindrance to performance, because if you put in the same hardware there shouldn't be any reason for decreased performance. The only issue would be if the people who spec out the system do so poorly, which is a problem today anyway.
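The load-balancing example above works out like this (a quick sketch using the commenter's numbers; a real scheduler is obviously more involved):

    # 20 blades share a pool of 20 gigabit NICs; when half the blades go idle,
    # the active half can draw on their unused share. Illustrative arithmetic.
    total_bandwidth_gbps = 20 * 1.0   # 20 shared gigabit NICs
    active_blades = 10                # blades 11-20 busy, blades 1-10 idle

    static_share = total_bandwidth_gbps / 20              # 1 Gb/s fixed per blade
    pooled_share = total_bandwidth_gbps / active_blades   # 2 Gb/s while others idle

    print(f"static: {static_share} Gb/s, pooled: {pooled_share} Gb/s per active blade")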




"I want people to see my movies in the best formats possible. For [Paramount] to deny people who have Blu-ray sucks!" -- Movie Director Michael Bay










