
Hail, new PCIe motherboards of twice the bandwidth!

This article was first published on HWUpgrade.com.

Yesterday the PCI Special Interest Group, better known as PCI-SIG, announced that it has finalized the PCI Express 2.0 specification. The specification entered the release candidate stage a little over three months ago.

The new PCI Express 2.0 Base Specification doubles the interconnect bit rate from 2.5 GT/s to 5 GT/s. PCI-SIG describes this bandwidth hike as “by far the most important feature of the PCI Express 2.0 specifications.” Like Gigabit Ethernet, PCIe uses 8b/10b encoding, so 20% of all signaling is overhead: for every 10 bits transferred, 8 bits (one byte) are actual data. Doubling the interconnect bit rate therefore raises the aggregate bandwidth of a single PCI Express x16 slot to 16 GB/s.

When asked about the cost effectiveness of PCIe 2.0, a PCI-SIG representative claimed "A PCI Express 1.1 x8 link (8 lanes) yields a total aggregate bandwidth of 4GBytes/s, which is the same bandwidth obtained from a PCI Express 2.0 x4 link (4 lanes) that adopts the 5GT/s signaling technology. This can result in significant savings in platform implementation cost while achieving the same performance level. Backward compatibility is retained as existing 2.5 GT/s adapters can plug into 5.0 GT/s slots and will run at the slower rate. Conversely, new PCIe 2.0 adapters running at 5.0 GT/s can plug into existing PCIe slots and run at the slower rate of 2.5 GT/s." Both 2.5GT/s and 5GT/s signaling are retained in the 2.0 specification.
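As a quick sanity check on these figures, here is a minimal Python sketch (the helper name and structure are ours, not PCI-SIG's) that applies the 8b/10b overhead to the raw transfer rates:

    def pcie_bandwidth_gb_s(transfer_rate_gt_s, lanes):
        # 8b/10b encoding: 8 data bits for every 10 bits on the wire (80% efficiency)
        data_bits_per_s = transfer_rate_gt_s * 1e9 * lanes * 8 / 10
        return data_bits_per_s / 8 / 1e9  # bits -> bytes -> GB/s, one direction

    # PCIe 2.0 x16: 8 GB/s per direction, 16 GB/s aggregate
    gen2_x16 = pcie_bandwidth_gb_s(5.0, 16)
    print(gen2_x16, 2 * gen2_x16)           # 8.0 16.0

    # PCI-SIG's cost argument: a 2.5 GT/s x8 link matches a 5 GT/s x4 link
    print(2 * pcie_bandwidth_gb_s(2.5, 8))  # 4.0 GB/s aggregate
    print(2 * pcie_bandwidth_gb_s(5.0, 4))  # 4.0 GB/s aggregate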

In addition to the bandwidth increase, the new specification includes a number of other improvements. Dynamic link speed management has been added, allowing software to control the frequency at which PCI Express 2.0 links operate; software is also notified of changes in link frequency and width. The interface also implements a new feature that gives software optional controls to manage packet routing on the interconnect. The slot power limit can now be redefined to accommodate devices that consume higher power.
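On Linux, the negotiated speed and width that this management machinery controls can be inspected from user space; a minimal sketch, assuming a kernel recent enough to expose these sysfs attributes (the device address is a placeholder for whatever card you want to query):

    from pathlib import Path

    def link_status(bdf="0000:01:00.0"):  # placeholder bus/device/function address
        dev = Path("/sys/bus/pci/devices") / bdf
        attrs = ("current_link_speed", "max_link_speed",
                 "current_link_width", "max_link_width")
        # Each attribute is a small text file maintained by the kernel
        return {attr: (dev / attr).read_text().strip() for attr in attrs}

    for name, value in link_status().items():
        print(name, "=", value)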

PCI-SIG outlines the new features as:
  • Enhanced Completion Timeout Control, which includes required and optional aspects, reduces false timeouts and increases the ability to ‘tune’ the timeouts.
  • Function Level Reset and Access Control Services, giving enhanced robustness and support of certain IOV features -- though this feature is labeled as optional.
  • Slot Power Limit Changes to allow for higher-powered slots, which support the newer, high-performance graphics cards. This new feature works in tandem with the 300W Card Electromechanical specification (a power-budget sketch follows below).
  • Speed Signaling Controls to enable software to determine whether a device can operate at a specific signaling rate, which can be used to reduce power consumption, as well as provide gross level I/O to memory.
The new interface will prove particularly useful for video cards whose performance is limited by I/O throughput. Manufacturers will also be able to use the faster links for shared-memory graphics, which uses system memory to boost performance.
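For a sense of where that 300W figure comes from, here is a back-of-the-envelope sketch, assuming the commonly cited split of 75W from the slot plus 75W and 150W auxiliary connectors (the variable names are ours):

    # Rough power budget for a 300W card under the PCIe 2.0 era CEM spec
    SLOT_W      = 75   # delivered through the x16 slot itself
    SIX_PIN_W   = 75   # 6-pin auxiliary power connector
    EIGHT_PIN_W = 150  # new 8-pin auxiliary power connector

    print(SLOT_W + SIX_PIN_W + EIGHT_PIN_W)  # 300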


Comments



Bandwidth limited?
By Mudvillager on 1/16/2007 7:00:04 AM , Rating: 2
Thought we weren't even close to PCIe x16's limit with the latest gen of GPUs.




RE: Bandwidth limited?
By bunnyfubbles on 1/16/2007 7:06:00 AM , Rating: 2
Well there's the systems with 2 video cards, and increasing bandwidth only opens up room for even more.

Of course I'd rather see something like a PPU help take advantage of increased available bandwidth...


RE: Bandwidth limited?
By Schadenfroh on 1/16/2007 7:42:51 AM , Rating: 5
Now people can have 8 X Geforce 8800's to play CounterStrike 1.6 with!


RE: Bandwidth limited?
By ZeeStorm on 1/16/07, Rating: 0
RE: Bandwidth limited?
By Aeros on 1/16/2007 4:36:55 PM , Rating: 2
"CS1.6 with max everything and still was getting 400FPS"

Source engine maxes out at 250fps.


RE: Bandwidth limited?
By VooDooAddict on 1/16/2007 5:54:50 PM , Rating: 2
[q][q]"CS1.6 with max everything and still was getting 400FPS"[/q]

Source engine maxes out at 250fps.[/q]

CS 1.6 isn't Source ... you are thinking of CS:S ... but yeah ... it's pretty ridiculous ;)


RE: Bandwidth limited?
By jimmy43 on 1/17/2007 2:33:26 AM , Rating: 2
I get roughly 700 FPS in StarCraft and I love every frame of it.


RE: Bandwidth limited?
By VaultDweller on 1/16/2007 8:07:37 AM , Rating: 4
Don't forget that PCIe isn't exclusively a graphics interface. Increasing the bandwidth per lane could be very useful for enterprise level SCSI and SAS controllers, as well as cards connected to 10Gbps network backbones.


RE: Bandwidth limited?
By AndreasM on 1/16/2007 8:10:09 AM , Rating: 2
Hypermemory/Turbocache cards are bandwidth limited.


RE: Bandwidth limited?
By OrSin on 1/16/2007 10:08:13 AM , Rating: 2
Not from the PCI-E bus.
They are bandwidth limited by the memory interface on the card and motherboard. Triple the bandwidth and you see no improvement. Now if you could decrease the lag from card to system and back again, it would help a lot. Also, system memory, no matter how good, isn't up to par with video memory.


RE: Bandwidth limited?
By retrospooty on 1/16/2007 10:11:27 AM , Rating: 2
" Thought we weren't even close to PCIe x16's limit with the latest gen of GPUs."

We aren't... They are staying ahead of the curve. Same story with PCI to AGP, and AGP to PCIe. The new format is always ahead of the curve, WAY ahead. This is a good thing; I wish memory was the same. DDR2 still isn't significantly faster than good DDR.


RE: Bandwidth limited?
By Furen on 1/16/2007 10:20:48 PM , Rating: 2
DDR2 isn't significantly faster than DDR1 because the CPUs have no use for the insane amounts of bandwidth. It's a CPU/application "problem" (not really a problem, to tell the truth; I wouldn't want Microsoft to come out and "solve" it by making applications that are memory-bandwidth-limited), not a problem with the memory itself. DDR2 offers massive bandwidth improvements over DDR1, and DDR3 will offer the same advantage over DDR2.


RE: Bandwidth limited?
By retrospooty on 1/17/2007 10:06:53 AM , Rating: 2
Massive bandwidth improvement - yes, at a massive latency penalty. Combine the two and you get very similar performance in actual memory tests and apps that are memory sensitive.


By bunnyfubbles on 1/16/2007 7:04:15 AM , Rating: 3
3 cards in my computer and only the video card is PCI-e. Thankfully my PCI-e capable motherboard came with just enough PCI slots (2, for those that can't subtract)




By Regs on 1/16/2007 8:13:19 AM , Rating: 2
Well my single 7800 is rubbing up against my hard drive wires. So I can't imagine 3 or more.

Is ATX still a standard?


By Captain Orgazmo on 1/17/2007 4:33:09 AM , Rating: 2
I second this! Whenever I have had to buy or help someone else choose a motherboard in the last couple of years, the choice always comes down to which motherboard has the most old-school PCI slots (and ones that aren't blocked by all these ridiculous multi-story video cards). Also, when are parallel ATA DVD drives going to be replaced by SATA?! I am sick of wrapping up those giant, hard-to-connect, airflow-blocking ribbon cables, and I don't want to spend $20 on a round cable (highway robbery, I tell you!). SATA DVD-RW drives tend to go for four times the price of a top-of-the-line OEM PATA DVD-RW drive.


By code65536 on 1/17/2007 12:37:45 PM , Rating: 2
Um, have you seen Newegg lately? There are SATA DVD-RWs floating around with a very modest price premium over PATA drives (the price fluctuates, but sometimes they cost the same as a PATA drive). The expensive rip-off you're talking about applies only to Plextors, but Plextor drives always command a ridiculous price premium regardless of what interface they have.


By Captain Orgazmo on 1/18/2007 3:07:07 AM , Rating: 2
I live in Canada, and unfortunately all the good U.S. online retailers (like Newegg) do not sell to us canucks. So I'm screwed for now :(


A question about PCI-e 2.0
By phatboye on 1/16/2007 8:15:28 AM , Rating: 3
So what happens if I mix 1.x cards and 2.0 cards on the same motherboard that supports PCI-e 2.0? Will they all drop to 1.x speeds, or will the 2.0 cards run at 2.0 speed and the 1.x cards run at 1.x speeds?




RE: A question about PCI-e 2.0
By phil126 on 1/16/2007 8:27:01 AM , Rating: 4
PCI-e is a switched fabric, which means a slower device should not have an impact on the other devices. So a 2.0 card will run at 2.0 levels and the 1.0 card will run at its max speed. Much like USB.


RE: A question about PCI-e 2.0
By jp7189 on 1/16/2007 1:29:40 PM , Rating: 2
It's much the same as a gigabit switched network: plugging in a 100Mbps card will not slow the whole network down.


Graphics card makers....
By Chadder007 on 1/16/2007 10:34:15 AM , Rating: 2
So are Graphic card makers going to pull an AGP on the PCI Express users now and almost put a halt on making their newest products for those users?




RE: Graphics card makers....
By Goty on 1/16/2007 10:45:39 AM , Rating: 3
Ummm... no. PCI-E 2.0 isn't going to change the physical connector.


RE: Graphics card makers....
By JeffDM on 1/16/2007 5:44:38 PM , Rating: 2
I would imagine that PCIe 3.0 might still be compatible; it's just a matter of link speed. I think AGP worked pretty well because it scaled well through three revisions, 1x->2x->4x->8x. Slots and cards were usually compatible with each other as long as they were within one revision of each other. I think the main reason there were some incompatibilities was because of voltage differences.


mostly hardcore gaming oriented stuff?
By DeepBlue1975 on 1/16/2007 7:30:38 AM , Rating: 2
For the most hardcore gamers this must be good news, as quad-VGA systems and future video card generations will benefit from this.
Just watch how much the bandwidth capabilities of video cards grow with each new generation...
With such high bandwidth, maybe the use of internal bridges to set up a video card array will become less necessary...

For the rest of us, the not-so-gaming ones, this really won't bring us any advantage, at least not right now.




By Kuroyama on 1/16/2007 8:37:58 AM , Rating: 2
A high end SLI system is already ridiculously expensive. I cannot imagine that many people will be able to afford a quad VGA system, as this would presumably only be made with the most bleeding edge GPUs available. I suspect the post mentioning Hypermemory/Turbocache is more likely to be correct, and us "not-so-gaming" ones may actually benefit the most as there will now be less of a loss of performance when taking the memory off the video card and forcing the GPU to use system RAM.


By theprodigalrebel on 1/16/2007 9:22:20 AM , Rating: 2
Quad-SLI was a disaster. While it did bring in respectable numbers in very high AA modes, there wasn't a single review that found it a clear winner. When four high-end cards working together (for a $1K price-tag) can't best two fast cards in SLI/Crossfire, something is wrong. Blame it on driver overhead or 'lack of optimization' but the future looks bleak for Quad SLI.

Just check out the G80 in 2560x1600 benchmarks: single card as well as SLI. Quad-SLI is a cool word but it is best if the GPU-makers resist temptation to fall for this uber-elite marketing mantra.


Power Requirements?
By HaZaRd2K6 on 1/16/2007 12:33:09 PM , Rating: 2
So the new spec will allow higher speeds along with a host of other improvements. But what about drawing more power through the bus? PCIe 1.1 only allows for a 75W draw, and new video cards are hitting well over that mark these days. I have a sneaking suspicion we won't see that until the ATX 2.4 spec at the earliest.




RE: Power Requirements?
By coldpower27 on 1/16/2007 7:39:45 PM , Rating: 2
http://www.pcisig.com/specifications/pciexpress/ba...

quote:
Power limit redefinition – to redefine slot power limit values to accommodate devices that consume higher power


Yeah, I would like the PCIe 2.0 x16 physical connector to be able to supply 150W; that should cover most mainstream video cards. And if the 8-pin physical connector can supply another 150W, that should hopefully be plenty of power: 300W total would mean a video card consuming greater than 200W before they stick on a 2nd connector. Nvidia and ATI like to play it safe where possible and work at, at most, 2/3 of theoretical maximums before they provide additional power.


RE: Power Requirements?
By phatboye on 1/16/2007 9:47:43 PM , Rating: 2
I'd rather see Nvidia and ATI make GPUs that require less electricity.


So when will we actually see PCI-E 2.0?
By slickr on 1/16/2007 9:01:07 AM , Rating: 1
Does anyone know the date?
I know from various sources it should be somewhere around Q2 2007, but I'm not certain; maybe someone here knows better!?




RE: So when will we actually see PCI-E 2.0?
By theprodigalrebel on 1/16/2007 9:15:09 AM , Rating: 2
High-end motherboards based on Intel's Bearlake chipset (the top X38 variant, IIRC) will have PCIe 2.0 sometime in Q3 2007. But of course, in the true tradition of all unreleased products, specifications/release dates are subject to change without notice. ;)

I am not sure which chipset for the AMD platform will be the first. PSU makers have already jumped onto the PCIe 2.0 bandwagon though - a few high-end PSUs are coming with the new 8-pin PCIe connectors.


By FITCamaro on 1/16/2007 5:59:22 PM , Rating: 2
Oh I'm not gonna be happy if that 8 pin connector on the R600 pic I saw turns out to be a PCI-E 2.0 connector. I just built my computer complete with a 700W PSU. I better be able to put an R600 in without having to use some silly adapter. I mean I've got 3 PCI-E power connectors but still.


because of criticism from Creative
By hellokeith on 1/16/2007 11:07:58 AM , Rating: 3
quote:
Enhanced Completion Timeout Control, which includes required and optional aspects, reduces false timeouts and increases the ability to ‘tune’ the timeouts.


I wonder if this is because of criticism from Creative that the PCIe architecture is actually worse for audio than PCI?




By ccmdratz on 1/16/2007 1:41:19 PM , Rating: 2
"[...]PCIe [1.1] architecture, worse for audio than PCI[....]"

Yeah, I was looking for a solution to the higher packet latency of the old PCIe (in some cases double that of regular PCI; about 2 ms for PCIe 1.1 vs. 1 ms for PCI) in this new PCIe version. Perhaps that "Enhanced Completion Timeout Control" will help.


Fully cross compatible? nice.
By Lazarus Dark on 1/16/2007 9:40:27 AM , Rating: 2
It's nice that they will be fully cross-compatible. So I could buy a PCIe 2.0 card, plug it into my PCIe 1.0 mobo and it works, and when I upgrade my mobo in the future, I could plug the 2.0 card into the 2.0 slot and get full bandwidth. It's always nice when tech is forward and backward compatible, even if it's crippled some on older hardware.




By marvdmartian on 1/16/2007 10:07:07 AM , Rating: 2
Yeah, but how long will it be before video cards have such a huge power requirement that your choices will be to either have 2 power supplies, or an external video card unit with its own power supply and a cable leading into your PCI-e slot adaptor??

Heck with that, just put a power supply on the card itself, and plug straight into it from the wall! ;)


PCIe2.0!!??
By Narutoyasha76 on 1/16/2007 1:22:39 PM , Rating: 2
So right now a Videocard which uses PCIe 2.0 would be like a baby guppy fish in the whole ocean...interesting.

Can't wait for the following PC specs to come out:

-nVidia Geforce 8950 SLI @ 800Mhz core, 1 GB, 2Ghz GDDR4
-Intel Core 3 (if ever) Octa @ 3.5 Ghz
-Soundblaster X-Fi X2 Dual SPU with PPU assisted surround sound
-Dedicated PPU compatible with Havok and Ageia w/5 teraflops
-Hybrid 500 GB, 10k rpms, w/ 16 GB Flash Memory integrated
-Windows Vista SP2 with support for eight cores with a single license
-100 Megabit Cable Modem
-Hybrid 3D 1080P 30" LCD Monitor

For only $1750 at the electronics store




RE: PCIe2.0!!??
By ali 09 on 1/16/2007 8:08:37 PM , Rating: 2
What!?1? That is like the crappiest computer anywhere!?!? Where's the brain linkup and the billion giga-mega-hertz optical processor? Duh!! You are so behind the times.


FPS..
By kenji4life on 1/18/2007 12:20:09 PM , Rating: 2
A little education/reminder from Wikipedia for those concerned with FPS over ~60

"When vertical sync is enabled, video cards only output a maximum frame rate equal to the refresh rate of the monitor. All extra frames are dropped. When vertical sync is disabled, the video card is free to render frames as fast as it can, but the display of those rendered frames is still limited to the refresh rate of the monitor. For example, a card may render a game at 100 FPS on a monitor running 75Hz refresh, but no more than 75 FPS can actually be displayed on screen."




RE: FPS..
By Pwnt Soup on 1/18/2007 7:59:31 PM , Rating: 2
Point well taken, and yet another reason why CRTs are better to game on.


16 GB/s, not 16 Gbps
By PrinceGaz on 1/16/2007 8:09:14 AM , Rating: 2
quote:
The new PCI Express 2.0 Bus Specification doubles the interconnect bit rate from 2.5 Gbps to 5 Gbps. PCI-SIG describes this bandwidth hike as “by far the most important feature of the PCI Express 2.0 specifications.” Doubling the interconnect bit rate increases the aggregate bandwidth of a single PCI Express x16 slot to 16 Gbps.


The PCIe 2.0 x16 slot will have an aggregate bandwidth of 16GB/s, not 16 Gbps.




RE: 16 GB/s, not 16 Gbps
By cheburashka on 1/16/2007 3:21:42 PM , Rating: 1
Sorry, but it's actually 16GT/s. Remember the data is 8b/10b encoded, just like Ethernet.


A power to the slot
By scrapsma54 on 1/17/2007 2:14:16 PM , Rating: 2
Finally, no more of those bothersome, airflow-killing cables.

Also, I would believe that SLI will utilize x16 by x16 instead of x16 and x8. Someone correct me if I am a little off.




"Google fired a shot heard 'round the world, and now a second American company has answered the call to defend the rights of the Chinese people." -- Rep. Christopher H. Smith (R-N.J.)











botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki