
PCIe connectors  (Source: IBM)
New spec offers much higher bandwidth

PCI Express brought us much better performance for all sorts of add-in cards including better graphics bandwidth. Not too long after the original PCIe specification surfaced, we got the 2.0 update that brought along even better performance, again helping gamers looking for the best in video card speed.

The PCI-SIG has announced another update to the standard with the unveiling of the PCI Express 3.0 specification. The base spec for PCI Express 3.0 is now available to all PCI-SIG members. The spec describes a low-cost, high-performance I/O technology with a 128b/130b encoding scheme and a data rate of 8 gigatransfers per second (GT/s), double the available interconnect bandwidth of PCIe 2.0.

PCIe 3.0 is also backwards compatible with all earlier PCIe specs, so add-in cards already on the market will work with new mainboards. The original PCIe specification supported 2.5 GT/s and PCIe 2.0 supported 5 GT/s. The PCIe 3.0 spec's 8 GT/s is sure to usher in further improvements in graphics processing.

The PCI-SIG says that an x1 PCIe 3.0 slot is capable of a 1GB/s transfer rate, and that rate scales up to 32GB/s on an x16 slot. You can bet that once the new specification is available on mainboards, we will see lots of video cards supporting it hit the market.
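Those figures fall out of the per-lane math. A rough sketch in Python (the 32GB/s x16 figure in the press material appears to count both directions of the full-duplex link, so the per-direction numbers below are about half that):

```python
# Effective PCIe bandwidth per lane, per direction, after encoding overhead.
# Rates from the article: 2.5 GT/s (1.x, 8b/10b), 5 GT/s (2.0, 8b/10b),
# 8 GT/s (3.0, 128b/130b).

def effective_gb_s(gt_per_s: float, payload_bits: int, coded_bits: int) -> float:
    """Usable gigabytes/sec per lane: line rate times coding efficiency, /8 bits."""
    return gt_per_s * payload_bits / coded_bits / 8

GENS = {
    "PCIe 1.x": (2.5, 8, 10),
    "PCIe 2.0": (5.0, 8, 10),
    "PCIe 3.0": (8.0, 128, 130),
}

for name, (rate, payload, coded) in GENS.items():
    per_lane = effective_gb_s(rate, payload, coded)
    # PCIe 3.0 works out to ~0.985 GB/s on x1 and ~15.75 GB/s on x16
    # per direction, i.e. roughly the article's 1 GB/s and 32 GB/s
    # once you round up and count both directions.
    print(f"{name}: x1 = {per_lane:.3f} GB/s, x16 = {16 * per_lane:.2f} GB/s")
```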

“Each new version of the PCIe spec has doubled the bandwidth of the prior generation,” said Nathan Brookwood, research fellow at Insight 64. “The latest group of PCIe architects and designers drove the standard forward while maintaining complete backward compatibility for Gen 1 and Gen 2 devices. Rarely has a standard advanced so non-disruptively through three major evolutionary cycles. The ability to pull this off demonstrates not only the ingenuity of the Gen 3 developers, but also the insight of those who defined the earlier versions in such an extensible manner.”



I just want to know...
By KingstonU on 11/19/2010 10:23:39 AM , Rating: 2
Has this arrived soon enough that we'll see Bulldozer/Sandy (err, Ivy?) Bridge arrive in 2011 to implement it, and HD7XXX/GTX-6XX use it? Maybe some nice SSDs can use it too.

RE: I just want to know...
By Barfo on 11/19/2010 11:25:09 AM , Rating: 3
Who cares? Current graphics cards aren't limited by PCIe 2.0 bandwidth yet.

RE: I just want to know...
By Etern205 on 11/19/10, Rating: -1
RE: I just want to know...
By Lerianis on 11/19/2010 11:36:56 PM , Rating: 2
Unfortunately, I have to agree. Applications are starting to appear that tax the SATA 3Gb/s standard's bandwidth.

RE: I just want to know...
By DanNeely on 11/20/2010 2:31:44 PM , Rating: 2
SSDs have been bottlenecked by SATA 3Gb/s for over a year. SATA 6Gb/s will help a lot at the consumer level, although high-end server/workstation drives are going to PCIe for even higher bandwidth.

RE: I just want to know...
By FaaR on 11/20/2010 3:22:15 PM , Rating: 2
SSDs aren't bottlenecked even by the original 1.5Gbit/s SATA standard for any operations other than straight linear transfers, which are a small minority of real-world use.

IOPS needs to go up by orders of magnitude for SATA2 to become a REAL bottleneck.

RE: I just want to know...
By Silver2k7 on 11/21/2010 7:42:32 PM , Rating: 2
"SSDs aren't bottlenecked even by the original 1.5gbit/s SATA"

Uhm, yeah, ok. The fastest consumer SSDs just reached, I think, a 740MB/s top speed with 600MB/s sustainable.

And enterprise SSDs just hit, IIRC, 6GB/s.

RE: I just want to know...
By tastyratz on 11/22/2010 11:52:24 AM , Rating: 2
gigaBIT vs. gigaBYTE.
The SATA 6Gb/s bus was saturated before it even came out; they just built the SSDs to accommodate.

That 740 megabyte top speed? That's 5.78 gigabits. The brand-new SATA 6Gb/s will be saturated before it even gets a chance to take hold.

The old Raptor hard drives were known to saturate RAID controllers, and those aren't even SSDs. SATA is more of a real limitation than you acknowledge.

Not to mention the gains and advances in SATA command sets and protocol efficiency.
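The unit confusion here is easy to settle with arithmetic. A quick sketch in decimal units (which give 5.92 Gbit/s for 740MB/s; the 5.78 figure above comes from using binary gibibits), with SATA's 8b/10b encoding taken as 80% link efficiency:

```python
# Convert between drive throughput (MB/s) and SATA line rate (Gbit/s),
# decimal units throughout. SATA links use 8b/10b encoding, so only
# 80% of the raw line rate carries payload.

def sata_effective_mb_s(line_rate_gbit_s: float) -> float:
    """Usable MB/s ceiling of a SATA link after 8b/10b overhead."""
    return line_rate_gbit_s * 1000 * 0.8 / 8

def mb_s_to_gbit_s(mb_s: float) -> float:
    """Payload rate in Gbit/s for a given MB/s."""
    return mb_s * 8 / 1000

print(sata_effective_mb_s(1.5))  # SATA 1.5Gb/s -> 150 MB/s usable
print(sata_effective_mb_s(3.0))  # SATA 3Gb/s   -> 300 MB/s usable
print(sata_effective_mb_s(6.0))  # SATA 6Gb/s   -> 600 MB/s usable
print(mb_s_to_gbit_s(740))       # 740 MB/s     -> 5.92 Gbit/s of payload
```

So a 740MB/s drive does indeed need more payload bandwidth than a SATA 6Gb/s link can deliver, which is the point being made.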

RE: I just want to know...
By YashBudini on 11/20/2010 7:54:41 PM , Rating: 1
"Marketing crap, don't need it!"

You must have blinked.

RE: I just want to know...
By Steelcity1981 on 11/20/2010 8:11:00 PM , Rating: 1
Um, no. By the time PCIe 2.0 came out, many high-end graphics cards were either reaching or already going beyond PCIe 1.1 bandwidth limits, so PCIe 2.0 was much needed. No graphics card on the market right now has come close to the max bandwidth of PCIe 2.0. So no, it's not needed at this point in time and won't be for a while to come.

RE: I just want to know...
By GuinnessKMF on 11/20/2010 12:30:38 AM , Rating: 3
You're right, they really should stop with all this worthless progress.

RE: I just want to know...
By DanNeely on 11/19/2010 11:25:19 AM , Rating: 2
Sandy Bridge is implementing it on the high-end LGA2011 parts coming out in Q3; the LGA1155 parts coming in January will not support it.

Between the lack of bus overclocking, the lack of PCIe 3.0, and being memory-bottlenecked at quad cores max, LGA1155 is less suited for high-end users than LGA1156 was. Hopefully Intel will make LGA2011's bottom tier cheaper than 1366's in response.

Umm...
By blueeyesm on 11/19/2010 12:16:54 PM , Rating: 1
“Each new version of the PCIe spec has doubled the bandwidth of the prior generation,” said Nathan Brookwood, research fellow at Insight 64.


The original PCIe specification supported 2.5GT/s and PCIe 2.0 supported 5GT/s. The PCIe 3.0 spec's 8 GT/s is sure to usher in more improvements in graphics processing.

He's not quite accurate. 5 x 2 = 10

RE: Umm...
By Nakecat on 11/19/2010 12:54:47 PM , Rating: 4

PCIe 2.0 delivers 5 GT/s, but employs an 8b/10b encoding scheme which results in a 20 percent overhead on the raw bit rate. PCIe 3.0 removes the requirement for 8b/10b encoding and instead uses a technique called "scrambling" in which "a known binary polynomial is applied to a data stream in a feedback topology. Because the scrambling polynomial is known, the data can be recovered by running it through a feedback topology using the inverse polynomial"[18] but still uses a 128b/130b encoding scheme. PCIe 3.0's 8 GT/s bit rate effectively delivers double PCIe 2.0 bandwidth. According to an official press release by PCI-SIG on 8 August 2007:
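The "scrambling" the quote describes is just an XOR with a known pseudo-random bit stream: the receiver runs the same generator from the same seed and XORs again to undo it. A toy sketch, using taps matching the PCIe Gen3 scrambler polynomial x^23 + x^21 + x^16 + x^8 + x^5 + x^2 + 1 (treat this as illustrative, not a spec-conformant implementation):

```python
# Additive (synchronous) scrambler: XOR the data stream with an LFSR
# sequence. Applying the same operation twice with the same seed
# recovers the original data, since (d ^ s) ^ s == d.

def lfsr_stream(seed: int, nbits: int, taps=(23, 21, 16, 8, 5, 2)):
    """Yield nbits pseudo-random bits from a 23-bit Fibonacci LFSR."""
    state = seed
    for _ in range(nbits):
        feedback = 0
        for t in taps:
            feedback ^= (state >> (t - 1)) & 1
        yield state & 1
        state = (state >> 1) | (feedback << 22)

def scramble(data: bytes, seed: int = 0x7FFFFF) -> bytes:
    """XOR each data bit with the LFSR sequence; self-inverse."""
    bits = lfsr_stream(seed, 8 * len(data))
    out = bytearray()
    for byte in data:
        for i in range(8):
            byte ^= next(bits) << i
        out.append(byte)
    return bytes(out)

msg = b"PCIe 3.0"
coded = scramble(msg)
assert scramble(coded) == msg  # descrambling is the same XOR again
```

Because the scrambler only whitens the bit stream rather than expanding it, almost the entire 8 GT/s line rate carries payload, with only the thin 128b/130b framing left as overhead.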

RE: Umm...
By HoosierEngineer5 on 11/20/2010 8:42:09 PM , Rating: 3
A baud is a symbol per second: the rate at which symbols on the medium change state. Where multiple bits can be encoded per baud, the baud rate can be less than the bit rate (e.g. gigabit Ethernet). In the case of PCIe, the encoding is binary (1 or 0). To make sure the receiver can keep synchronization through a long string of '1's or '0's, the bit sequence is broken into groups and extra bits are inserted so long runs don't happen (e.g. 8b/10b encoding, where 8 data bits are mapped into 10 symbols [also binary] before transmission). These extra symbols don't contain any actual information, but they make the hardware much less difficult to design.

Going to 128b/130b encoding decreases the overhead by nearly 20%, increasing the efficiency of the medium. Thus, an 8 Gbaud channel can carry (nearly) twice the bits per second of a 5 Gbaud channel if the encoding is more efficient.
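That arithmetic checks out; a minimal sketch of symbol rate times coding efficiency per lane:

```python
# Usable bits per second on one lane: baud rate times the fraction of
# symbols that carry payload under the line code.

def usable_gbit_s(gbaud: float, data_bits: int, coded_bits: int) -> float:
    return gbaud * data_bits / coded_bits

gen2 = usable_gbit_s(5.0, 8, 10)     # 4.0 Gbit/s per lane
gen3 = usable_gbit_s(8.0, 128, 130)  # ~7.88 Gbit/s per lane
print(gen3 / gen2)                   # ~1.97, i.e. "nearly twice"
```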

It seems the marketing people at Intel are trying to coin a new term, 'transfer'. Classically, a transfer includes all the overhead involved in moving an entire message: preambles, checksums, encoding, etc. Clearly, PCIe 3.0 is not capable of anywhere near 10 GT/s. Nearly 10 Gbit/s, yes.

This may be to get around the inaccurate claims by the 802.11 community, who lie about their bit rates like the proverbial rug.

RE: Umm...
By MatthiasF on 11/19/2010 2:26:38 PM , Rating: 2
Transfers + encoding = bandwidth

You're only looking at the transfer numbers; you'll notice they upped the encoding as well, to push the bandwidth a little beyond two times the previous version.

x4 slot is good enough?
By nafhan on 11/19/2010 11:28:25 AM , Rating: 2
So, now we'll only need an x4 slot for most single GPU scenarios?

RE: x4 slot is good enough?
By XZerg on 11/19/2010 12:50:02 PM , Rating: 5
What most people aren't realizing about this standard is that it isn't only about the higher bandwidth it offers over PCIe 2.0; more importantly, it offers the same bandwidth at half the lane count. This simplifies CPUs and motherboards for the mainstream, where massive bandwidth isn't the prime objective but cost and simplicity are. With the north bridge in the CPU, communicating with external graphics would otherwise need more pins and more traces on the motherboard, making it more expensive.

Market churn anyone?
By jabber on 11/19/2010 11:42:40 AM , Rating: 2
PCIe 3.0 looks better written on a box with a dragon on it than PCIe 2.0.

That's the main benefit I think they're going for.

RE: Market churn anyone?
By GuinnessKMF on 11/19/2010 12:16:18 PM , Rating: 2
Market churn or not, it's progress, and it's nothing but good. There are plenty of market segments that are bottlenecked, and they will make use of this extra headroom. Additionally, this allows consumer products to use fewer lanes, simplifying the product and reducing cost.

Server environments will never get enough performance, it just won't happen, progress brings down cost, and brings higher end products into the mainstream.

PCI Express missing better Plug-n-play
By tygrus on 11/21/2010 8:53:55 PM , Rating: 2
I wish they could extend the spec to enable many more basic features and access to a larger amount of product information. I hate that a Microsoft OS has to have a valid driver before it can properly recognize a device; otherwise it says unknown device, unknown manufacturer.
MS has to include so many drivers in the OS install it's crazy.

Network, graphics, keyboard, mouse, audio, and storage devices should all support basic functionality with a default driver for each class of device. All devices should be able to be queried for PCI bus requirements, device class, manufacturer, model #, description, serial #, and hardware/firmware revisions.

An XML format would allow extending the information and functions/features described while maintaining backward compatibility. Great for the new EFI replacement of BIOS.

Add a function to access onboard flash storage (appearing like a USB drive) when needed to get at typical OS drivers. Placed there at time of manufacture; read-only or updatable? There's a risk of a Trojan/virus, especially if it's allowed to be updated on another machine prior to re-use.
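Part of this wish already exists: every PCI function exposes a 24-bit class code (base class, subclass, programming interface) in its config space, which an OS can read without any vendor driver; on Linux it is mirrored at /sys/bus/pci/devices/*/class. A sketch of class-based identification, with base-class names abridged from the PCI ID assignments (illustrative, not the full table):

```python
# Decode a PCI class code into a human-readable base-class name.
# The 24-bit code is (base class << 16) | (subclass << 8) | prog-if.

BASE_CLASSES = {
    0x01: "Mass storage controller",
    0x02: "Network controller",
    0x03: "Display controller",
    0x04: "Multimedia device",
    0x06: "Bridge device",
    0x0C: "Serial bus controller",
}

def decode_class(class_code: int) -> str:
    """Map a 24-bit PCI class code to its base-class name."""
    base = (class_code >> 16) & 0xFF
    return BASE_CLASSES.get(base, f"Unknown (0x{base:02X})")

print(decode_class(0x030000))  # Display controller (a VGA-compatible GPU)
print(decode_class(0x010601))  # Mass storage controller (an AHCI SATA controller)
```

This is what lets an OS say "this is some kind of display controller" before a real driver loads; the comment's complaint is that the generic fallback behavior on top of that identification is so limited.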

By gamerk2 on 11/22/2010 11:47:41 AM , Rating: 2
Network, graphics, Keyboard, mouse, audio, storage devices should all support basic functionality with a default driver for each class of device.

A lot of them do now; Microsoft has a base keyboard/mouse driver, GPUs can operate within Windows at 640x480 in 256 colors without a driver installed (although Microsoft shoved this requirement down everyone's throats...), etc.

What you really want is a virtualized OS, and rumors are Windows 8 is going that route. But unless you want a radical change, you need drivers to allow for unique features in different products.

Intel's rate of adoption
By FXi on 11/22/2010 7:46:53 AM , Rating: 2
Want to see Intel bias in action?

Watch the rate of adoption of PCI-e 3.0 in Intel chipsets vs USB 3.0 in Intel chipsets. Want to guess which standard is harder to implement? Want to guess which one takes years longer because Intel isn't truly "standing behind" the spec?

Good old Intel monopoly all over again.

By Visual on 11/19/10, Rating: 0
By solarrocker on 11/19/10, Rating: -1
RE: Woohoo
By spamreader1 on 11/19/2010 10:34:45 AM , Rating: 5
Ramblings of a crazy person? ^

RE: Woohoo
By solarrocker on 11/19/2010 10:38:15 AM , Rating: 3

RE: Woohoo
By CZroe on 11/19/2010 11:04:26 AM , Rating: 2
Umm, many games have always used real-time cinematics. For example, storage was limited on the N64, so games used real-time cinematics. The GameCube version of Resident Evil 4 had real-time cinematics; the watered-down PS2 and PC ports had half the polygonal detail, but the disc was larger, so they made FMV out of the GCN version's real-time cinematics. Many current-generation titles do it just because graphics are good enough and it helps avoid that "disconnect" between gameplay and cinematics.

RE: Woohoo
By priusone on 11/20/2010 5:03:07 PM , Rating: 2
The first game that came to my mind was Final Fantasy VII. Talk about utterly beautiful CGI. Then I saw the actual polygon build of the in-game rendered characters. Huh? Why didn't they make the in-game ones look like the CGI flicks? Granted, I was only 15 at the time.

