

The test chips Rambus will use in its demonstration  (Source: Rambus Inc.)
Rambus plans to deliver huge leap in memory bandwidth by 2011

Rambus Inc. plans to announce this Wednesday a new memory signaling technology initiative targeted at delivering a Terabyte-per-second of memory bandwidth, which the company touts as a solution for next-generation multi-core, game and graphics applications.

Rather than simply increasing the clock speed of memory to achieve higher output, Rambus looks to boost bandwidth with a 32X data rate. Just as DDR memory technologies double transfers on a single, full clock cycle, Rambus’ proposed technology transfers data at 32 times the reference clock frequency. With 32X technology, the memory company is targeting a bandwidth of 16Gbps per DQ link with memory running at 500MHz. In contrast, today’s DDR3 at 500MHz achieves a bandwidth of 1Gbps.
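The per-pin figures above follow directly from the data-rate multiplier. A minimal sketch of that arithmetic (function and variable names are ours, not Rambus'):

```python
def dq_bandwidth_gbps(ref_clock_mhz, transfers_per_clock):
    """Bandwidth of one DQ link in Gbps: reference clock times the data-rate multiplier."""
    return ref_clock_mhz * 1e6 * transfers_per_clock / 1e9

# DDR transfers twice per clock; the proposed 32X technology transfers 32 times.
ddr3_at_500 = dq_bandwidth_gbps(500, 2)     # 1.0 Gbps per pin
rambus_32x = dq_bandwidth_gbps(500, 32)     # 16.0 Gbps per pin
```

The same 500MHz reference clock yields a 16-fold difference purely from how many transfers ride on each clock cycle.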

“We're really excited about the Terabyte Bandwidth Initiative and the technologies that we've developed,” said Steven Woo, a senior principal engineer at Rambus. “The work of a large team of our scientists and engineers is pushing memory signaling technology to new levels of performance.”

Of course, a little explanation is needed on how a technology that gives a single DQ link 16Gbps of bandwidth could result in a Terabyte per second of throughput. Rambus’ aim is to grant Terabyte bandwidth to a system-on-a-chip (SoC) architecture, which may be achieved with 16 DRAMs operating at 16Gbps per link, each device 4 bytes wide.
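The aggregate figure checks out if "4-bytes wide" is read as 32 DQ links per device, as this hedged sketch of the arithmetic shows:

```python
GBPS_PER_LINK = 16        # per-pin rate targeted by the 32X technology
LINKS_PER_DEVICE = 4 * 8  # "4-bytes wide" per DRAM = 32 DQ links
DEVICES = 16              # DRAM count cited for the SoC configuration

total_gbps = GBPS_PER_LINK * LINKS_PER_DEVICE * DEVICES  # 8192 Gb/s
total_gbytes_per_s = total_gbps / 8                      # 1024 GB/s, i.e. ~1 TB/s
```

So 16 devices, each moving 64 GB/s, reach the Terabyte-per-second target.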

Another innovation that Rambus plans to integrate into its Terabyte memory initiative is FlexLink C/A (command/address), which the company claims is the industry’s first full-speed, scalable, point-to-point C/A link – with the C/A running at full speed along with the DQ. FlexLink C/A also simplifies the interface between the memory controller and DRAM. For example, while traditional legacy interfaces may require 12 wires, FlexLink C/A can operate point-to-point with just two.

Furthermore, FlexLink C/A is named for the flexibility it gives system designers: the overhead wires freed by the FlexLink C/A interface may be devoted to more data wires. Conversely, the design may offer greater command/address bandwidth with the addition of more FlexLink C/A wires, making the technology more easily scalable.
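The tradeoff can be illustrated with a hypothetical pin budget; the total wire count below is our assumption, not a Rambus figure, with only the 12-wire legacy and 2-wire FlexLink C/A counts taken from the article:

```python
WIRE_BUDGET = 40          # assumed total wires available to the interface
LEGACY_CA_WIRES = 12      # traditional legacy command/address interface
FLEXLINK_CA_WIRES = 2     # point-to-point FlexLink C/A

legacy_data_wires = WIRE_BUDGET - LEGACY_CA_WIRES      # 28 wires left for data
flexlink_data_wires = WIRE_BUDGET - FLEXLINK_CA_WIRES  # 38 wires left for data
freed_for_data = flexlink_data_wires - legacy_data_wires  # 10 extra data wires
```

Within the same fixed budget, the 10 wires freed from command/address duty become data wires, or could instead be spent on additional C/A links when scalability matters more.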

Rambus’ Terabyte Bandwidth Initiative will use a fully differential memory architecture, employing differential signaling for both the C/A and DQ. While current DDR3 and GDDR5 memory use differential signaling only for data and strobe, Rambus aims for full differential signaling at both the DQ and C/A. Advantages of going fully differential include better signal integrity, particularly in low-voltage electronics such as memory.

While this Terabyte bandwidth memory method isn’t slated for market until 2011, Rambus has recently received early silicon capable of demonstrating its technology. The early test rig uses emulated DRAM chips, connected to a Rambus memory controller at a 32X data rate capable of 64Gbps. Rambus will show its silicon test vehicle this Wednesday at the Rambus Developer Forum in Tokyo, Japan.



Comments

RE: Faster memory is useless
By retrospooty on 11/26/2007 2:40:16 PM , Rating: 2
"Memory isnt a bottleneck for cpus. Ddr2@667mhz with crappy latencies is almost as good as ddr3@2ghz. What does that tell you?"

You are partially correct. Yes, it's true that with current chipsets and CPUs memory isn't a bottleneck, but that is because current chipsets and CPUs are designed around the available memory and built to take advantage of what is available. Have you also noticed that a quad-core Penryn at 3GHz is really not much faster than a low-end single-core Celeron in most apps? This is because memory is a bottleneck. Not to say that DDR2 or DDR3 specifically isn't fast enough, but the whole platform is designed around this slow DDR2/3 memory.

If you had a whole new architecture designed from the ground up, with high bandwidth and low latency, you would see large performance improvements.

With all of that said, let's not assume Rambus is the answer; they have proven to be full of spoot in the past... We have to wait and see what is available in the future, and what the price and performance are. One thing I am sure of is that Intel won't be fooled again. They will surely do a better job of research this time around.


RE: Faster memory is useless
By DigitalFreak on 11/26/2007 3:28:03 PM , Rating: 2
I question your logic.


RE: Faster memory is useless
By retrospooty on 11/26/2007 4:39:12 PM , Rating: 2
Well, my overly convoluted point was in response to this comment:

"Memory isnt a bottleneck for cpus. Ddr2@667mhz with crappy latencies is almost as good as ddr3@2ghz. What does that tell you?"

My point was that DDR2@667MHz is almost as good as DDR3@2GHz because the chipset and CPU are all designed around the same slow DDR SDRAM foundation. Uproot the foundation and start anew with a much faster memory, and in theory you will see much faster results, especially with more and more cores being put onto a CPU.


RE: Faster memory is useless
By Treckin on 11/27/2007 12:38:40 AM , Rating: 2
If you're questioning bottlenecks, then the most serious bottleneck in any computer currently, even with BitMicro SSDs at 100MB/s, is the storage. It has always been the case that as you increase capacity, you decrease performance. That is why you have varying levels of cache in a computer, from L1, L2, and L3, to system RAM and HDD cache, down to the ultimate storage level.
The entire computer is designed around getting information from the slower storage medium to the faster logic circuits and their dedicated, faster caches. The ultimate goal, or ideal situation, would be to have all of the computer's storage available on-die. Logistically, this is currently impossible, in terms of die real estate and cost.
What we are stuck with, therefore, is tackling one bottleneck at a time. The big breakthrough will come with fiber optic lanes to the CPU, which are an order of magnitude greater in terms of how much data can be transmitted simultaneously. While at that scale the resistance to the electrons is basically negligible, optical signals can carry far more data per unit of time (bandwidth) and per diameter of lead.
Until then, it's one level of cache to the next. Any advancement to the bandwidth of the final link in the chain, the storage medium, is a greater benefit to computing as a whole.
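The capacity-versus-speed ladder this commenter describes can be sketched with rough, order-of-magnitude latency figures (the numbers are typical textbook values we've assumed, not measurements from any specific system):

```python
# Each level of the hierarchy trades capacity for access speed.
# (name, approximate access latency in seconds) -- assumed typical values.
hierarchy = [
    ("L1 cache", 1e-9),
    ("L2 cache", 4e-9),
    ("L3 cache", 15e-9),
    ("system RAM", 60e-9),
    ("SSD", 100e-6),
    ("HDD", 10e-3),
]

# The defining property of the hierarchy: each level down is slower than the last.
is_ordered = all(a[1] < b[1] for a, b in zip(hierarchy, hierarchy[1:]))
```

Faster memory, as Rambus proposes, narrows the gap between the on-die caches and system RAM, but the slowest rung still bounds the whole chain.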













Copyright 2014 DailyTech LLC.