



The test chips Rambus will use in its demonstration  (Source: Rambus Inc.)
Rambus plans to deliver huge leap in memory bandwidth by 2011

Rambus Inc. plans to announce this Wednesday a new memory signaling technology initiative targeted at delivering a Terabyte-per-second of memory bandwidth, which the company touts as a solution for next-generation multi-core, game and graphics applications.

Rather than simply increasing the clock speed of memory to achieve higher output, Rambus looks to boost bandwidth with a 32X data rate. Just as DDR memory technology doubles transfers on a single, full clock cycle, Rambus’ proposed technology is able to transfer data at 32 times the reference clock frequency. With 32X technology, the memory company is targeting a bandwidth of 16Gbps per DQ link with memory running at 500MHz. In contrast, today’s DDR3 at 500MHz achieves a bandwidth of 1Gbps per link.
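As a back-of-the-envelope check, the per-pin rates quoted above follow directly from the reference clock and the data-rate multiplier (the figures come from the article; the calculation itself is only illustrative):

```python
# Per-pin (DQ) data rate = reference clock * transfers per clock.
ref_clock_mhz = 500

# DDR transfers data twice per clock cycle; Rambus' proposed 32X
# signaling transfers 32 times per reference clock.
ddr3_gbps_per_pin = ref_clock_mhz * 2 / 1000     # 1.0 Gbps per link
rambus_gbps_per_pin = ref_clock_mhz * 32 / 1000  # 16.0 Gbps per link

print(ddr3_gbps_per_pin, rambus_gbps_per_pin)
```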

“We're really excited about the Terabyte Bandwidth Initiative and the technologies that we've developed,” said Steven Woo, a senior principal engineer at Rambus. “The work of a large team of our scientists and engineers is pushing memory signaling technology to new levels of performance.”

Of course, a little explanation is required on how a technology that enables 16Gbps of bandwidth per DQ link could result in a Terabyte per second of throughput. Rambus’ aim for the technology is to grant Terabyte bandwidth to a system-on-a-chip (SoC) architecture, which may be achieved with 16 DRAMs operating at 16Gbps, each 4 bytes wide.
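The arithmetic behind that claim works out as follows (a hypothetical calculation based on the figures in the article):

```python
# 16 DRAM devices, each 4 bytes wide (32 DQ links), each link at 16Gbps.
gbps_per_link = 16
links_per_device = 4 * 8   # 4 bytes wide -> 32 DQ links per device
num_devices = 16

total_gbps = gbps_per_link * links_per_device * num_devices
total_gigabytes_per_s = total_gbps / 8

print(total_gbps)             # 8192 Gbps
print(total_gigabytes_per_s)  # 1024 GB/s, i.e. roughly a Terabyte per second
```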

Another innovation that Rambus plans to integrate into its Terabyte memory initiative is FlexLink C/A (command/address), which the company claims is the industry’s first full-speed, scalable, point-to-point C/A link – with the C/A running at full speed alongside the DQ. FlexLink C/A also simplifies the interface between the memory controller and DRAM: where traditional legacy interfaces may require 12 wires, FlexLink C/A can operate point-to-point with just two.

Furthermore, FlexLink C/A is named for the flexibility it gives system designers: the overhead wires freed by the FlexLink C/A interface may be devoted to more data wires. Conversely, the design may offer greater bandwidth with the addition of more FlexLink C/A wires, making the technology more easily scalable.

Rambus’ Terabyte Bandwidth Initiative will use a fully differential memory architecture, employing differential signaling for both the C/A and DQ. While current DDR3 and GDDR5 memories use differential signaling only for data and strobe, Rambus aims for full differential signaling on both the DQ and C/A. Going fully differential improves signal integrity, particularly in low-voltage electronics such as memory.

While this Terabyte bandwidth memory method isn’t slated for market until 2011, Rambus has recently received early silicon capable of demonstrating its technology. The early test rig uses emulated DRAM chips, connected to a Rambus memory controller at a 32X data rate capable of 64Gbps. Rambus will show its silicon test vehicle this Wednesday at the Rambus Developer Forum in Tokyo, Japan.






Wondering...
By drank12quartsstrohsbeer on 11/26/2007 10:15:32 AM , Rating: 2
Why can't the memory used on video cards be used for main system memory? From the numbers it appears to be a whole lot faster.

What's the downside? Price? Power/Heat? Latency?

I think products such as the GTX ultra and Raptors show that price and heat are not an issue, if a tangible speed increase exists.

Yeah, Intel/AMD would have to create a new memory controller, but they are doing that anyway for their future video card products.




RE: Wondering...
By mackx on 11/26/2007 10:59:59 AM , Rating: 1
isn't that bandwidth at least partially dependent on the fact that the RAM and the GPU itself are so damn close together? i would imagine if you introduced distances like from a CPU to the system RAM you would have problems.


RE: Wondering...
By DeepBlue1975 on 11/26/2007 12:08:31 PM , Rating: 5
Graphics processors are massively parallel, and can benefit from a stupidly high bandwidth.
And, also, they process a lot of multi megabyte images every second, so they also need to move data very fast.

CPUs, on the other hand, are not so parallel and normal PC applications are not designed to make heavy use of multi threading.

Thus, in GPUs, every time you overclock the memory frequency you get a noticeable improvement at higher resolutions / anti-aliased modes, while in CPUs, to get any kind of benefit from higher bandwidth, you have to push the CPU's clock through the roof to take advantage of it.

Just think that every time Intel or AMD released a chip of the same family and frequency, but with a different FSB / HTT link speed, at stock speeds you couldn't notice any performance gain, not even using super expensive, ultra high bandwidth RAM.

On the other hand, CPU usage patterns generally dictate that they have to fetch small quantities of data, very often. And in those situations lower latency helps. But bandwidth does not.


RE: Wondering...
By drank12quartsstrohsbeer on 11/26/2007 2:10:25 PM , Rating: 2
Are you suggesting that increasing the system memory's speed does not improve performance?


RE: Wondering...
By DigitalFreak on 11/26/2007 3:24:15 PM , Rating: 4
For the most part, yes. Read some of the articles on Anandtech regarding upgrades to the C2D FSB, for example.


RE: Wondering...
By drank12quartsstrohsbeer on 11/26/2007 5:38:10 PM , Rating: 4
groovy. I still have some 30 pin simms laying around. time to put them to work.


RE: Wondering...
By DeepBlue1975 on 11/29/2007 8:16:21 AM , Rating: 2
Yep, it's like the law of diminishing returns.

You can find lots of articles throughout CPU history where some chip maker launched a new CPU of the same family and frequency as another existing model, but sporting one single difference: a faster FSB speed (or, for that matter, official support for faster memory modules when using a 1:1 mem / FSB ratio).

The performance is always virtually the same, at least at stock speeds. When overclocking, things can change a bit, though just a bit: you can usually gain somewhere from 1 - 5% in performance with a heavily overclocked CPU if you also overclock the memory speed.


RE: Wondering...
By ChronoReverse on 11/26/2007 12:22:36 PM , Rating: 2
It's interesting. The advent of PCI-E (full duplex channels to the video card) and DX10 in Vista was supposed to allow for that since it (gpu memory) was supposed to be fully virtualized and available.

Somewhere along the way, things broke down (Nvidia decided they didn't want to implement that) and it was moved out of DX10. I believe that DX10.1 has this again (but of course DX10.1 "doesn't matter").


RE: Wondering...
By scrapsma54 on 11/26/2007 5:28:41 PM , Rating: 1
CPUs are more about number crunching, not input. However, dual core computers allow for better synthesis, such as low data input but high data output, and they are more vulnerable to latency. Intel Core 2 Duo processors remedy this and allow larger chunks of data to be processed in as few clocks as possible. GPUs are not vulnerable to latency because they can take each channel and process information in huge chunks, and latency is not important because the human eye won't notice these changes. CPUs are more for physics and AI, which are less demanding on memory resources but demanding on CPU resources. Ever gamed on a single core computer? You will notice that on games like Gears of War, where the physics parameters are decoupled from the frame rate, the frame rate of the physics objects is slow while the game runs as normal. Dual core relieves the timing involved so the cores don't have to wait for information to pass through to move on.


RE: Wondering...
By Hoser McMoose on 11/26/2007 9:12:46 PM , Rating: 3
Some reasons:

1. Trace length. Memory on a video card is usually within a few cm of the memory controller; on a motherboard it's often up to 20cm away. This limits the speeds at which you can effectively transfer data.

2. Multidrop buses, yucky for transmission of data, similar to the above.

3. Price vs. performance. With the two above you're looking at more expensive and lower performance parts than on video cards. Now add in that the benefit for a general purpose CPU is MUCH less than for a video card. In fact, today's CPUs benefit relatively little from increased bandwidth. The amount of extra performance you would get would be small, cost would be high.

Video cards eat memory bandwidth for breakfast, CPUs generally do not. The biggest limiting factor on CPUs is still usually latency, not bandwidth. Being able to blast a kazillion bytes a second does no good when all you need is a single 32-bit int value. Being able to get that int value in 3 clock cycles (cache) vs. 300 clock cycles (main memory) does do some good.
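The latency-versus-bandwidth point above can be made concrete with a toy model (the numbers below are illustrative round figures, not measurements): for a single 32-bit fetch, total time is access latency plus transfer time, and the transfer term is negligible.

```python
# Time to fetch one 32-bit value = access latency + transfer time.
# Hypothetical numbers: a 3GHz CPU with 16 GB/s of memory bandwidth.
bytes_needed = 4
bandwidth_bytes_per_ns = 16.0   # 16 GB/s is about 16 bytes per ns

cache_latency_ns = 1.0          # ~3 cycles at 3GHz
dram_latency_ns = 100.0         # ~300 cycles at 3GHz

transfer_ns = bytes_needed / bandwidth_bytes_per_ns  # 0.25 ns

cache_total_ns = cache_latency_ns + transfer_ns      # 1.25 ns
dram_total_ns = dram_latency_ns + transfer_ns        # 100.25 ns

# Doubling bandwidth only shaves the 0.25ns transfer term;
# the ~100ns DRAM latency dominates the total.
print(cache_total_ns, dram_total_ns)
```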


RE: Wondering...
By MVR on 11/27/2007 4:28:44 AM , Rating: 3
First of all, I know this has been stated. But I feel Rambus has some seriously bad karma, and even after being spanked by the industry they should absolutely not be allowed to corner any market technology or surprise anyone with a monopolistic move after some technology of theirs is adopted by all. Keep an eye on these guys.

Ok, second.. Great news on memory speed. Does this mean Micron and similar companies will be able to fab this memory type without being sucked into huge licensing fees?

In reply to your comment about video cards accessing system memory.. I have a second idea. I feel there should be a standard for a "secondary shared memory pool" that the BIOS recognizes and presents to the system as a public resource for devices such as raid controllers, graphics cards, windows virtual memory, etc. My idea is that you put the high performance spendy ram on your motherboard, and utilize mass amounts of cheap ($25/GB) ram on a PCIe card.

http://forums.anandtech.com/messageview.aspx?catid...

http://forums.anandtech.com/messageview.aspx?catid...













Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki