Rambus speaks to us about next generation FB-DIMM technology

We recently reported on the progress of next-generation memory and, specifically, how FB-DIMM technology would pave the way for faster, more data-efficient servers. The technology transforms the way chipsets, motherboards, and memory modules are designed. With simplicity and speed in mind, FB-DIMM technology moves memory interfaces from a parallel to a serial design, much as parallel ATA gave way to serial ATA.

This week, we had the chance to speak with Steven Woo, a Senior Principal Engineer at Rambus. Steven spoke to us about Rambus' involvement in FB-DIMM and where the technology is headed. At IDF last week, we saw Intel Dempsey-based servers running with FB-DIMM modules.

DailyTech: What is Rambus’ involvement in FB-DIMM and how has Rambus’ technology given advantages to FB-DIMM?

Woo: Although Rambus was not involved in the definition of FB-DIMM, there are several issues addressed by the FB-DIMM standard that Rambus addressed in the past. One of the most important aspects of FB-DIMMs is that they provide increased memory bandwidth while also maintaining high memory capacity. Throughout our history, Rambus has provided high memory bandwidths, so it's not surprising that Rambus recognized and addressed several challenges with achieving high memory bandwidths and high memory capacities in our previous work. Rambus has addressed difficulties with maintaining the electrical integrity of high-speed links, and also realized that address and control are important as well. In our past work, Rambus has developed technologies for buffering these signals in order to achieve both high memory bandwidths and high memory capacities. This buffering includes clock recovery and regeneration, device and module selection, and a protocol for routing the control and data through the buffers.

Traditional multi-drop (or stub-bus) memory topologies like those found in desktop PCs today trade off signaling speed against memory capacity (the number of memory modules on a memory channel). As signaling speeds rise, fewer memory modules are supported per channel, due in part to the capacitance and stubs that each module adds. In addition to the benefits described above, buffering (as in FB-DIMMs) helps by electrically isolating each module and changing the bus topology so that there is reduced capacitance on the data wires and stubs are eliminated. The reduced capacitance and elimination of stubs allow data to be transmitted at higher speeds.
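The capacity/speed trade-off Woo describes can be sketched with a toy first-order model. All component values below are hypothetical, chosen only to show the trend: each extra module on a shared bus adds load capacitance, and the resulting RC settling time caps the usable transfer rate.

```python
# Illustrative sketch (not vendor data): a multi-drop bus accumulates
# capacitance with every module it carries, and the RC-limited settling
# time bounds how fast bits can be signaled reliably.

def max_signaling_rate_hz(n_modules, c_base_f=10e-12, c_per_module_f=5e-12,
                          r_driver_ohm=25.0):
    """Rough RC model: allow ~3 RC time constants per bit for settling."""
    c_total = c_base_f + n_modules * c_per_module_f  # total bus capacitance
    rc = r_driver_ohm * c_total                      # RC time constant
    return 1.0 / (3.0 * rc)

# More modules -> more capacitance -> lower achievable signaling rate.
for n in (1, 2, 4, 8):
    print(f"{n} modules: ~{max_signaling_rate_hz(n) / 1e6:.0f} MT/s")
```

Buffering sidesteps this by giving each link only one load, so the achievable rate no longer falls as modules are added.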

FB-DIMMs use differential point-to-point signaling between modules, as well as a packetized protocol for commands and data. Packetized protocols have been used in Rambus' past memory products to reduce the width (number of wires) of the bus, which eases routing and signal integrity in the memory system. Rambus also adopted differential point-to-point signaling for XDR DRAM, which will debut in the Sony PlayStation 3, in part because of its better noise immunity.
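The width-versus-rate trade-off behind packetization can be illustrated with rough arithmetic. The widths and transfer rates below are hypothetical, not FB-DIMM figures: a narrow link that spends several transfers per payload can match a wide parallel bus if its per-wire signaling rate is high enough.

```python
# Hypothetical illustration: the same payload can cross a narrow, fast,
# packetized link in comparable time to a wide, slower parallel bus.

payload_bits = 72                            # e.g. one data word + ECC (illustrative)
wide_bus = dict(wires=72, rate_mtps=400)     # parallel: whole payload in one transfer
narrow_bus = dict(wires=10, rate_mtps=4800)  # serial lanes: payload split into packets

def transfer_ns(bus):
    transfers = -(-payload_bits // bus["wires"])   # ceiling division
    return transfers / bus["rate_mtps"] * 1e3      # MT/s -> nanoseconds

print(f"wide:   {transfer_ns(wide_bus):.2f} ns over {wide_bus['wires']} wires")
print(f"narrow: {transfer_ns(narrow_bus):.2f} ns over {narrow_bus['wires']} wires")
```

Fewer wires also means easier board routing and fewer signals to keep matched, which is the signal-integrity benefit Woo mentions.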

DailyTech: Is FB-DIMM the logical next step for the server space or are there better alternatives?

Woo: FB-DIMM is certainly a reasonable next step for server memory. As processor speeds have continued to rise, memory is becoming more of a limiting factor for the performance of all types of computing systems. While server memory systems need to provide more memory bandwidth, they cannot afford to reduce memory capacity (see Question 4 below). FB-DIMM allows memory bandwidths to increase, while also allowing memory capacities to remain high.

DailyTech: Can FB-DIMM technology be applied to memory technologies other than DDR?

Woo: Although I have not seen the spec, in principle there is nothing about placing buffers on a memory module (à la FB-DIMM) that requires DDR memory to be used. In practice, RDRAM or XDR could also be used on buffered memory modules. As mentioned in question 1 above, the use of buffers on memory modules was investigated in earlier work by Rambus.

DailyTech: Where do you see FB-DIMM giving the most benefit compared to what’s available today?

Woo: A big benefit I can see FB-DIMMs providing is increased memory capacity and increased memory bandwidth for servers. Without buffered architectures like FB-DIMM, server memory systems would need to run at lower speeds (compared to unbuffered architectures) in order to achieve high memory capacities. For many server applications, the most important performance metric is memory capacity. Servers that access large in-memory databases, or that manipulate datasets from many threads of computation, require data to be stored in memory to avoid disk accesses that can take several orders of magnitude longer than accesses to memory. The latencies associated with thrashing pages between disk and memory can easily dominate total execution time, dramatically reducing system performance.
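Woo's point about disk accesses costing "several orders of magnitude" more than memory accesses is easy to quantify with back-of-envelope arithmetic. The latency figures below are typical ballpark values, not numbers from the interview:

```python
# Back-of-envelope sketch of the memory/disk latency gap and why even
# rare page faults dominate average access time. Illustrative figures:
dram_ns = 100          # ~100 ns per DRAM access
disk_ns = 5_000_000    # ~5 ms per disk seek + read

ratio = disk_ns / dram_ns
print(f"A disk access costs roughly {ratio:,.0f}x a DRAM access")

# Even a tiny fault rate swamps the average:
fault_rate = 0.001     # 1 in 1000 accesses misses memory and hits disk
avg_ns = (1 - fault_rate) * dram_ns + fault_rate * disk_ns
print(f"Average access with {fault_rate:.1%} fault rate: {avg_ns:,.0f} ns")
```

With these numbers, one fault per thousand accesses makes the average access roughly fifty times slower than pure DRAM, which is why capacity (keeping the working set in memory) matters so much for servers.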

As mentioned above, FB-DIMMs are a reasonable solution for servers and other systems that need high memory bandwidth and high memory capacity. However, in some computing systems cost, memory latency, and power are more important concerns than high memory capacity. For systems in which memory capacity is not a primary concern, the additional buffer chip adds cost, power, and latency, which runs counter to the primary concerns of these systems.

Clearly, FB-DIMM technology has a lot of potential going forward, and it can be paired with any type of memory, whether DDR2, DDR3, or whatever comes next.
