



24 times capacity is just the beginning

Serial high-speed data paths
Memory constraints are paving the way for a new way to connect memory to chipsets using serial technology

In the server space, having lots of memory is crucial to the performance of information-serving machines. These days, with demand for data growing in both volume and speed, the amount of RAM a server can hold versus the speed at which that RAM can operate is a tough compromise. With current parallel technology, an enormous amount of engineering goes into getting a chipset to communicate with memory modules. Worst of all, parallel interfaces tend to require far more signal paths than serial ones.

Engineers are now facing a serious problem. Manufacturers and customers are demanding more memory -- meaning more modules. Unfortunately, more modules and memory channels mean running more electrical lanes, and that is becoming increasingly difficult given the limited amount of room on a motherboard (server boards or otherwise).

FB-DIMMs, or Fully Buffered DIMMs, alleviate this technical problem by replacing the wide data channels that traditionally take up a lot of room with a small number of ultra-high-speed lanes connected to a buffer integrated directly on the DIMM. Memory modules using the new technology will have their data lines connected serially, similar to the way PCI Express lanes work. The high-speed lanes carry data while leaving room for chipsets to support capacities not possible with today's registered DIMMs. In fact, FB-DIMM technology can give a chipset the capability to support up to 192GB or more and a blistering bandwidth of 40GB/sec; compare that to today's 8GB capacity for the same pin count.
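The 192GB and 40GB/sec figures follow from the topology Intel described for FB-DIMM at the time: up to six channels per memory controller with eight daisy-chained DIMMs per channel. A back-of-the-envelope sketch (module size and per-channel throughput are assumptions, not quoted in the article):

```python
# Assumed FB-DIMM topology: up to 6 channels per controller,
# 8 DIMMs daisy-chained per channel (per Intel's early briefings).
CHANNELS = 6
DIMMS_PER_CHANNEL = 8
GB_PER_DIMM = 4            # 4GB modules -- high-end for 2006

capacity_gb = CHANNELS * DIMMS_PER_CHANNEL * GB_PER_DIMM
print(capacity_gb)          # 192

# Per-channel throughput roughly tracks DDR2-667 (~6.7 GB/s peak);
# six channels running in parallel approach the 40 GB/s aggregate.
bandwidth_gbs = CHANNELS * 6.7
print(round(bandwidth_gbs)) # 40
```

The key point is that capacity and bandwidth both scale with channel count, and serial signaling is what makes six channels fit in the pin budget of a parallel design's one or two.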

At the moment, FB-DIMM is exclusive to the server space because costs will be high, and the consumer space simply doesn't require the massive amounts of memory that FB-DIMM benefits. Today we already have servers that support 16GB to 32GB of memory. The initial goal for FB-DIMM will be to drive costs down; once that is accomplished, higher-capacity and greater-bandwidth designs can be developed, and the engineering required to do so will be easier. For memory capacities in the 8GB to 16GB range, FB-DIMM currently does not offer much of an advantage. For anything higher, the rewards are clearly substantial.

What exactly will happen with current-generation DDR technology? Not much. For the short term, DDR2 will be the next step, and by year's end Intel and others will be introducing DDR3. FB-DIMM is an extension of DDR technology, not a replacement: the buffer chip sits on a DDR module, acting as the communications controller between the memory chips and the system chipset via a serial link.
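The buffer's job, in essence, is translation: it receives framed packets over the narrow serial link and re-issues them as conventional parallel DDR commands to the chips on the module. A minimal sketch of that idea, with an entirely invented frame layout (the real FB-DIMM protocol is far more elaborate):

```python
# Illustrative only: the frame format below is made up to show the
# serial-to-parallel translation role of the on-DIMM buffer. It is
# NOT the actual FB-DIMM/AMB wire protocol.

def decode_frame(frame: int):
    """Unpack a hypothetical 32-bit serial frame into a DDR-style
    (command, bank, address) tuple for the on-module DRAM chips."""
    command = (frame >> 28) & 0xF     # top nibble: read/write/etc.
    bank    = (frame >> 24) & 0xF     # next nibble: bank select
    address = frame & 0xFFFFFF        # low 24 bits: row/column address
    return command, bank, address

# A "read" (cmd=1) aimed at bank 2, address 0x000100:
print(decode_frame(0x12000100))  # (1, 2, 256)
```

Because the chipset only ever talks to the buffer, the DRAM chips behind it can remain ordinary DDR parts, which is why FB-DIMM extends rather than replaces DDR.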

It also appears that memory companies with a strong focus on enthusiasts and power users will be designing and producing FB-DIMM modules for those who demand higher capacities. "FB-DIMM technology could make sense for end users, however FB-DIMMs really only make sense in system memory configurations in excess of 8GB. It may be worth noting that OCZ has dedicated a team of engineers towards taking advantage of FB-DIMM technology for enthusiast grade products," said Ryan Petersen, OCZ Technology's CEO.

Some of us here at DailyTech actually have upwards of 8GB of DDR memory in our systems. Apple's OS X, for example, truly shines with lots of memory, and the difference is much more pronounced than it is in Windows. Current-generation games are also beginning to push desktop memory requirements further: titles such as Battlefield 2 perform best with 1.5GB or more of memory in the system.

Technology in general is moving faster than it ever has, and the pace continues to accelerate. The enterprise space is crying out for FB-DIMM technology, and it will not be long before enthusiasts are demanding the same.


Comments



RE: Latency?
By Araemo on 2/20/2006 3:37:22 PM , Rating: 2
Bingo!

This is also, as alluded to above, the main issue with RDRAM. Back in the day, people would have paid the price premium if it had been a more substantial performance increase. However, due to higher latencies, RDRAM just barely edged out SDRAM, but was sometimes nearly double the price (after factoring in the extra chipset/motherboard cost to support RDRAM). The same scenario is playing out somewhat in DDR vs. DDR2. DDR2 increased latencies to allow higher clock speeds. DDR2 won't start fully outperforming DDR1 until DDR2-600.. (DDR2-533 is good, but it merely 'ties' DDR1 in too many situations for it to be worth the extra cost in a head-to-head comparison...)

FB-DIMMs will indeed increase latency. The idea is to increase clock speed enough to hide that increase. And with multiple serial 'lanes', perhaps they plan to parallelize the lanes so that independent memory requests can be serviced at the same time... which further hides latencies.
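The latency-hiding idea the commenter describes can be sketched with a toy model: if independent requests can be spread across lanes, each lane only serializes its share, so total time for a burst of requests shrinks even though any single request is no faster. The numbers below are illustrative, not real FB-DIMM timings:

```python
import math

def total_time_ns(requests: int, lanes: int, access_ns: int = 60) -> int:
    """Toy model: spread independent requests across lanes; each lane
    services its share back-to-back. Per-request latency is unchanged,
    but the batch finishes sooner with more lanes."""
    per_lane = math.ceil(requests / lanes)
    return per_lane * access_ns

print(total_time_ns(8, 1))  # 480 -- fully serialized on one lane
print(total_time_ns(8, 4))  # 120 -- four lanes overlap the requests
```

This is the same trick memory interleaving and multi-channel controllers already play; FB-DIMM's serial links just make adding channels cheaper in pins.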


RE: Latency?
By GoatMonkey on 2/20/2006 4:00:59 PM , Rating: 2
To me the lanes sound like parallel. It's almost like serial plus some parallelism, but not as much, to save space and cost.

We'll see how this works out with the coming quad+ core cpus and multiple quad+ core cpus in servers. Maybe it will be ok. Probably a good bit better than RDRAM at least.




