



The test chips Rambus will use in its demonstration  (Source: Rambus Inc.)
Rambus plans to deliver huge leap in memory bandwidth by 2011

Rambus Inc. plans to announce this Wednesday a new memory signaling technology initiative targeted at delivering a Terabyte-per-second of memory bandwidth, which the company touts as a solution for next-generation multi-core, game and graphics applications.

Rather than simply increasing the clock speed of memory to achieve higher output, Rambus looks to boost bandwidth with a 32X data rate. Just as DDR memory technology doubles the number of transfers per clock cycle, Rambus’ proposed technology transfers data at 32 times the reference clock frequency. With 32X technology, the memory company is targeting a bandwidth of 16Gbps per DQ link with memory running at 500MHz. In contrast, today’s DDR3 at 500MHz achieves a bandwidth of 1Gbps per link.
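
As a rough sanity check of those numbers (a minimal sketch; the 500MHz clock and the 2X/32X multipliers are the article's figures, the rest is simple arithmetic):

    # per-DQ-link data rate = reference clock x transfers per clock cycle
    ref_clock_hz = 500e6              # 500 MHz reference clock
    ddr3_rate  = ref_clock_hz * 2     # DDR transfers twice per clock  ->  1 Gbps per link
    rambus_32x = ref_clock_hz * 32    # proposed 32X signaling         -> 16 Gbps per link
    print(ddr3_rate / 1e9, rambus_32x / 1e9)   # 1.0 16.0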

“We're really excited about the Terabyte Bandwidth Initiative and the technologies that we've developed,” said Steven Woo, a senior principal engineer at Rambus. “The work of a large team of our scientists and engineers is pushing memory signaling technology to new levels of performance.”

Of course, a little explanation is required on how a technology that gives a single DQ link 16Gbps of bandwidth could result in a Terabyte per second of throughput. Rambus’ aim for the technology is to grant Terabyte bandwidth to a system on a chip (SoC) architecture, and such may be achieved with 16 DRAMs operating at 16Gbps per link, with each device 4 bytes wide.
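
In back-of-the-envelope terms (again only a sketch built from the figures above; the 32-links-per-device count simply restates the 4-byte width):

    gbps_per_link  = 16            # per-DQ-link rate from 32X signaling at 500MHz
    links_per_dram = 4 * 8         # 4 bytes wide per device = 32 DQ links
    dram_count     = 16            # DRAMs attached to the SoC
    total_gbps = gbps_per_link * links_per_dram * dram_count   # 8192 Gbps
    print(total_gbps / 8, "GB/s")  # 1024 GB/s, i.e. about one Terabyte per second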

Another innovation that Rambus plans to integrate into its Terabyte memory initiative is FlexLink C/A (command/address), which the company claims is the industry’s first full-speed, scalable, point-to-point C/A link – with the C/A running at full speed along with the DQ. FlexLink C/A also simplifies the interface between the memory controller and DRAM. For example, where a traditional legacy interface may require 12 wires, FlexLink C/A can operate point-to-point with just two.

Furthermore, FlexLink C/A is named for the flexibility it gives system designers: the overhead wires freed by the leaner C/A interface may be devoted to additional data wires. Conversely, the model may offer greater bandwidth through the addition of more FlexLink C/A wires, making the technology more easily scalable.
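
Purely as an illustration of that trade-off (the 12-wire and 2-wire figures come from the article; the 64-pin budget is an assumed, hypothetical number):

    pin_budget = 64                 # assumed total signal pins for the memory interface
    legacy_ca  = 12                 # traditional command/address wires (article's figure)
    flex_ca    = 2                  # minimal point-to-point FlexLink C/A (article's figure)
    print(pin_budget - legacy_ca)   # 52 pins left for data with a legacy C/A bus
    print(pin_budget - flex_ca)     # 62 pins left for data; the freed wires become extra DQ links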

Rambus’ Terabyte Bandwidth Initiative will use a fully differential memory architecture, employing differential signaling for both the C/A and DQ. While current DDR3 and GDDR5 memories use differential signaling only for clocks and strobes, Rambus aims for full differential signaling on both the DQ and C/A. Advantages of going fully differential include better signal integrity, particularly in low-voltage electronics such as memory.

While this Terabyte bandwidth memory method isn’t slated for market until 2011, Rambus has recently received early silicon capable of demonstrating its technology. The early test rig uses emulated DRAM chips, connected to a Rambus memory controller at a 32X data rate capable of 64Gbps. Rambus will show its silicon test vehicle this Wednesday at the Rambus Developer Forum in Tokyo, Japan.






Wondering...
By drank12quartsstrohsbeer on 11/26/2007 10:15:32 AM , Rating: 2
Why can't the memory used on video cards be used for main system memory? From the numbers it appears to be a whole lot faster.

What's the downside? Price? Power/Heat? Latency?

I think products such as the GTX ultra and Raptors show that price and heat are not an issue, if a tangible speed increase exists.

Yeah, Intel/AMD would have to create a new memory controller, but they are doing that anyway for their future video card products.




RE: Wondering...
By mackx on 11/26/2007 10:59:59 AM , Rating: 1
isn't that bandwidth at least partially dependent on the fact that the RAM and the GPU itself are so damn close together? i would imagine if you introduced distances like from a CPU to the system RAM you would have problems.


RE: Wondering...
By DeepBlue1975 on 11/26/2007 12:08:31 PM , Rating: 5
Graphics processors are massively parallel, and can benefit from a stupidly high bandwidth.
And, also, they process a lot of multi megabyte images every second, so they also need to move data very fast.

CPUs, on the other hand, are not so parallel and normal PC applications are not designed to make heavy use of multi threading.

Thus, in GPUs, every time you overclock the memory frequency you get a noticeable improvement in the higher resolution / anti-aliased modes, while in CPUs, to get any kind of benefit from higher bandwidth, you have to overclock the CPU's clock through the roof to take advantage of it.

Just think that every time Intel or AMD released a chip of the same family and frequency, but with different FSB / HTT link speed, at stock speeds you couldn't notice any performance gain, not even using super expensive, ultra high bandwidth ram.

On the other hand, CPU usage patterns generally dictate that they have to fetch small quantities of data, very often. And in those situations lower latency helps. But bandwidth does not.


RE: Wondering...
By drank12quartsstrohsbeer on 11/26/2007 2:10:25 PM , Rating: 2
Are you suggesting that increasing the system memory's speed does not improve performance?


RE: Wondering...
By DigitalFreak on 11/26/2007 3:24:15 PM , Rating: 4
For the most part, yes. Read some of the articles on Anandtech regarding upgrades to the C2D FSB, for example.


RE: Wondering...
By drank12quartsstrohsbeer on 11/26/2007 5:38:10 PM , Rating: 4
groovy. I still have some 30 pin simms laying around. time to put them to work.


RE: Wondering...
By DeepBlue1975 on 11/29/2007 8:16:21 AM , Rating: 2
Yep, it's like the diminishing returns theory.

You can find lots of articles through CPU history where some chip maker launched a new CPU of the same family and frequency as another existing model, but sporting one single difference: a faster FSB speed (or, for that matter, official support for faster memory modules when using a 1:1 ratio for mem / FSB).

The performance is always virtually the same, at least at stock speeds. When overclocking, things can change a bit, though... but just a bit: you can usually gain somewhere from 1 - 5% in performance when using a heavily overclocked CPU if you also overclock the memory speed.


RE: Wondering...
By ChronoReverse on 11/26/2007 12:22:36 PM , Rating: 2
It's interesting. The advent of PCI-E (full duplex channels to the video card) and DX10 in Vista was supposed to allow for that since it (gpu memory) was supposed to be fully virtualized and available.

Somewhere along the way, things broke down (Nvidia decided they didn't want to implement that) and it was moved out of DX10. I believe that DX10.1 has this again (but of course DX10.1 "doesn't matter").


RE: Wondering...
By scrapsma54 on 11/26/2007 5:28:41 PM , Rating: 1
CPUs are more about number crunching, not input. However, dual core computers allow for better synthesis, such as low data input but high data output. They are more vulnerable to latency. Intel Core 2 Duo processors remedy this and allow larger chunks of data to be processed in as few clocks as possible. GPUs are not as vulnerable to latency because they can take each channel and process information in huge chunks, and latency is not as important because the human eye won't notice those changes. CPUs are more for physics and AI, which are less demanding on memory resources but demanding on CPU resources. Ever gamed on a single core computer? You will notice that in games like Gears of War, where the physics parameters are decoupled from the frame rate, the movement of physics objects is slow while the game runs as normal. Dual core relieves the timing involved so the cores don't have to wait for information to pass through to move on.


RE: Wondering...
By Hoser McMoose on 11/26/2007 9:12:46 PM , Rating: 3
Some reasons:

1. Trace length. Memory on a video card is usually within a few cm of the memory controller; on a motherboard it's often up to 20cm away. This limits the speeds at which you can effectively transfer data.

2. Multidrop buses, yucky for transmission of data, similar to the above.

3. Price vs. performance. With the two above you're looking at more expensive and lower performance parts than on video cards. Now add in that the benefit for a general purpose CPU is MUCH less than for a video card. In fact, today's CPUs benefit relatively little from increased bandwidth. The amount of extra performance you would get would be small, cost would be high.

Video cards eat memory bandwidth for breakfast, CPUs generally do not. The biggest limiting factor on CPUs is still usually latency, not bandwidth. Being able to blast a kazillion bytes a second does no good when all you need is a single 32-bit int value. Being able to get that int value in 3 clock cycles (cache) vs. 300 clock cycles (main memory) does do some good.
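
To put rough, purely illustrative numbers on that point (the 3 GHz clock, 300-cycle miss and 10 GB/s figures below are assumptions, not from the post above):

    cpu_clock_hz  = 3e9       # assumed 3 GHz CPU
    miss_cycles   = 300       # assumed main-memory round-trip latency in cycles
    bandwidth_Bps = 10e9      # assumed 10 GB/s of sustained memory bandwidth

    # one dependent 32-bit load: the CPU waits out the full latency no matter how wide the pipe is
    t_single_load = miss_cycles / cpu_clock_hz     # ~100 ns
    # streaming 100 MB sequentially: bandwidth-bound, so doubling bandwidth roughly halves it
    t_stream_100MB = 100e6 / bandwidth_Bps         # ~10 ms
    print(t_single_load, t_stream_100MB)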


RE: Wondering...
By MVR on 11/27/2007 4:28:44 AM , Rating: 3
First of all, I know this has been stated, but I feel Rambus has some seriously bad karma, and even after being spanked by the industry they should absolutely not be allowed to corner any market technology or surprise anyone with a monopolistic move after some technology of theirs is adopted by all. Keep an eye on these guys.

Ok, second.. Great news on memory speed. Does this mean Micron and similar companies will be able to fab this memory type without being sucked into huge licensing fees?

In reply to your comment about video cards accessing system memory.. I have a second idea. I feel there should be a standard for a "secondary shared memory pool" that the BIOS recognizes and presents to the system as a public resource for devices such as raid controllers, graphics cards, windows virtual memory, etc. My idea is that you put the high performance spendy ram on your motherboard, and utilize mass amounts of cheap ($25/GB) ram on a PCIe card.

http://forums.anandtech.com/messageview.aspx?catid...

http://forums.anandtech.com/messageview.aspx?catid...


I love Rambus
By GhandiInstinct on 11/26/2007 9:31:54 AM , Rating: 1
They're a company whose purpose is to drive memory technology to the forefront. They're always working on innovation.

Their prices are high and there's virtually no market support but Rambus is the company I want to buy memory from and I just hope other companies like Intel adopt Rambus as they did for P4.

They remind me of Crytek's slogan "Maximum Game".

"Maximum Memory"




RE: I love Rambus
By Chemtype on 11/26/2007 10:19:12 AM , Rating: 1
No, Rambus is more like "Maximum Patents", and if a company develops technology which works in a similar way (which is unavoidable), they'll sue them.

Memory manufacturers have wanted Rambus to die for a long time, for good reasons.


RE: I love Rambus
By Axiomatic on 11/26/2007 2:21:11 PM , Rating: 2
Rambus = proprietary = licensing = consumers lose.

JEDEC = industry standard = free = consumers win.

'Nuff said.


RE: I love Rambus
By Hoser McMoose on 11/26/2007 9:31:04 PM , Rating: 3
quote:
JEDEC = industry standard = free = consumers win.

Free? Think again my friend! The patents still exist, they're just held by other companies! Remember Rambus WAS part of JEDEC and they aren't the only R&D company out there with memory patents.

Rambus tried to go it their own way and used some pretty slimy legal tactics to do so. Had they played by the JEDEC rules we likely would all be using some form of memory with a lot of Rambus technology and patents in it right now.


RE: I love Rambus
By Calin on 11/27/2007 3:17:05 AM , Rating: 2
There are patents on the JEDEC-approved memory - yet their use is permitted royalty free by the members of JEDEC.
Rambus introduced memory technologies into the JEDEC specifications, technologies for which it had previously filed patents. After those JEDEC standards were ratified, Rambus quit JEDEC and turned its patents loose on the JEDEC technologies that infringed them.


RE: I love Rambus
By Treckin on 11/26/2007 10:35:23 AM , Rating: 3
"Maximum Price"


RE: I love Rambus
By DigitalFreak on 11/26/07, Rating: -1
RE: I love Rambus
By DeuceHalo on 11/27/2007 3:18:13 PM , Rating: 2
"Maximum Lawsuits"


Faster memory is useless
By xNIBx on 11/26/2007 1:16:25 PM , Rating: 2
Memory isn't a bottleneck for cpus. Ddr2@667mhz with crappy latencies is almost as good as ddr3@2ghz. What does that tell you? Current applications don't need faster memory. Things might change a little as utilization of multicore cpus gets better and physics gets implemented into games, but still, I don't think we need faster memory.

Memory isn't a bottleneck for gpus. The best gpu atm is the 8800gt and it has a 256bit memory interface and gddr3@1.8ghz, which is enough. Even with current technology we could make graphics cards with a 512bit memory interface and gddr4@2.8ghz easily. But we don't need to do that.

So on both cpus and gpus, we could increase the memory speed by 3 times but we don't do it because we don't need it. Rdram is and always will be a failure because it doesn't address consumer issues.

Even if we get both cpu and gpu on the same die, memory speed shouldn't be an issue. Not to mention that a same-die cpu+gpu will most likely be for budget systems anyway.




RE: Faster memory is useless
By retrospooty on 11/26/2007 2:40:16 PM , Rating: 2
"Memory isnt a bottleneck for cpus. Ddr2@667mhz with crappy latencies is almost as good as ddr3@2ghz. What does that tell you?"

You are partially correct. Yes, it's true that with current chipsets and CPUs memory isn't a bottleneck, but that is because current chipsets and CPUs are designed around the available memory and designed to take advantage of what is available. Have you also noticed that a quad core Penryn at 3GHz is really not much faster than a low end single core Celeron in most apps? This is because memory is a bottleneck. Not to say that DDR2 or 3 specifically isn't fast enough, but the whole platform is designed around this slow DDR2/3 memory.

If you have a whole new architecture designed from the ground up, that has high bandwidth and low latency you would see large performance improvements.

With all of that said, let's not assume Rambus is the answer; they have proven to be full of spoot in the past... We have to wait and see what is available in the future, and what the price and performance is. One thing I am sure of is that Intel won't be fooled again. They will surely do a better job of research this time around.


RE: Faster memory is useless
By DigitalFreak on 11/26/2007 3:28:03 PM , Rating: 2
I question your logic.


RE: Faster memory is useless
By retrospooty on 11/26/2007 4:39:12 PM , Rating: 2
Well, my overly convoluted point was in response to this comment

"Memory isnt a bottleneck for cpus. Ddr2@667mhz with crappy latencies is almost as good as ddr3@2ghz. What does that tell you?"

My point was that Ddr2@667mhz is almost as good as ddr3@2ghz because the chipset and CPU are all designed around the same slow DDR SDRAM foundation. Uproot the foundation and start anew with much faster memory and you will see much faster results, in theory. Especially with more and more cores being put onto a CPU.


RE: Faster memory is useless
By Treckin on 11/27/2007 12:38:40 AM , Rating: 2
if you're questioning bottlenecks, then the most serious bottleneck in any computer currently, even with the bitmicro SSDs at 100MB/s, is the storage. It's always been the case that as you increase capacity, you decrease performance. That is why you have varying levels of cache in a computer, from L1, L2, L3, system RAM and HDD cache down to the ultimate storage level.
The entire computer is designed around getting information from the slower storage medium to the faster logic circuits and their dedicated, faster caches. The ultimate goal, or ideal situation, would be to have all of the computer's storage available on-die. Logistically this is currently impossible, in terms of die real estate and prohibitive cost.
What we are stuck with, therefore, is tackling one bottleneck at a time. The big breakthrough will come with fiber optic lanes to the CPU, which are an order of magnitude greater in terms of how much data can be simultaneously transmitted. While at that scale the resistance to the electrons is basically negligible, optical signals can carry far more data per unit of time (bandwidth) and diameter of lead.
Until then, it's one level of cache to the next. Any advancement to the bandwidth of the final link in the chain, the storage medium, is a great benefit to computing as a whole.


Knowing RAMBUS, though...
By killerroach on 11/26/2007 9:24:21 AM , Rating: 2
...the latencies will be through the roof and it will be out of the price range of mere mortals.

That being said, this is phenomenal stuff. That sort of bandwidth on a GPU would be a sick, twisted, surreal, but glorious experience.




RE: Knowing RAMBUS, though...
By retrospooty on 11/26/2007 9:45:06 AM , Rating: 2
Hopefully Intel learned its lesson and won't do it unless they can do it at a competitive price/performance (I assume they did learn since they dumped Rambus RDRAM and said it was a mistake)


RE: Knowing RAMBUS, though...
By Gravemind123 on 11/26/2007 9:08:04 PM , Rating: 2
Don't forget the heat, RDRAM got so hot I had a friend burn himself on the heatspreader covering one of his RIMMs!


Uh-Huh...
By AstroCreep on 11/26/2007 11:34:18 AM , Rating: 2
...wake me when it really happens.
It's good to see they have their next three-years planned out though (and those plans don't include 'Bankruptcy' yet).




Rambus...
By Creig on 11/26/2007 12:17:44 PM , Rating: 2
The second time's the charm?





When I think of rambus.
By SavagePotato on 11/26/2007 6:44:14 PM , Rating: 2
All I ever think of when I hear the name Rambus is the company that thought it was going to get away with charging $900 for 128 megabytes of RAM, in a day when SDRAM was going for less than $200 for the same amount.

They figured it was the highway to gold by wrangling their way in with Intel supporting only Rambus. I would say they were partially responsible for the success of the original Athlon; at the time, only third-party chipsets offered SDRAM support for Intel.




change
By Silver2k7 on 11/29/2007 7:12:15 AM , Rating: 2
So they are going for innovation or something like it instead of lawsuits.. that's nice for a change.




Old News!
By superdynamite on 11/30/2007 2:32:51 PM , Rating: 1
They should have just bought a PS3.




Infringment?
By AlvinCool on 11/26/07, Rating: -1
RE: Infringment?
By Calin on 11/26/2007 9:14:01 AM , Rating: 2
Memory using technologies patented by Rambus is already in use - XDR in the Playstation 3, for example.
Rambus developed (or tried to develop) fast memory with a low pin count. While on PCs (with their 168-pin SDR modules) pin count is not a problem, in small devices this space comes at a premium.


RE: Infringment?
By Screwballl on 11/26/2007 10:29:00 AM , Rating: 1
Rambus is another patent hog, get a generalized patent on something like this and when another company in the future claims something similar they sue them... Rambus is one that needs to die and die quickly... but history proves they will not... and this will stifle innovation (and double the price of a computer build due to the price they will expect)


RE: Infringment?
By ecktt on 11/26/2007 11:39:29 AM , Rating: 2
Well if that is the case, why don't the other manufacturers do the R&D and beat them to the punch?


RE: Infringment?
By Murst on 11/26/2007 11:51:15 AM , Rating: 3
Although rambus has had some shady practices with patents (a certain case of creating a standard that relied on their own patents comes to mind), calling them patent hogs is idiotic.

These people have an R&D department. They are one of the primary reasons why memory is where it is today. Sure, someone else would have done the same with time, but RAM manufacturers have pretty much shown they don't care much for innovation when it comes to RAM.

If you were to look at all the major advancements in RAM, they pretty much came from either rambus or IBM. It wasn't the chip makers, yet people seem to think that the chip makers would take over if Rambus were to fold.

At least there's a company out there that will research it. It takes a lot of money to develop and test these technologies. There are also a lot of very smart people who need to be hired to make this happen. Yet some people think that Rambus should be giving this out for free.


RE: Infringment?
By Screwballl on 11/26/2007 12:46:49 PM , Rating: 2
The reason for my comment was that Rambus places generalized patents out there so either the company has to pay Rambus or they get sued. Other companies do have their own R&D but Rambus holds so many of these patents that it is impossible to make something better without crossing into one of Rambus' generalized patents.


RE: Infringment?
By Fritzr on 11/26/2007 11:52:31 AM , Rating: 2
Shouldn't affect the price of mainstream PCs. Will give the developers of DDR a goal to aim at :) Terabit bandwidth with 32bit or 64bit transfer is now on the horizon :)

It really speeds things up when the goals are defined.

The hidden costs of Terabit bandwidth include:
4bit wide memory -- 16 memory modules per bank for 64 bit data
(32bit DDRx requires pairs on 64bit memory motherboards for this reason)
RDRAM requires blank modules in unused memory slots (a minor issue, but it was made to seem serious when new)
RDRAM latency increases with the number of modules installed (this one is a major issue for performance)
Rambus charges relatively high prices for both the memory it manufactures and per-unit licensing. This was the primary reason that RDRAM systems failed in the market. In a properly designed system, latency was not a big problem when comparing performance to the competition at the time RDRAM came out.
Rambus loves patent lawsuits... expect that history to drive development of similar performance using an unrelated architecture. Highspeed DDR owes a lot to the original RDRAM for promoting interest in highspeed memories :)


RE: Infringment?
By DigitalFreak on 11/26/2007 3:32:05 PM , Rating: 2
quote:
highspeed memories :)


I prefer highspeed mammories, but to each their own.


RE: Infringment?
By Calin on 11/27/2007 3:13:59 AM , Rating: 2
"RDRAM requires blank modules in unused memory slots"
You mean if one of my RIMMs goes bad, and I've misplaced the blank memory modules that came with the mainboard, I can't run in reduced memory configuration? And until a new pair of memory modules comes, my computer is bricked?
Yay
"In a properly designed system, latency was not a big problem when comparing performance to the competition at the time RDRAM came out."
When RDRAM came out, system performance was equal to high speed SDRAM (on Pentium3). And RDRAM cost was almost 10 times higher than the said SDRAM.
Pentium4 made good use of the added bandwidth (and its architecture somewhat negated the increased latency). The prices for RDRAM were lower, and SDRAM needed two channels to be competitive. Yet, SDRAM was the cheaper option.
Dual channel DDRAM blew away any hope of competitiveness for RDRAM in desktop space.


RE: Infringment?
By Hoser McMoose on 11/26/2007 9:16:28 PM , Rating: 4
To be fair to Rambus, they seem to have kicked the lawyers out, or at least into another building. With XDR and this new technology it looks like they've really gotten back to doing honest-to-goodness R&D work for their money.

Unfortunately for the engineers there, though, their innovations are never going to be fully realized. The Rambus lawyers just pissed off WAY too many of their customers (the memory manufacturers).

Who knew that trying to sue every single one of your potential customers might turn out to be a bad business plan?


"This is from the DailyTech.com. It's a science website." -- Rush Limbaugh










