


50nm lowers cost and power consumption

New 50nm DDR3 DIMMs from Samsung and Elpida are entering mass production this month. The chips will feature higher densities and speeds while lowering latencies, power consumption, and costs.

Elpida's new 50nm process uses 193nm argon fluoride immersion lithography combined with copper interconnect technology, providing a 25 percent speed boost over standard aluminum interconnects. A standard chip size of less than 40mm² means that there will be more dies produced per wafer, lowering costs once the line matures and yields are maximized.
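As a rough illustration of why a sub-40mm² die matters, here is a back-of-the-envelope sketch using the classic die-per-wafer approximation and an assumed 300mm wafer (the comparison die size is hypothetical, and yield is ignored):

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic die-per-wafer approximation: wafer area divided by die area,
    minus an edge-loss term for partial dies along the circumference."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# A 40mm^2 die on a 300mm wafer vs. a hypothetical 60mm^2 die:
print(dies_per_wafer(300, 40))  # 1661 candidate dies
print(dies_per_wafer(300, 60))  # 1092 -- fewer chips per wafer
```

Smaller dies also tend to yield better, since each die is less likely to contain a defect, which compounds the cost advantage.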

The new chips are capable of 2.5Gb/s at a standard 1.5V, but can also be used at 1.2V at up to 1.6Gb/s. Initial production will be at 1Gb densities. This enables new usage models in the mobile and server application space.
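For scale, the per-pin rate translates to module bandwidth roughly as follows (a back-of-the-envelope sketch assuming a standard 64-bit DIMM and peak rates only):

```python
def module_bandwidth_gbps(per_pin_gbit_s: float, bus_width_bits: int = 64) -> float:
    """Peak module bandwidth in GB/s: per-pin rate times bus width,
    divided by 8 bits per byte."""
    return per_pin_gbit_s * bus_width_bits / 8

print(module_bandwidth_gbps(2.5))  # 20.0 GB/s per module at the full rate
print(module_bandwidth_gbps(1.6))  # 12.8 GB/s at the reduced-voltage rate
```

Real-world throughput will be lower than these peak figures once refresh, bank conflicts, and command overhead are accounted for.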

Corsair's Dominator GT 2GHz CL7 DDR3 DIMMs were shown at CES, and they may enter production this month using Elpida's latest.

Meanwhile, the world's only profitable DRAM producer is not standing still.

Samsung's own 50nm process is being used to manufacture 2Gb DDR3, and is expected to become Samsung's primary DRAM process technology this year. They claim a 60% increase in productivity over their DDR2 equivalents.

Qimonda taped out its 46nm DDR3 Buried Wordline technology in November, ahead of their internal schedule. They hope to start mass production by mid 2009.

Many other DDR3 producers are also looking to lower geometries in preparation for AMD's AM3 socket launch and Intel's Lynnfield launch, both of which will use DDR3 and accelerate market demand.

The price premium of DDR3 could drop from 100% to 10% by the time Lynnfield and Windows 7 launch together in Q3. Intel will be using DDR3 exclusively on its 32nm Westmere CPUs.

Due to a massive glut of DDR2 chips, there are almost no plans to upgrade factories to 50nm for DDR2. Instead, DDR2 production will transition to surplus 65nm and 70nm equipment currently used for DDR3.

According to International Data Corporation, an IT market research firm, DDR3 sales will account for 29 percent of the total DRAM market by units sold in 2009. This will grow to 72 percent in 2011.






Faster memory? Meh...
By Denithor on 1/20/2009 9:03:24 AM , Rating: 3
Today's chips aren't bandwidth restricted like chips from a few years back were.

P4D with Netburst was highly sensitive to memory bandwidth.

The C2D/C2Q chips don't have any real problems with bandwidth as they feature a large enough L2 cache to buffer the data moving into the cores.

AMD A64/X2/X3/X4 and now Intel Core i7 got around the blockage by using an IMC (integrated memory controller) that optimizes the flow of data/work from system memory into the cache to keep the cores fed.

There's lots of benchies out there showing the lack of impact of faster memory speed on system performance (in anything beyond memory benchmarks, that is).




RE: Faster memory? Meh...
By amanojaku on 1/20/09, Rating: 0
RE: Faster memory? Meh...
By Reclaimer77 on 1/20/2009 9:53:19 AM , Rating: 2
You are confusing RAM speed with memory bandwidth, btw. One does not guarantee the other.


RE: Faster memory? Meh...
By ExarKun333 on 1/20/2009 10:19:05 AM , Rating: 3
+1.

Also, memory bandwidth will only become more important as more cores are used (8+). That is why the IMC is very important for future CPUs, even though Intel was very successful with the existing FSB.


RE: Faster memory? Meh...
By Natfly on 1/20/2009 10:25:36 AM , Rating: 2
quote:
Today's chips aren't bandwidth restricted


quote:
The problem according to the team is the lack of memory bandwidth


Both are referring to bandwidth. Can you clarify? No one mentioned memory speed.


RE: Faster memory? Meh...
By Reclaimer77 on 1/20/2009 10:35:03 AM , Rating: 2
quote:
There's lots of benchies out there showing the lack of impact of faster memory speed on system performance (in anything beyond memory benchmarks, that is).


The OP posted this. Which is correct.

Then the other guy countered with the DailyTech link to the Sandia bandwidth article. Frankly even linking it as if it were gospel is suspect imo.

Which is then why I said he was confusing bandwidth with speed.

Go look at the benchmarks on Crucial's "Dominator" line. The fastest ram you can get your hands on. Notice you do NOT get better system performance by having faster ram.


RE: Faster memory? Meh...
By onelittleindian on 1/20/2009 10:42:15 AM , Rating: 2
You're confused. A 4 core chip isn't bandwidth limited (as the article says). An 8 or 16 core chip is.

We'll need this faster memory when the 8-core cpus come out next year.


RE: Faster memory? Meh...
By B3an on 1/20/2009 10:30:27 PM , Rating: 2
I agree with you.

I've also found something interesting with my i7 platform and fast DDR3. If I disable the HDD page file in Vista, so that a game uses my RAM instead of the HDD when I run out of graphics memory, then the DDR3 is sometimes fast enough to keep playable frame rates, whereas with the HDD I get single-digit FPS.

Take GTA4 for instance: that game needs 1.6GB of graphics RAM @ 2560x1600 with full settings - and that isn't even with AA - while I have a GTX 295 (896MB usable). GTA4 will actually run perfectly fine at these settings with HDD paging disabled... but if I enable HDD paging, the game becomes completely unplayable.
With most games, though, DDR3 still is not fast enough, but it remains a big improvement over the HDD. So I welcome faster memory.


RE: Faster memory? Meh...
By Spoelie on 1/20/2009 11:44:32 AM , Rating: 2
Actually the OP posted 5 sentences referencing bandwidth, and just in the last one speed, talk about selective quoting. Also, looking at the context, speed was meant as bandwidth.

BTW speed with all else being equal (memory technology, width, ...) IS directly related to bandwidth. Double the speed = double the bandwidth in that case.
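To make that "double the speed = double the bandwidth" point concrete (illustrative numbers only, assuming a 64-bit channel and peak transfer rates):

```python
def peak_bw_gb_s(transfers_mt_s: int, bytes_per_transfer: int = 8) -> float:
    """Peak bandwidth in GB/s for a 64-bit (8-byte) channel
    at a given transfer rate in MT/s."""
    return transfers_mt_s * bytes_per_transfer / 1000

print(peak_bw_gb_s(1066))  # 8.528 GB/s
print(peak_bw_gb_s(2132))  # 17.056 GB/s -- double the rate, double the bandwidth
```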

It looks like the only one who's being confused is you.

Faster system performance is VERY relative. Such broad reaching statements are by definition void. If I time how long it takes to open a window in WinXP on a Core i7 with either one or three channels, and come to the conclusion they are within a nanosecond of each other, did I prove bandwidth was irrelevant to system performance?


RE: Faster memory? Meh...
By William Gaatjes on 1/20/2009 1:26:36 PM , Rating: 3
That is usually the case when the test program and its data fit into the cache. When memory needs to be accessed, the execution cores are pushing wait states, meaning they cannot do anything. Memory bandwidth is just as important as latency. Low latency and high bandwidth is preferred, but both cannot always be accomplished. Latency can be hidden by the use of prefetchers, though. Prefetchers can load data into the cache before the data is needed. But prefetchers love bandwidth, the more the better, because they are constantly loading data upfront before it is needed.
One of the reasons the Penryns do so well while still having a FSB.
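A rough sketch of how a prefetcher hides latency (made-up cycle counts, not figures for any specific chip): the prefetch has to be issued far enough ahead that the data lands in cache before the loop reaches it.

```python
def prefetch_distance(mem_latency_cycles: int, cycles_per_iteration: int) -> int:
    """How many loop iterations ahead a prefetch must be issued so the
    data arrives before it is needed (ceiling division)."""
    return -(-mem_latency_cycles // cycles_per_iteration)

print(prefetch_distance(200, 8))  # 25 iterations ahead of the current one
```

The hidden cost is bandwidth: every prefetched line still crosses the memory bus, which is why aggressive prefetching "loves bandwidth".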


RE: Faster memory? Meh...
By icrf on 1/20/2009 1:57:46 PM , Rating: 2
What is "memory speed" if not bandwidth? Latency? "Speed" is a general term that can refer to either IMO.


RE: Faster memory? Meh...
By ekv on 1/20/2009 2:32:01 PM , Rating: 2
Usually we talk about memory 'bandwidth' and memory 'latency'. Bandwidth can be had in abundance, it just takes a little longer to achieve. Think of a firehose: from when you turn it on till it's pouring out is not instantaneous, but when it arrives, oy! meshugge!

Latency is the harder of the two. It is why most microprocessor makers have layers of cache designed into their memory hierarchy. L1 cache is very "fast", in this case read "low-latency and high-bandwidth". The trade-off is that it is expensive and hence its size is limited. L2 cache typically has higher latency than L1, is not as expensive and hence has a larger size. And so on. There are lots of other tricks to hide latency.
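The layered-latency idea can be put in numbers with the standard average-memory-access-time formula (illustrative cycle counts and miss rates, not measurements of any real part):

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time: hit time plus miss rate times miss penalty."""
    return hit_time + miss_rate * miss_penalty

# Two-level hierarchy: the L1 miss penalty is itself the L2's average access time.
l2_time = amat(hit_time=10, miss_rate=0.10, miss_penalty=100)   # ~20 cycles
l1_time = amat(hit_time=4, miss_rate=0.05, miss_penalty=l2_time)  # ~5 cycles
print(l1_time)
```

Even with slow main memory, a small fast L1 in front of a larger L2 keeps the average access close to the L1 hit time, which is the whole point of the hierarchy.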

When the Sandia study on multiple cores is taken into account -- which is a factor in why Amdahl's law works, if memory serves me -- you have various cores stepping on each other's cache. The more cores, the worse the problem. Having a really, really large cache somewhat mitigates this. I recall reading over at EETimes, about 2 years ago, that IBM developed a 48MB DRAM-based memory that fit in the size of a typical on-chip microprocessor cache and was nearly as "fast" (as an SRAM memory, where DRAM has higher bit density, by, like, over double). I thought at the time, and posted such, that it would've been a slam dunk for AMD to license the technology from IBM. Hey, AMD, why didn't you knuckleheads ask me about this? 8)

[Good thing Hector is hasta-la-vista-baby].


RE: Faster memory? Meh...
By larson0699 on 1/20/2009 10:04:16 PM , Rating: 2
DOMINATOR is a Corsair product, not Crucial. Well overrated and overpriced.

To be general, before K8 and Core, latency had a much greater impact on system performance, not only because of the longer instruction pipelines of earlier processors, but also because of the difference in clock speeds versus today. The modern processor crunches so many more instructions per clock (and with its own controller and large cache, no less) that RAM latency matters very little anymore in the grand scheme. Still, for everything that *hasn't* changed in microcomputer architecture, we see plenty of applications (archives, encoding, and anything else most don't run) that saturate the memory bus to the point that moving every 7 turns instead of 8 or 9 makes a huge difference.

If someone asks me to build them a system, though, I'm going for that mid-grade Mushkin or OCZ, because anything beyond that is more cost than any gain I would deem practical.



Copyright 2014 DailyTech LLC.