


Ashwood memory architecture allows for much faster memory speeds

Chipmakers realized long ago that extracting more performance from computer processors could be accomplished in ways other than simply reducing the size of the manufacturing process to squeeze more transistors onto a die.

One of the ways chipmakers improved performance was by building multi-core CPUs, like Intel's Penryn processors, that can work on multiple streams of data in parallel. Memory chips, however, have not been able to keep up with the performance increases we are seeing in processors, creating a bottleneck in computer systems and other devices.

In order to tackle this problem, a cryptographer named Joseph Ashwood has developed a new memory architecture that allows for multi-core memory.

Ashwood dubbed his memory architecture the Ashwood Architecture. According to EETimes, the Ashwood architecture integrates smart controller circuitry next to the memory array on a single chip. This gives hundreds of concurrent processes parallel access to the memory array, leading to increased throughput and lower average access times.
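Ashwood's exact design remains unpublished, but the claimed benefit can be pictured with a simple banked-memory model. The sketch below is hypothetical, with made-up bank counts and timings, not the actual architecture: if an on-chip controller can steer independent requests to many banks at once, a batch of accesses finishes in roughly the time taken by the busiest bank rather than the sum of all accesses.

```python
# Hypothetical sketch of banked, parallel memory access -- not Ashwood's
# actual (unpublished) design. Requests to different banks proceed
# concurrently; requests that land in the same bank serialize.
from collections import defaultdict

NUM_BANKS = 16            # assumed number of independent banks
BANK_ACCESS_TIME_NS = 50  # assumed per-access latency within a bank

def service_time(addresses):
    """Time to service a batch of addresses when each bank handles its
    own requests independently and all banks work in parallel."""
    per_bank = defaultdict(int)
    for addr in addresses:
        per_bank[addr % NUM_BANKS] += 1          # simple bank interleaving
    busiest = max(per_bank.values(), default=0)  # banks overlap in time
    return busiest * BANK_ACCESS_TIME_NS

serial = 256 * BANK_ACCESS_TIME_NS      # one-at-a-time memory, 256 accesses
parallel = service_time(range(256))     # same 256 accesses spread over 16 banks
print(f"serial: {serial} ns, banked/parallel: {parallel} ns")
```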

Ashwood says, “My design borrows extensively from today's modern multicore CPUs. As far as concurrency goes, my memory architecture shares some features with Fibre Channel.”

Ashwood says his architecture can hit 16 Gbytes per second, compared to the DDR2 limit of 12 Gbytes per second. The hallmark of the Ashwood architecture is that the larger the number of bit cells in the memory, the better the performance.
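For context, the DDR2 figure lines up with simple peak-bandwidth arithmetic. Here is a back-of-the-envelope check, assuming a dual-channel DDR2-800 configuration:

```python
# Back-of-the-envelope DDR2 peak bandwidth (assumes dual-channel DDR2-800).
transfers_per_sec = 800e6   # DDR2-800: 800 million transfers per second
bus_width_bytes = 8         # 64-bit channel = 8 bytes per transfer
channels = 2                # dual-channel configuration

peak_gb_per_sec = transfers_per_sec * bus_width_bytes * channels / 1e9
print(f"DDR2-800 dual-channel peak: {peak_gb_per_sec:.1f} GB/s")  # ~12.8 GB/s
```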

Ashwood does admit to a couple of downsides to his design. The first is that the design exists only on paper; although it has been independently verified by researchers from Carnegie Mellon University, no version of the architecture has been tested at the electrical signal level.

The second drawback is that the parallel-access overhead of the architecture slows access to individual memory cells. However, Ashwood says that the parallel nature of his architecture more than makes up for any slowdown by executing more commands at the same time.
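This is the classic latency-versus-throughput trade. A toy model with made-up latencies and concurrency figures, not a simulation of the actual architecture, shows how keeping many requests in flight can more than offset slower individual accesses:

```python
# Illustrative latency-vs-throughput model with made-up numbers.
# Sustained throughput of a pipelined memory is roughly
# (requests in flight) / (per-request latency).
def throughput_mops(latency_ns, requests_in_flight):
    return requests_in_flight / latency_ns * 1e3   # million operations per second

conventional = throughput_mops(latency_ns=50, requests_in_flight=1)     # low latency, serial
parallelized = throughput_mops(latency_ns=80, requests_in_flight=100)   # slower cells, high concurrency
print(f"conventional: {conventional:.0f} Mops/s, parallel: {parallelized:.0f} Mops/s")
```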

Ashwood has filed a patent application on his architecture that is still pending; until the patent is granted, the intricate details of the design remain unknown.



Comments



RE: Bandwidth?
By SilentSin on 1/17/2008 11:03:26 AM , Rating: 2
I'm thinking that the logic chip that is on board these sticks is the culprit for adding to access times. Similar to FB-DIMMs and buffered memory, adding logic only increases latency. However, this would be case dependent: a single read/write instruction might be carried out faster on lower-latency normal RAM, but if you had multiple instructions to perform, the parallel approach would of course be faster.

On a semi-related note, I've always wondered how SSDs are now being built. I had always assumed that they were using a similar type of parallel architecture to increase performance. Similar to RAID-0, only it's internal to the drive itself, across multiple flash chips. Otherwise, I can't see how they increased throughput from the usually quite pedestrian speeds of flash memory to numbers that are now comparable to real-world IDE and SATA figures. Isn't this idea kind of old hat?
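The RAID-0 analogy the commenter describes can be sketched in a few lines. This is a toy illustration only, not any real controller's firmware: stripe the data round-robin across several flash channels so the chips can be programmed in parallel.

```python
# Toy sketch of RAID-0-style striping across flash channels inside an SSD;
# illustrative only, with assumed channel count and page size.
NUM_CHANNELS = 4
PAGE_SIZE = 4096  # bytes per flash page (assumed)

def stripe(data: bytes):
    """Split data into pages and deal them round-robin across channels,
    so each channel's flash chip can be programmed concurrently."""
    channels = [[] for _ in range(NUM_CHANNELS)]
    for i in range(0, len(data), PAGE_SIZE):
        channels[(i // PAGE_SIZE) % NUM_CHANNELS].append(data[i:i + PAGE_SIZE])
    return channels

chunks = stripe(b"\x00" * (PAGE_SIZE * 8))
print([len(c) for c in chunks])  # 8 pages spread as 2 per channel
```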


RE: Bandwidth?
By mahax on 1/17/2008 11:44:47 AM , Rating: 2
Yeah, this also resembles a crossbar-type memory controller. So is it more beneficial to segment the RAM on the chip itself or at the memory controller? On-chip has the advantage that the RAM can be broken down into more, smaller blocks. A crossbar has to work with the existing hardware and is ultimately restricted by the width of the bus.
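Either way, the underlying trick is the same: let multiple requesters reach different banks in the same cycle and serialize only on conflicts. A toy arbitration sketch, illustrative only and with an assumed bank count:

```python
# Toy crossbar arbitration sketch (illustrative only): several requesters
# can reach different banks in the same cycle; requests that map to the
# same bank conflict and are deferred to a later cycle.
def arbitrate(requests, num_banks=8):
    """requests: list of (requester_id, address). Returns (granted, deferred)."""
    granted, deferred, claimed = [], [], set()
    for req_id, addr in requests:
        bank = addr % num_banks
        if bank in claimed:
            deferred.append((req_id, addr))   # bank already busy this cycle
        else:
            claimed.add(bank)
            granted.append((req_id, addr))
    return granted, deferred

# 0x10 and 0x18 map to the same bank, so one of them waits a cycle.
print(arbitrate([(0, 0x10), (1, 0x21), (2, 0x18)]))
```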


"Google fired a shot heard 'round the world, and now a second American company has answered the call to defend the rights of the Chinese people." -- Rep. Christopher H. Smith (R-N.J.)

Related Articles
Engineers Explain 45nm Delays, Errata
January 16, 2008, 10:32 AM













botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki