
The Ashwood memory architecture promises much faster memory speeds

Chipmakers realized long ago that extracting more performance from computer processors could be accomplished in ways other than simply reducing the size of the manufacturing process to squeeze more transistors onto a die.

One of the ways chipmakers improved performance was by building multi-core CPUs, like Intel's Penryn processors, that allow for parallel execution of instructions. Memory chips, however, have not kept pace with these processor performance gains, creating a bottleneck in computer systems and other devices.

In order to tackle this problem, a cryptographer named Joseph Ashwood has developed a new memory architecture that allows for multi-core memory.

Ashwood dubbed his memory architecture the Ashwood Architecture. According to EETimes, the architecture integrates smart controller circuitry next to the memory array on a single chip. This gives hundreds of concurrent processes parallel access to the memory array, leading to increased throughput and lower average access times.
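The article gives no implementation details (the patent was still pending), but the general idea of a controller exposing one memory array as many independently accessible banks can be sketched as follows. All names and the interleaving scheme here are illustrative assumptions, not Ashwood's actual design:

```python
# Hypothetical sketch: a controller splits a memory array into
# independent banks so many requests can be serviced concurrently.
# The bank count, sizes, and routing policy are assumptions for
# illustration only.

class BankedMemory:
    def __init__(self, num_banks: int, bank_size: int):
        self.num_banks = num_banks
        self.banks = [[0] * bank_size for _ in range(num_banks)]

    def _route(self, addr: int):
        # Interleave addresses across banks so consecutive accesses
        # land in different banks and could proceed in parallel.
        return self.banks[addr % self.num_banks], addr // self.num_banks

    def read(self, addr: int) -> int:
        bank, offset = self._route(addr)
        return bank[offset]

    def write(self, addr: int, value: int) -> None:
        bank, offset = self._route(addr)
        bank[offset] = value

mem = BankedMemory(num_banks=8, bank_size=1024)
mem.write(42, 7)
print(mem.read(42))  # 7
```

With this kind of interleaving, eight consecutive addresses map to eight different banks, which is what lets a smart controller keep many accesses in flight at once.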

Ashwood says, “My design borrows extensively from today's modern multicore CPUs. As far as concurrency goes, my memory architecture shares some features with Fibre Channel.”

Ashwood says his architecture can hit 16 GB/s, compared to the DDR2 limit of 12 GB/s. A hallmark of the Ashwood architecture is that the larger the number of bit cells in the memory, the better the performance.

Ashwood does admit to a couple of downsides to his design. The first is that the design exists only on paper, though it has been independently verified by researchers from Carnegie Mellon University; the architecture has not been tested at the electrical-signal level.

The second drawback is that the architecture's parallel-access overhead slows access to individual memory cells. However, Ashwood says the parallel nature of his design more than makes up for the slowdown by executing more commands at the same time.
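This latency-versus-throughput tradeoff can be illustrated with Little's law (throughput = concurrency / latency). The latencies and concurrency figures below are invented for illustration; the article reports none:

```python
# Toy model of the tradeoff: higher per-access latency can still win
# if many more requests are in flight. All numbers are illustrative
# assumptions, not figures from the article.

def throughput_ops_per_sec(concurrent_requests: int, latency_ns: float) -> float:
    """Little's law: throughput = concurrency / latency."""
    return concurrent_requests / (latency_ns * 1e-9)

# Conventional DRAM: low per-access latency, few outstanding requests.
serial = throughput_ops_per_sec(concurrent_requests=4, latency_ns=50)

# Ashwood-style design: higher per-access latency from controller
# overhead, but hundreds of concurrent in-flight requests.
parallel = throughput_ops_per_sec(concurrent_requests=256, latency_ns=80)

print(f"serial:   {serial / 1e6:.0f} M ops/s")   # 80 M ops/s
print(f"parallel: {parallel / 1e6:.0f} M ops/s")  # 3200 M ops/s
```

Even though each individual access got 60% slower in this toy example, aggregate throughput rose 40-fold, which is the shape of the argument Ashwood is making.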

Ashwood has filed a patent on his architecture that is still pending; until the patent is granted the intricate details of his architecture remain unknown.

Comments

RE: A sign of the times...
By PB PM on 1/17/2008 12:34:08 PM , Rating: 2
True, but if you want to be picky, the IBM Power 5 did that first (almost a year ahead of Dual Core K8s, at least in the server space).

In any case the idea of dual core memory could be just as useful as dual core CPUs have proven to be. One thing I don't understand is that speed of communication between the CPU and memory has not kept up with the speed of the CPU. Honestly though, isn't the biggest bottleneck on modern PCs the HDD?

RE: A sign of the times...
By murphyslabrat on 1/17/2008 1:20:13 PM , Rating: 2
Yes, and that has the potential of changing now, what with SSDs and all.

However, I think it's all a temporary fix, and that at best it will stick around for 10 years. With quantum computing now entering infancy in the market, as opposed to its former place in pure speculation and experimenting, it's only a matter of time before our punch-card analogous electrical circuitry is replaced by the future.

Maybe this kind of thing will assist with photon-signal processing, but I don't see it being useful beyond that.

RE: A sign of the times...
By masher2 on 1/17/2008 2:08:15 PM , Rating: 2
It's going to be quite a bit longer than 10 years before quantum computing enters the mainstream, if ever. We don't even have stochastic versions of most major algorithms, making it impossible at present to even write mainstream software for a quantum machine, even were the hardware available.

RE: A sign of the times...
By PandaBear on 1/17/2008 4:01:31 PM , Rating: 2
Memories are already multi core. The only reason they keep them separate is yield and cost.

RE: A sign of the times...
By Sulphademus on 1/18/2008 8:50:41 AM , Rating: 2
One thing I don't understand is that speed of communication between the CPU and memory has not kept up with the speed of the CPU. Honestly though, isn't the biggest bottleneck on modern PCs the HDD?

While not perfect, weren't the on-chip memory controller and HyperTransport link of the Athlon 64/Phenom supposed to help here specifically?

As to the hard drive, yeah, they're still a lot slower than RAM, though I have noticed that Vista does like to preload an awful lot of stuff into memory. (It seems to target 50% utilization.) So if the OS starts making memory predictions, much like CPUs have done for so long, and pulling this data over to RAM during downtime, it could speed things up some. Or maybe just enough to cover how much heavier an OS Vista is?
