
Ashwood memory architecture allows for much faster memory speeds

Chipmakers realized long ago that extracting more performance from computer processors could be accomplished in ways other than simply reducing the size of the manufacturing process to squeeze more transistors onto a die.

One of the ways chipmakers improved performance was by building multi-core CPUs, like Intel's Penryn processors, that allow for parallel processing of data. Memory chips, however, haven't been able to keep up with the performance gains in processors, creating a bottleneck in computer systems and other devices.

In order to tackle this problem, cryptographer Joseph Ashwood has developed a new memory architecture that brings multi-core-style parallelism to memory.

Ashwood dubbed his memory architecture the Ashwood Architecture. According to EETimes, the Ashwood architecture integrates smart controller circuitry next to the memory array on a single chip. This provides parallel access to the memory array for hundreds of concurrent processes, leading to increased throughput and lower average access times.

Ashwood says, “My design borrows extensively from today's modern multicore CPUs. As far as concurrency goes, my memory architecture shares some features with Fibre Channel.”

Ashwood says his architecture can hit 16 Gbytes per second, compared to the DDR2 limit of 12 Gbytes per second. The hallmark of the Ashwood architecture is that the larger the number of bit cells in the memory, the better the performance.

Ashwood does admit to a couple of downsides to his design. The first is that his design exists only on paper; although it has been independently verified by researchers from Carnegie Mellon University, no implementation of the architecture has been tested at the electrical-signal level.

The second drawback is that parallel access overhead of the architecture slows down access time to individual memory cells. However, Ashwood says that the parallel nature of his architecture more than makes up for any slowdowns by executing more commands at the same time.
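
To see how slower individual accesses can still yield higher overall throughput, consider a rough back-of-the-envelope model. All of the numbers below are hypothetical; the actual Ashwood design parameters have not been published:

    # Rough model: parallel-access overhead raises per-access latency, but
    # many overlapped accesses raise aggregate throughput. All numbers are
    # hypothetical -- the real Ashwood parameters are not public.

    ACCESS_BYTES = 64  # assume each access returns one 64-byte line

    def throughput_gb_per_s(latency_ns, accesses_in_flight):
        """Steady-state throughput with fully overlapped accesses."""
        accesses_per_second = accesses_in_flight / (latency_ns * 1e-9)
        return accesses_per_second * ACCESS_BYTES / 1e9

    # Conventional device: one access at a time, 50 ns each.
    print(throughput_gb_per_s(50, 1))     # ~1.3 GB/s

    # Parallel design: slower per access (80 ns), but 200 in flight.
    print(throughput_gb_per_s(80, 200))   # ~160 GB/s

Even though each individual access takes longer in this sketch, the many concurrent accesses dominate the total data rate.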

Ashwood has filed a patent application on his architecture, which is still pending; until the patent is granted, the intricate details of the design remain unknown.



Comments



RE: Bandwidth?
By geddarkstorm on 1/17/2008 12:27:26 PM , Rating: 4
From what I see, you are both right and wrong--in that you are looking at it from different perspectives and ignoring each other.

For instance, Masher is right in that, for a single unit doing a single operation, the latency hasn't decreased with this method (and may even increase a little due to synchronization with other units). However, Ninjit is also right in that BULK latency, from our perspective, HAS decreased, and substantially.

Going back to your example, Masher: if 10 people do 10 calls each, any one person takes the same time to do their 10 calls as a single person working alone would. However, in bulk you've done 100 calls (10 people times 10 calls) in the time it takes to do just 10. So if you have 10 parallel threads, you've increased your rate of data flow tenfold. Each individual unit still has the same latency, but because so many are going at once, the perceived bulk latency to do 100 operations is vastly reduced. That's the beauty of parallel.


RE: Bandwidth?
By masher2 (blog) on 1/17/2008 1:30:38 PM , Rating: 3
You're confusing latency with bandwidth. They're two different things:

http://en.wikipedia.org/wiki/Latency_(engineering)

What you call "perceived bulk latency" is bandwidth. In general, parallel operation increases bandwidth linearly with the number of units.

Latency, however, is not decreased by parallel operation.
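
A toy calculation makes the distinction concrete (all numbers are made up for illustration):

    # Toy illustration of latency vs. bandwidth (made-up numbers).
    LATENCY_NS = 50   # time for any one access to complete
    UNITS = 10        # parallel memory units

    # Latency: a single access, or a chain of dependent accesses, still
    # pays the full 50 ns each -- parallelism doesn't help here.
    one_access = LATENCY_NS                          # 50 ns, unchanged

    # Bandwidth: 100 *independent* accesses spread across 10 units finish
    # in the time of 10 serial accesses -- a 10x throughput gain.
    hundred_accesses = (100 // UNITS) * LATENCY_NS   # 500 ns instead of 5000 ns

    print(one_access, hundred_accesses)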


RE: Bandwidth?
By Ringold on 1/17/2008 8:27:32 PM , Rating: 3
"Latency (Engineering)"

Interesting.. field-specific definition entries.

I think I should add a "Latency (Economics/Applied Mathematics)" wiki: the delay between the time one looks at an equation with a blank stare and the time that person leaves to get coffee.


RE: Bandwidth?
By Sulphademus on 1/18/2008 8:58:31 AM , Rating: 2
To make another analogy:

Let's say you are delivering packages.

Latency is the speed limit you can drive.

Bandwidth is whether you're driving a pickup or a semi.


RE: Bandwidth?
By Spivonious on 1/25/2008 9:36:01 AM , Rating: 2
Ah, but what if you have 10 pickups all going to different places? Even if your semi can hold 10 pickups worth of packages, you still can't deliver them faster than the 10 pickups could.


RE: Bandwidth?
By Gentleman on 1/17/2008 7:06:09 PM , Rating: 2
A parallel bus has lower setup time compared to a serial bus, but a serial bus has significantly higher bandwidth because it is not affected by clock skew.

It sounds like this guy developed a new controller method that allows multiple simultaneous accesses to memory. This has higher setup time (latency) and higher bandwidth. It could potentially reduce the performance of a single process but increase the performance of multiple processes. I would imagine this new memory architecture would require a new bus as well.
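
A minimal sketch of that tradeoff, with all numbers assumed for illustration:

    # Sketch: higher per-access setup time hurts a lone request stream,
    # but concurrent streams overlap and come out ahead. Numbers assumed.
    OLD_LATENCY_NS = 50
    NEW_LATENCY_NS = 80   # extra setup time per access in the new scheme

    def total_time_ns(n_accesses, latency_ns, streams=1):
        """Time to finish n accesses split evenly across concurrent streams."""
        return (n_accesses // streams) * latency_ns

    print(total_time_ns(1000, OLD_LATENCY_NS))       # 1 process, old: 50,000 ns
    print(total_time_ns(1000, NEW_LATENCY_NS))       # 1 process, new: 80,000 ns (slower)
    print(total_time_ns(1000, NEW_LATENCY_NS, 8))    # 8 processes, new: 10,000 ns (faster)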


"When an individual makes a copy of a song for himself, I suppose we can say he stole a song." -- Sony BMG attorney Jennifer Pariser













