
Sandia simulations reveal memory is the bottleneck for some multi-core processors

Years ago, the hallmark of processor performance was clock speed. As chipmakers hit the wall on how far they could push clock speeds, processor designs moved to multiple cores to increase performance. However, as many users can tell you, performance doesn't always increase as you add more cores to a system.

Benchmarkers know that a quad-core processor often offers less performance than a similarly clocked dual-core processor for some uses. According to Sandia, the reason for this phenomenon is memory availability. Supercomputers have tried to increase performance by moving to multi-core processors, just as the world of consumer processors has done.

The Sandia team has found that simply increasing the number of cores in a processor doesn't always improve performance, and past a certain point performance actually decreases. Sandia simulations have shown that moving from dual-core to quad-core processors offers a significant increase in performance. However, the team found that moving from four cores to eight offers an insignificant performance gain, and moving from eight cores to 16 actually drops performance.

Sandia team members ran their tests using simulations of algorithms for deriving knowledge from large data sets. The team found that at 16 cores, the performance of the system was barely as good as that seen with dual cores.

The problem, according to the team, is a lack of memory bandwidth combined with contention among the cores for each processor's shared memory bus. The team uses a supermarket analogy to explain the problem: if two clerks check out your purchases, the process goes faster; add four clerks and things are quicker still.

However, add eight or 16 clerks and it becomes a problem not only to get your items to each clerk, but the clerks get in each other's way, leading to slower performance than fewer clerks would provide. Team member Arun Rodrigues said in a statement, "To some extent, it is pointing out the obvious — many of our applications have been memory-bandwidth-limited even on a single core. However, it is not an issue to which industry has a known solution, and the problem is often ignored."

James Peery, director of Sandia's Computations, Computers, Information, and Mathematics Center said, "The difficulty is contention among modules. The cores are all asking for memory through the same pipe. It's like having one, two, four, or eight people all talking to you at the same time, saying, 'I want this information.' Then they have to wait until the answer to their request comes back. This causes delays."
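The "same pipe" problem Peery describes can be sketched with a toy scaling model. This is an illustrative sketch only, not Sandia's actual simulation: it assumes some fraction of the work is pure compute (which parallelizes), the rest is memory traffic serialized through one shared bus, plus a small per-core arbitration penalty. The fractions and penalty below are made-up numbers chosen to show the shape of the curve the article describes.

```python
# Toy model of multi-core scaling under a shared memory bus.
# compute_frac and contention are illustrative assumptions, not
# measured values from the Sandia study.
def speedup(cores, compute_frac=0.6, contention=0.02):
    """Relative throughput vs. one core.

    compute_frac: fraction of single-core time that is pure compute
    and scales with core count. The remaining memory traffic is
    funneled through one shared bus, so it does not scale, and
    `contention` adds a per-core arbitration penalty that grows as
    more cores fight over that bus.
    """
    mem_frac = 1.0 - compute_frac
    time = compute_frac / cores + mem_frac + contention * (cores - 1)
    return 1.0 / time

for n in (1, 2, 4, 8, 16):
    print(n, round(speedup(n), 2))
```

With these assumptions the curve matches the article's qualitative result: a clear gain from two to four cores, essentially nothing from four to eight, and a drop at 16 cores back to roughly dual-core territory.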

The researchers say that today there are memory systems available that offer dramatically improved memory performance over what was available a year ago, but the underlying fundamental memory problem remains.

Sandia and ORNL are working together on a project intended to pave the way for exaflop supercomputing. ORNL currently has the fastest supercomputer in the world, Jaguar, which was the first supercomputer to break the sustained petaflop barrier.



RE: Does this mean....?
By Shida on 1/17/2009 1:24:41 PM , Rating: 2
Sorry, when I said "a waste" I meant it colloquially, as in "a waste of spending on performance hardware" if you are going to be running Linux to do basic stuff at a given time

RE: Does this mean....?
By nvalhalla on 1/17/2009 1:57:14 PM , Rating: 1
No, the article clearly states that 4 cores was much better than 2. Check out some of the game benchmarks of 2 vs 4 cores. In new games (Far Cry 2, Warhead, etc.) having more cores helps. Buy the fastest 4 core you can afford (I like the Q9400, it's coming down in price soon to $216)

RE: Does this mean....?
By mindless1 on 1/17/09, Rating: -1
RE: Does this mean....?
By SlyNine on 1/17/2009 3:52:28 PM , Rating: 2
5% is hardly significant when you are doubling the theoretical processing power.

I don't know how you can consider 5% significant. What are you smoking, and can I get some?

RE: Does this mean....?
By mindless1 on 1/17/09, Rating: 0
RE: Does this mean....?
By SlyNine on 1/18/2009 1:55:51 AM , Rating: 2
There's lies, damn lies, and statistics.

5% is not significant on its own, only when paired with a bunch of other 5% increases does 5% become important. That's my OPINION, if 5% is significant to you then sure, but I have never heard it being generally considered significant.

But I can tell you that in many apps, quad core is 2x as fast as dual core and scales linearly from 2 to 4: transcoding files and encoding H.264, for example. I imagine games will too, sooner or later.

RE: Does this mean....?
By garbageacc3 on 1/18/09, Rating: -1
RE: Does this mean....?
By masher2 on 1/18/2009 10:11:13 AM , Rating: 2
> "he already explained to you its statistical significance"

Unfortunately, he's wrong. The 5% level is considered the lowest level of statistical significance-- but that 5% is the level of "alpha", and has nothing to do with a percentage ratio between two different readings. Saying that a single value has increased by 5% and therefore is "statistically significant" is nonsensical.
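The distinction masher2 is drawing, that a 5% difference between two readings is a separate thing from statistical significance at the 5% alpha level, can be sketched numerically. The benchmark numbers below are entirely made up for illustration: two pairs of samples with the same ~5% gap in means, where only the low-variance pair yields a large t statistic.

```python
# Made-up benchmark scores illustrating that a 5% difference in means
# is not, by itself, statistically significant; that depends on the
# run-to-run variance. Not data from the article or thread.
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

noisy_quad = [100, 112, 96, 108, 104]              # mean 104: "~5% faster"
noisy_octo = [92, 104, 110, 98, 96]                # mean 100
steady_quad = [104.2, 105.1, 103.8, 104.5, 104.4]  # mean ~104 again
steady_octo = [100.1, 99.8, 100.3, 99.9, 99.9]     # mean ~100

print(round(welch_t(noisy_quad, noisy_octo), 2))   # small |t|: gap lost in noise
print(round(welch_t(steady_quad, steady_octo), 2)) # large |t|: clearly significant
```

Same ~5% gap both times; only the variance decides whether the test flags it, which is why "5% faster" and "significant at the 5% level" are unrelated statements.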

RE: Does this mean....?
By mindless1 on 1/20/2009 12:28:16 AM , Rating: 2
Except when the other variables are fixed. We aren't trying to predict probability in this case, this is reproducible.

RE: Does this mean....?
By Hlafordlaes on 1/18/09, Rating: 0
RE: Does this mean....?
By SlyNine on 1/18/2009 11:37:29 PM , Rating: 2
Good for you. You get a star on your cheek.
