


Native quad-core en route

Yesterday, during AMD's Q2'06 earnings conference call, AMD President and Chief Operating Officer Dirk Meyer recapped the company's long-term plans.  Although the bulk of his comments had already been made during the June AMD Analyst Day, Meyer added the tidbit that the company plans "to demonstrate our next-generation processor core, in a native quad-core implementation, before the end of the year."  Earlier this year, AMD Executive Vice President Henri Richard claimed this native quad-core processor would be called K8L.

Earlier AMD roadmaps had indicated that quad-core production CPUs would not use a native quad-core design until late 2007 or 2008. To put that into perspective, AMD demonstrated the first dual-core Opteron samples in August 2004, with the processor taping out in June 2004.  The official launch of the dual-core Opteron came on April 21, 2005.  On the same call, Meyer announced that the native quad-core part would launch in the middle of 2007 -- suggesting the non-native quad-core Deerhound designs may arrive earlier than expected, or not at all.

Just this past Wednesday, Intel one-upped AMD's K8L plans by announcing that its quad-core Kentsfield and Clovertown processors will ship this year, rather than in Q1'07 as the company originally slated.





RE: Why is everybody killing AMD????
By Viditor on 7/23/2006 12:31:35 AM , Rating: 2
quote:
most desktop applications will never be able to use dozens of cores at once. So long-term, single-threaded performance still needs to improve


If Intel and AMD have their way, this isn't quite true...
A good example of why not is Intel's "Mitosis" project, which uses speculative threading... for AMD there are only rumours at this point, but I would be shocked if they didn't have their own on-the-fly parallelization of single-threaded apps in the works.
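The idea behind speculative threading can be sketched in a few lines: a second thread starts the later part of a computation early, using a *predicted* intermediate value, and the work is committed only if the prediction turns out to be right. This is a hypothetical toy, not Intel's actual Mitosis design (which works at the hardware/compiler level):

```python
# Toy sketch of speculative threading (hypothetical; not Mitosis itself):
# a speculative worker runs the second half of a loop from a *guessed*
# intermediate value, while the first half runs normally. If the guess
# was wrong, the speculative work is discarded and redone sequentially.
from concurrent.futures import ThreadPoolExecutor

def stage(acc, items):
    for x in items:
        acc = acc + x * x          # some per-item work
    return acc

def run_speculative(items, predicted_mid):
    half = len(items) // 2
    with ThreadPoolExecutor(max_workers=2) as pool:
        # Speculative thread starts early from the predicted value.
        spec = pool.submit(stage, predicted_mid, items[half:])
        actual_mid = stage(0, items[:half])   # non-speculative first half
    if actual_mid == predicted_mid:
        return spec.result()                  # prediction held: commit
    return stage(actual_mid, items[half:])    # misprediction: re-execute

items = list(range(10))
# A correct prediction lets both halves overlap in time.
print(run_speculative(items, predicted_mid=30))   # -> 285
```

Either way the answer is the same; a correct prediction just buys overlap, which is exactly why the quality of the compiler's value prediction decides how much Mitosis-style speculation pays off.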


RE: Why is everybody killing AMD????
By Tyler 86 on 7/23/2006 5:48:59 AM , Rating: 2
I can parallelize a very, very simple application to insane levels, and if I had a processor with enough cores to run each thread in parallel, it would speed up even that simple application.

*Extremely* simple optimizations that I can make in code anywhere can also be made by a compiler, and that is what Intel is aiming at.
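The kind of transformation being described is simple enough to do by hand today: once the iterations of a loop are provably independent, they can be fanned out across workers. An auto-parallelizing compiler would apply the same rewrite automatically (a minimal sketch):

```python
# Splitting an "embarrassingly parallel" loop across worker threads by
# hand -- the same rewrite an auto-parallelizing compiler could apply
# once it proves the iterations share no state.
from concurrent.futures import ThreadPoolExecutor

def work(x):
    return x * x                          # independent per-element work

def serial(xs):
    return [work(x) for x in xs]

def parallel(xs, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(work, xs))   # iterations fan out to workers

print(parallel(range(5)))   # -> [0, 1, 4, 9, 16]
```

The results are identical to the serial version; the compiler's hard part is not the rewrite itself but proving the independence that makes it safe.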

Intel is no longer pushing EPIC (the explicitly parallel IA-64 architecture) hard on developers, and is instead working on implicit threading...

Although the EPIC architecture is fantastic, as are IBM's POWER and other natively parallel processors such as NVIDIA's and ATI's GPUs, they presented a problem from their conception: the original interfaces reflected the low-level assembly.

Now abstraction has taken root, and we have the 'Pixel Shader 3.0' specification for GPUs, with optimizations applying to every minute function...

GNU/Linux has made huge progress in the area of abstraction, and it's reflected in its application on embedded processors, mainframes, and supercomputers...

However, Linux's goals are quite different from your average graphics-centric development... Low-level optimization is still left solely to individual developers, and is in no way part of the abstraction.
GCC is good, but it could be much better, as its frequent updates make obvious.
The GNU standard C library is updated less frequently.
Abstraction optimizations to the standard C libraries and the compiler are key to performance on advancing architectures. With the advent of GCC 4.0, entirely new abstraction capabilities emerged.

The point is, with the progress of abstraction, even 'Hello World' applications will one day use 2 cores efficiently, with a measurable performance advantage over an optimized single-core equivalent... and then 4 cores, and then 8 cores, and so forth...

Microsoft's attempt at abstraction led them toward managed code, asynchronous streams, garbage collection, and JIT (just-in-time) compilation... The result is the .NET Framework you hear so much about.
Now even 'Managed DirectX' has emerged.

If you're interested in seeing JIT-less abstracted .NET code in action, check out Microsoft's Singularity project.
http://research.microsoft.com/os/singularity/

It has its quirks, but because native code can be compiled in a 'trusted' manner, its performance exceeds that of today's Windows 2003 integrated IIS web server.

Obviously, Singularity's not the best way to go if you're looking for gaming, but it could one day be, just as the Microsoft desktop segment migrated from Windows 9x/ME to Windows NT/2000/XP kernels...

Singularity takes abstraction to an entirely new level, making efficient use of any core, any architecture, and any improvements to come.

Optimizations introduced at the bottom scale all the way up to the top 'Just In Time', and vice-versa - it grows 'Just In Time'.

It gives a new, almost 'organic' perspective on operating systems.


RE: Why is everybody killing AMD????
By Tyler 86 on 7/23/2006 5:57:16 AM , Rating: 2
When I say 'JIT-less', I mean Singularity has a compiled assembly base, but everything on top of it - 90% to 95% of the entire operating system, even at boot - is compiled either at or before runtime - core elements being pre-compiled, but compiled to assembly nonetheless, every time...

It's effectively taking the JIT out of a JIT compiler...

... but I guess that's nothing really new, so nevermind.

Sure. It's all JIT.


RE: Why is everybody killing AMD????
By Tyler 86 on 7/23/2006 6:03:28 AM , Rating: 2
quote:
Advances in languages, compilers, and tools open the possibility of significantly improving software. For example, Singularity uses type-safe languages and an abstract instruction set to enable what we call Software Isolated Processes (SIPs). SIPs provide the strong isolation guarantees of OS processes (isolated object space, separate GCs, separate runtimes) without the overhead of hardware-enforced protection domains. In the current Singularity prototype SIPs are extremely cheap; they run in ring 0 in the kernel’s address space.


You're able to effectively run everything you 'trust' as kernel-level ring 0 code. That is as close to the processor as you can get. It boggles the average software developer's mind.


RE: Why is everybody killing AMD????
By Tyler 86 on 7/23/2006 6:13:23 AM , Rating: 2
quote:
Singularity achieves good performance by reinventing the environment in which code executes. In existing systems, safe code is an exotic newcomer who lives in a huge, luxurious home in an elegant, gated community with its own collection of services. Singularity, in contrast, has architected a single world in which everyone can be safe, with performance comparable to the unsafe world of existing systems.


quote:
A key starting point is Singularity processes, which start empty and add features only as required. Modern language runtimes come with huge libraries and expressive, dynamic language features such as reflection. This richness comes at a price. Features such as code access security or reflection incur massive overhead, even when never used.


quote:
A Singularity application specifies which libraries it needs, and the Bartok compiler brings together the code and eliminates unneeded functionality through a process called "tree shaking," which deletes unused classes, methods, and even fields. As a result, a simple C# "Hello World" process in Singularity requires less memory than the equivalent C/C++ program running on most UNIX or Windows® systems. Moreover, Bartok translates from Microsoft® intermediate language (MSIL) into highly optimized x86 code. It performs interprocedural optimization to eliminate redundant run-time safety tests, reducing the cost of language safety.


Because the code is an abstract, recompilable element, and its intentions and boundaries are clearly visible to the compiler, it can run natively at ring 0 at full throttle after its initial compilation.

If you've ever played with .NET, you know just how fancy, abstract, intricate, and easy C# is -- and you probably also know how painfully slow it can be due to its 'management'. Even with 'unsafe' tags, things are hairy compared to C++ and native assemblies.
Singularity is, effectively, nativized C#.
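The "tree shaking" the quoted paper describes is, at its core, reachability analysis over the call graph: start from the entry point, mark everything transitively called, and delete the rest. A minimal sketch on a made-up call graph (this illustrates the idea only, not Bartok's actual implementation):

```python
# "Tree shaking" as plain reachability analysis: keep only functions
# transitively called from the entry point, drop everything else.
# (Hypothetical call graph -- a sketch of the concept, not Bartok.)
call_graph = {
    "main":           ["print_greeting"],
    "print_greeting": ["format_text"],
    "format_text":    [],
    "reflection_api": ["metadata_cache"],   # never reached from main
    "metadata_cache": [],
}

def shake(graph, entry):
    live, stack = set(), [entry]
    while stack:
        fn = stack.pop()
        if fn not in live:
            live.add(fn)
            stack.extend(graph[fn])     # follow outgoing call edges
    return live

print(sorted(shake(call_graph, "main")))
# -> ['format_text', 'main', 'print_greeting']
```

The unreached reflection machinery simply disappears from the binary, which is why a shaken C# "Hello World" can end up smaller than expected.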


RE: Why is everybody killing AMD????
By Tyler 86 on 7/23/2006 6:17:33 AM , Rating: 2
This is only a glimpse of the future.
Fortunately for desktops, unmanaged, untrusted, abstractable JIT code can still exist - just not in Singularity, which is intended as a server operating system.

"Yesterday's" JIT applications immediately recieve the benefits of "today's" processors. Cores, instructions, architectures. It's all in the upgradable compiler, and the upgradable libraries referenced.


By Tyler 86 on 7/23/2006 6:21:42 AM , Rating: 2
quote:
Aggressive interprocedural optimization is possible because Singularity processes are closed—they do not permit code loading after the process starts executing. This is a dramatic change, since dynamic code loading is a popular, but problematic, mechanism for loading plug-ins. Giving plug-ins access to a program's internals presents serious security and reliability problems [snip]... Dynamic loading frustrates program analysis in compilers or defect-detection tools, which can't see all code that might execute. To be safe, the analysis must be conservative, which precludes many optimizations and dulls the accuracy of defect detection.


http://msdn.microsoft.com/msdnmag/issues/06/06/End...


RE: Why is everybody killing AMD????
By masher2 (blog) on 7/23/2006 10:30:12 AM , Rating: 2
> "A good example of why not is Intel's "Mitosis" project which uses speculative threading..."

"Never" is admittedly too strong a word for any tech subject. I'll substitute "not within the next 25 years" instead.

As for Mitosis, remember that it's still very far over the horizon, as it requires hardware support that doesn't exist yet. Furthermore, the amount of parallelism that can be extracted via Mitosis is rather limited; diminishing returns set in hard beyond four cores.


By Viditor on 7/23/2006 7:56:07 PM , Rating: 2
quote:
As for Mitosis, remember that its still very far down the horizon, as it requires hardware support that isn't in existence yet

Actually, the hardware can be any multi-core system (with some very minor tweaks)... it's really only the compiler that isn't ready yet.
In reality, Mitosis could be out by the end of next year, or it could be many years away... it will depend on the compiler team.


RE: Why is everybody killing AMD????
By Viditor on 7/23/2006 8:12:44 PM , Rating: 2
quote:
Furthermore, the amount of parallelism that can be extracted via Mitosis is rather limited. Diminishing returns sets in hard on anything over four cores

Ummm... how could you possibly know about diminishing returns on a system that hasn't been built yet? In addition, from what I've read of the theory, the more cores you have, the BETTER your returns...


RE: Why is everybody killing AMD????
By masher2 (blog) on 7/23/2006 10:45:10 PM , Rating: 2
> "Ummm...how could you possibly know about diminishing returns on a system that isn't built yet? "

The same way one knows the performance of any processor before it's built -- software simulation.

> "the more cores you have the BETTER your returns... "

You don't understand what's meant by diminishing returns. If you add cores, your performance rises... but by an ever-diminishing amount.

The Intel sims showed Mitosis achieving about a 2.5X speedup on a 4-core system, with slightly more than half of that gain due simply to the side effect of the other cores increasing the primary core's cache hit rate by prefetching data. That's pretty good scaling, but at 8 cores the results are less impressive -- about a 3.5X speedup. I didn't see any 16-core sims, but with that type of curve it would probably work out to just under 4X... which means you're achieving only 25% of theoretical efficiency.
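That curve is the classic Amdahl's law shape: if a fraction p of the work parallelizes, speedup on n cores is 1/((1-p) + p/n). Fitting p = 0.8 to the quoted 2.5X-on-4-cores figure reproduces the 8- and 16-core numbers almost exactly (a back-of-envelope check, not Intel's simulation):

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
# parallelizable fraction of the work. With p = 0.8 (fit to the quoted
# 2.5x-on-4-cores figure), the rest of the curve falls out directly.
def amdahl(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 0.8
for n in (4, 8, 16):
    print(f"{n:2d} cores: {amdahl(p, n):.2f}x")
# -> 4 cores: 2.50x, 8 cores: 3.33x, 16 cores: 4.00x
```

Each added core still helps, but by ever-smaller amounts; with p = 0.8 the speedup can never exceed 1/(1-p) = 5X no matter how many cores you throw at it.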




RE: Why is everybody killing AMD????
By Viditor on 7/24/2006 4:15:36 AM , Rating: 2
quote:
From the same way one knows about the performance of any processor before it's built-- software simulation

How can you do a software simulation when the Mitosis compiler is nowhere near finished? Mitosis is predominantly a software-driven enhancement...
quote:
at 8 cores, the results are less impressive---about a 3.5X speedup

Well, if you're looking at the same data I am (and it sounds like you are), then it's based on an early version of the Mitosis Compiler (Alpha version) from 2005...
Remember that they are still in the "proof of concept" phase for Mitosis, so you shouldn't expect it to look anything like the final product.


By masher2 (blog) on 7/24/2006 10:32:25 AM , Rating: 3
> "How can you do a software simulation when the Mitosis Compiler is nowhere near finished? Mitosis is predominantly a software driven enhancement... "

Software that requires hardware support - Mitosis won't run on current hardware. As for how you simulate it, this research paper has the details:

http://portal.acm.org/citation.cfm?doid=1065010.10...

> " it's based on an early version of the Mitosis Compiler (Alpha version) from 2005..."

Not even an "alpha" version... just a research proof of concept. But the point is, their simulations show a definite ceiling on the performance benefits of speculative threading. Of course, you could always postulate a breakthrough in basic theory -- but given what we know today, Mitosis isn't going to utilize more than 4-8 cores for a single-threaded process.




By bfonnes on 7/23/2006 11:59:16 PM , Rating: 2
And when they come out with a chip that can read your mind and know what you want to do before you do it, then it will be even faster, lol...


"If you mod me down, I will become more insightful than you can possibly imagine." -- Slashdot

Copyright 2014 DailyTech LLC.