
Native quad-core en route

Yesterday during AMD's Q2'06 earnings conference call, AMD's President and Chief Operating Officer Dirk Meyer recapped the long-term plans for the company.  Although the bulk of his comments had already been stated during the June AMD Analyst Day, Meyer added the tidbit that the company plans "to demonstrate our next-generation processor core, in a native quad-core implementation, before the end of the year."  Earlier this year, AMD's Executive Vice President Henri Richard claimed this native quad-core processor would be called K8L.

Earlier AMD roadmaps had revealed that quad-core production CPUs would not utilize a native quad-core design until late 2007 or 2008. To put that into perspective, AMD demonstrated the first dual-core Opteron samples in August 2004, with the processor tape-out in June 2004.  The official launch of dual-core Opteron occurred on April 21, 2005.  On the same call Meyer announced that the native quad-core would launch in the middle of 2007 -- suggesting the non-native quad-core Deerhound designs may come earlier than expected, or not at all.

Just this past Wednesday, Intel one-upped AMD's K8L plans by announcing that quad-core Kentsfield and Clovertown will ship this year, as opposed to the Q1'07 timeframe originally slated by the company.


RE: Why is everybody killing AMD????
By Tyler 86 on 7/23/2006 5:48:59 AM , Rating: 2
I can parallelize a very very simple application to insane levels, and if I had a processor with enough cores to run each thread in parallel, it would speed up that simple application.

*Extremely* simple optimizations that I can make in code anywhere can be made in a compiler, and that is what Intel's aiming at.

Intel isn't pushing EPIC (the explicitly parallel IA-64 architecture) hard on developers anymore, and is instead working on implicit threading...

Although the EPIC architecture is fantastic, as are IBM's POWER and other natively parallelized processors like NVIDIA's and ATI's GPUs, they presented a problem at their conception: the original interfaces reflected the low-level assembly.

Now, abstraction has taken root, and we have the 'Pixel Shader 3.0' specification for GPUs, with optimizations applying to every minute function...

GNU/Linux has made huge progress in the area of abstraction, and it's reflected in its application on embedded processors, mainframes, and supercomputers...

However, Linux's goals are quite a bit different from your average graphics-centric development... Low-level optimization is still left solely to the individual developers, and is in no way part of the abstraction.
GCC is good, but it could be much better, and that is obvious from its frequent updates.
The GNU standard C library is less frequently updated.
Abstraction optimizations to the standard C libraries and the compiler are key to performance on advancing architectures. With the advent of GCC 4.0, entirely new abstraction capabilities emerged.

The point is, with the progress of abstraction, even 'Hello World' applications will one day use 2 cores efficiently, with a measurable performance advantage over an optimized single-core equivalent... and then 4 cores, and then 8 cores, and so forth...

Microsoft's attempt at abstraction led them toward managed code, asynchronous streams, garbage collection, and JIT (just in time) compilation... This resulted in the .NET framework you hear so much about.
Now even 'Managed DirectX' has emerged.

If you're interested in seeing JIT-less abstracted .NET code in action, check out Microsoft's Singularity project.

It has its quirks, but because native code can be compiled in a 'trusted' manner, its performance exceeds that of today's Windows Server 2003 IIS-integrated web server.

Obviously, Singularity's not the best way to go if you're looking for gaming, but it could one day be, just as the Microsoft desktop segment migrated from Windows 9x/ME to Windows NT/2000/XP kernels...

Singularity takes abstraction to an entirely new level, making efficient use of any core, any architecture, and any improvements to come.

Optimizations introduced at the bottom scale all the way up to the top 'Just In Time', and vice-versa - it grows 'Just In Time'.

It gives a new, almost 'organic' perspective on operating systems.

RE: Why is everybody killing AMD????
By Tyler 86 on 7/23/2006 5:57:16 AM , Rating: 2
When I say 'JIT-less', I mean Singularity has a compiled assembly base, but everything on top of it - 90% to 95% of the entire operating system, even at boot - is compiled either at or before runtime - core elements being pre-compiled, but compiled to assembly nonetheless, every time...

It's effectively taking the JIT out of a JIT compiler...

... but I guess that's nothing really new, so nevermind.

Sure. It's all JIT.

RE: Why is everybody killing AMD????
By Tyler 86 on 7/23/2006 6:03:28 AM , Rating: 2
Advances in languages, compilers, and tools open the possibility of significantly improving software. For example, Singularity uses type-safe languages and an abstract instruction set to enable what we call Software Isolated Processes (SIPs). SIPs provide the strong isolation guarantees of OS processes (isolated object space, separate GCs, separate runtimes) without the overhead of hardware-enforced protection domains. In the current Singularity prototype SIPs are extremely cheap; they run in ring 0 in the kernel’s address space.

You're able to effectively run everything you 'trust' at kernel level ring 0 code. That is as close to the processor as you can get. It boggles the average software developer's mind.

RE: Why is everybody killing AMD????
By Tyler 86 on 7/23/2006 6:13:23 AM , Rating: 2
Singularity achieves good performance by reinventing the environment in which code executes. In existing systems, safe code is an exotic newcomer who lives in a huge, luxurious home in an elegant, gated community with its own collection of services. Singularity, in contrast, has architected a single world in which everyone can be safe, with performance comparable to the unsafe world of existing systems.

A key starting point is Singularity processes, which start empty and add features only as required. Modern language runtimes come with huge libraries and expressive, dynamic language features such as reflection. This richness comes at a price. Features such as code access security or reflection incur massive overhead, even when never used.

A Singularity application specifies which libraries it needs, and the Bartok compiler brings together the code and eliminates unneeded functionality through a process called "tree shaking," which deletes unused classes, methods, and even fields. As a result, a simple C# "Hello World" process in Singularity requires less memory than the equivalent C/C++ program running on most UNIX or Windows® systems. Moreover, Bartok translates from Microsoft® intermediate language (MSIL) into highly optimized x86 code. It performs interprocedural optimization to eliminate redundant run-time safety tests, reducing the cost of language safety.

Because the code is an abstract, recompilable element, and its intentions and boundaries are clearly visible to the compiler, it can run natively at ring 0 at full throttle after its initial compilation.

If you ever played with .NET, you know just how fancy, abstract, intricate, and easy C# is -- and you probably also know how painfully slow it can be due to its 'management'. Even with 'unsafe' tags, things are hairy compared to C++ and native assemblies.
Singularity is effectively nativized C#.

RE: Why is everybody killing AMD????
By Tyler 86 on 7/23/2006 6:17:33 AM , Rating: 2
This is only a glimpse of the future.
Fortunately, for desktops, unmanaged, untrusted, abstractable JIT code can still exist - just not in Singularity, which is intended as a server operating system.

"Yesterday's" JIT applications immediately receive the benefits of "today's" processors: cores, instructions, architectures. It's all in the upgradable compiler and the upgradable libraries referenced.

By Tyler 86 on 7/23/2006 6:21:42 AM , Rating: 2
Aggressive interprocedural optimization is possible because Singularity processes are closed—they do not permit code loading after the process starts executing. This is a dramatic change, since dynamic code loading is a popular, but problematic, mechanism for loading plug-ins. Giving plug-ins access to a program's internals presents serious security and reliability problems [snip]... Dynamic loading frustrates program analysis in compilers or defect-detection tools, which can't see all code that might execute. To be safe, the analysis must be conservative, which precludes many optimizations and dulls the accuracy of defect detection.
