
Native quad-core en route

Yesterday, during AMD's Q2'06 earnings conference call, AMD's President and Chief Operating Officer Dirk Meyer recapped the company's long-term plans.  Although the bulk of his comments had already been stated during the June AMD Analyst Day, Meyer also added the tidbit that the company plans "to demonstrate our next-generation processor core, in a native quad-core implementation, before the end of the year."  Earlier this year, AMD's Executive Vice President Henri Richard claimed this native quad-core processor would be called K8L.

Earlier AMD roadmaps had revealed that quad-core production CPUs would not utilize a native quad-core design until late 2007 or 2008. To put that into perspective, AMD demonstrated the first dual-core Opteron samples in August 2004, with the processor taping out in June 2004.  The official launch of the dual-core Opteron occurred on April 21, 2005.  On the same call, Meyer announced that the native quad-core would launch in the middle of 2007 -- suggesting the non-native quad-core Deerhound designs may come earlier than expected, or not at all.

Just this past Wednesday, Intel one-upped AMD's K8L plans by announcing that quad-core Kentsfield and Clovertown will ship this year, rather than in Q1'07 as the company originally slated.



Why is everybody killing AMD????
By FirstoneVorlon on 7/22/2006 12:08:11 AM , Rating: 4
So why is it that now, after about three years of the AMD64 architecture destroying the Pentium 4 with a design that is technically still superior to Intel's, everybody wishes to kill AMD??? AMD did something to the market Intel was not doing for AGES!!!!! They reinvented the mainstream microprocessor market; they created a piece of engineering artwork. The AMD64 design is elegant and much more intelligent than Intel's. Of course, Intel's latest seems to be the best thing that ever existed, which I don't believe it is, and AMD will soon prove to us again that it has a better design.

Intel had to resort to TRICKS all the time to make its processors competitive: SSE3, Hyper-Threading, gigantic caches, not to mention a lot of cheating in compilers, tests, and applications. I won't even mention Intel's business practices; dishonest is the least I can say about them. AMD64 would have crushed Intel's processors long ago if it weren't for the people Intel paid not to sell AMD products, so I don't want to buy stuff from a company that resorts to those methods instead of moving that money into engineering and developing capable products.

The Pentium 4 was a flop from the start; Intel's new processor only corrects that. Where's HyperTransport??? Where's the on-die memory controller, and so forth??? I see a HUGE cache; now let's move to applications that don't use cache very well. Intel took three years to play catch-up with AMD, but AMD did not sit around. Their timing is still a little behind for many reasons, but in less than a year they will demonstrate the K8L, which will surely kick Intel's butt once again. The design is still superior: AMD has given us TRUE DUAL CORE and will give us TRUE QUAD CORE, not "GLUED" processors like Intel's. Stop killing AMD and respect true engineering. If you have doubts about the best processor, go look at Cray's supercomputers; you won't find Intels there.




RE: Why is everybody killing AMD????
By smitty3268 on 7/22/2006 12:35:00 AM , Rating: 2
It was the same after NVIDIA released the GeForce 7800 GTX. Everyone was predicting that ATI would go bankrupt within a month and that they would never be able to catch up, blah, blah, blah. Well, it turns out they did catch up, and even recaptured the lead. AMD will be fine while they continue working on their next chips. It's not like this is an unusual position for them to be in - they probably work better as an underdog.


RE: Why is everybody killing AMD????
By dagamer34 on 7/22/2006 1:04:30 AM , Rating: 2
The only problem is that ATI and NVIDIA's product cycles turn over once every 6 months, compared to Intel and AMD's cycle of 1-2 years.

Oh, and there's this thing called planning for the future. Graphics companies don't have to do that, as they just have to put more transistors in a smaller space, and voila, you've got a better GPU!


RE: Why is everybody killing AMD????
By smitty3268 on 7/22/2006 1:19:22 AM , Rating: 2
quote:
Oh, and there's this thing called planning for the future. Graphics companies don't have to do that, as they just have to put more transistors in a smaller space, and voila, you've got a better GPU!


And how is that any different than on a CPU? I'd say graphics companies have to plan for the future even more, given how much the hardware changes - your basic CPU is still pretty similar to one from several years ago, just with extra cache, better speeds, and lots of tricks to get rid of bottlenecks.

quote:
The only problem is that ATI and NVIDIA's product cycles turn over once every 6 months, compared to Intel and AMD's cycle of 1-2 years.


True, but AMD has a lot more money and resources as well.


RE: Why is everybody killing AMD????
By masher2 (blog) on 7/22/2006 1:36:00 PM , Rating: 3
quote:
> "Graphics companies don't have to do that, as they just have to put more transistors in a smaller space, and voila, you've got a better GPU!"

> "...And how is that any different than on a CPU?"


Because a GPU runs an innately parallelizable process. Double the transistors on a GPU, and you double the performance. Maybe more, if you make other improvements... but the important point is that, as long as there are fewer pipelines in a GPU than pixels on the screen, there's still a lot of scaling left.

A CPU is different. Single-core, the scaling situation was horrible... which explains monsters like Intel's EE and AMD's FX: massive increases in transistor counts, but the only place to put them was extra cache, which translated into only minor performance increases.

Multicore chips give CPUs some scaling breathing room. But still, single-threaded performance is important, and most desktop applications will never be able to use dozens of cores at once. So long-term, single-threaded performance still needs to improve.


RE: Why is everybody killing AMD????
By Viditor on 7/23/2006 12:31:35 AM , Rating: 2
quote:
most desktop applications will never be able to use dozens of cores at once. So long-term, single-threaded performance still needs to improve


If Intel and AMD have their way, this isn't quite true...
A good example of why not is Intel's "Mitosis" project, which uses speculative threading... For AMD there are only rumours at this point, but I would be shocked if they didn't have their own on-the-fly parallelization of single-threaded apps.
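Intel hasn't published Mitosis' mechanics in much detail, so what follows is only a minimal software-only sketch of the *idea* behind speculative threading, with every name invented for illustration: a helper thread runs ahead on a predicted input while the main thread computes the real one, and the speculative result is committed only if the prediction turns out right.

```csharp
using System;
using System.Threading;

class SpeculativeSketch
{
    // Hypothetical expensive pipeline stages; the names are made up.
    static int Stage1() { Thread.Sleep(50); return 10; }
    static int Stage2(int input) { Thread.Sleep(50); return input * 3; }

    static void Main()
    {
        int predictedInput = 10;   // the speculation: a guess at Stage1's output
        int speculativeResult = 0;

        // Run Stage2 ahead of time on the guessed value...
        Thread spec = new Thread(() => { speculativeResult = Stage2(predictedInput); });
        spec.Start();

        int actualInput = Stage1();   // ...while the real Stage1 runs here
        spec.Join();

        // Commit the speculative work only if the guess was right;
        // otherwise discard it and redo Stage2 sequentially.
        int result = (actualInput == predictedInput)
            ? speculativeResult
            : Stage2(actualInput);

        Console.WriteLine(result);    // 30 either way; faster when the guess hits
    }
}
```

The hardware support discussed later in this thread is what would let a real implementation do the commit-or-discard step cheaply for memory state, rather than for a single integer as here.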


RE: Why is everybody killing AMD????
By Tyler 86 on 7/23/2006 5:48:59 AM , Rating: 2
I can parallelize a very very simple application to insane levels, and if I had a processor with enough cores to run each thread in parallel, it would speed up that simple application.
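For what it's worth, here is about the simplest demonstration of that claim: an array sum split across one worker thread per core. This is a generic sketch, not tied to any particular product:

```csharp
using System;
using System.Threading;

class ParallelSum
{
    static void Main()
    {
        int[] data = new int[1 << 24];
        for (int i = 0; i < data.Length; i++) data[i] = 1;

        int threads = Environment.ProcessorCount;  // one worker per core
        long[] partial = new long[threads];
        Thread[] workers = new Thread[threads];
        int chunk = data.Length / threads;

        for (int t = 0; t < threads; t++)
        {
            int id = t;  // capture a copy, not the loop variable
            workers[t] = new Thread(() =>
            {
                long sum = 0;
                int end = (id == threads - 1) ? data.Length : (id + 1) * chunk;
                for (int i = id * chunk; i < end; i++) sum += data[i];
                partial[id] = sum;  // each worker writes its own slot: no locking
            });
            workers[t].Start();
        }

        long total = 0;
        for (int t = 0; t < threads; t++) { workers[t].Join(); total += partial[t]; }
        Console.WriteLine(total);  // 16777216
    }
}
```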

*Extremely* simple optimizations that I can make in code anywhere can be made in a compiler, and that is what Intel is aiming at.

Intel isn't pushing EPIC (the IA-64 architecture, explicitly parallel) hard on developers anymore, and is instead working on implicit threading...

Although the EPIC architecture is fantastic, as are IBM's POWER and other natively parallel processors like NVIDIA's and ATI's GPUs, they all presented a problem at their conception: the original interfaces reflected the low-level assembly.

Now abstraction has taken root, and we have the 'Pixel Shader 3.0' specification for GPUs, with optimizations applying to every minute function...

GNU/Linux has made huge progress in the area of abstraction, and it's reflected in its application on embedded processors, mainframes, and supercomputers...

However, Linux's goals are quite a bit different from your average graphics-centric development... Low-level optimization is still left solely to the individual developers, and is in no way part of the abstraction.
GCC is good, but it could be much better, and that is obvious from its frequent updates.
The GNU Standard C Library is less frequently updated.
Abstraction optimizations to the standard C libraries, and to the compiler, are key to performance under advancing architectures. With the advent of GCC 4.0, entirely new abstraction capabilities emerged.

The point is, with the progress of abstraction, even 'Hello World' applications will one day use 2 cores efficiently, with a measurable performance advantage over an optimized single-core equivalent... and then 4 cores, and then 8 cores, and so forth...

Microsoft's attempt at abstraction led them toward managed code, asynchronous streams, garbage collection, and JIT (just-in-time) compilation... This resulted in the .NET Framework you hear so much about.
Now even 'Managed DirectX' has emerged.

If you're interested in seeing JIT-less abstracted .NET code in action, check out Microsoft's Singularity project.
http://research.microsoft.com/os/singularity/

It has its quirks, but because native code can be compiled in a 'trusted' manner, its performance exceeds that of the IIS web server integrated into today's Windows Server 2003.

Obviously, Singularity is not the best way to go if you're looking for gaming, but it could one day be, just as the Microsoft desktop segment migrated from the Windows 9x/ME to the Windows NT/2000/XP kernels...

Singularity takes abstraction to an entirely new level, making efficient use of any core, any architecture, and any improvements to come.

Optimizations introduced at the bottom scale all the way up to the top 'Just In Time', and vice versa - it grows 'Just In Time'.

It gives the perspective on operating systems a new, almost 'organic' approach.


RE: Why is everybody killing AMD????
By Tyler 86 on 7/23/2006 5:57:16 AM , Rating: 2
When I say 'JIT-less', I mean Singularity has a compiled assembly base, but everything on top of it - 90% to 95% of the entire operating system, even at boot - is compiled either at or before runtime, with core elements being pre-compiled, but compiled to assembly nonetheless, every time...

It's effectively taking the JIT out of a JIT compiler...

... but I guess that's nothing really new, so never mind.

Sure. It's all JIT.


RE: Why is everybody killing AMD????
By Tyler 86 on 7/23/2006 6:03:28 AM , Rating: 2
quote:
Advances in languages, compilers, and tools open the possibility of significantly improving software. For example, Singularity uses type-safe languages and an abstract instruction set to enable what we call Software Isolated Processes (SIPs). SIPs provide the strong isolation guarantees of OS processes (isolated object space, separate GCs, separate runtimes) without the overhead of hardware-enforced protection domains. In the current Singularity prototype SIPs are extremely cheap; they run in ring 0 in the kernel’s address space.


You're able to effectively run everything you 'trust' as kernel-level ring 0 code. That is as close to the processor as you can get. It boggles the average software developer's mind.


RE: Why is everybody killing AMD????
By Tyler 86 on 7/23/2006 6:13:23 AM , Rating: 2
quote:
Singularity achieves good performance by reinventing the environment in which code executes. In existing systems, safe code is an exotic newcomer who lives in a huge, luxurious home in an elegant, gated community with its own collection of services. Singularity, in contrast, has architected a single world in which everyone can be safe, with performance comparable to the unsafe world of existing systems.


quote:
A key starting point is Singularity processes, which start empty and add features only as required. Modern language runtimes come with huge libraries and expressive, dynamic language features such as reflection. This richness comes at a price. Features such as code access security or reflection incur massive overhead, even when never used.


quote:
A Singularity application specifies which libraries it needs, and the Bartok compiler brings together the code and eliminates unneeded functionality through a process called "tree shaking," which deletes unused classes, methods, and even fields. As a result, a simple C# "Hello World" process in Singularity requires less memory than the equivalent C/C++ program running on most UNIX or Windows® systems. Moreover, Bartok translates from Microsoft® intermediate language (MSIL) into highly optimized x86 code. It performs interprocedural optimization to eliminate redundant run-time safety tests, reducing the cost of language safety.


Because the code is an abstract, recompilable element, and its intention and boundaries are clearly visible to the compiler, it can run natively at ring 0 at full throttle after its initial compilation.

If you ever played with .NET, you know just how fancy, abstract, intricate, and easy C# is -- and you probably also know how painfully slow it can be due to its 'management'. Even with 'unsafe' tags, things are hairy compared to C++ and native assemblies.
Singularity is effectively nativized C#.
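A side note on the "tree shaking" mentioned in the quote above: it is essentially reachability analysis over the program's static call graph - mark everything the entry point can reach, then delete the rest. Here is a toy sketch of that marking pass (the graph and method names are invented; Bartok's real analysis operates on MSIL metadata, not strings):

```csharp
using System;
using System.Collections.Generic;

class TreeShake
{
    static void Main()
    {
        // Toy static call graph: method -> methods it calls.
        var calls = new Dictionary<string, string[]>
        {
            { "Main",       new[] { "WriteLine" } },
            { "WriteLine",  new string[0] },
            { "Reflection", new[] { "Emit" } },   // never called from Main
            { "Emit",       new string[0] },
        };

        // Mark everything reachable from the entry point...
        var live = new HashSet<string>();
        var work = new Stack<string>();
        work.Push("Main");
        while (work.Count > 0)
        {
            string m = work.Pop();
            if (!live.Add(m)) continue;          // already marked
            foreach (string callee in calls[m]) work.Push(callee);
        }

        // ...and "shake off" the rest: prints the methods unreachable from Main.
        foreach (string m in calls.Keys)
            if (!live.Contains(m)) Console.WriteLine("removed: " + m);
    }
}
```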


RE: Why is everybody killing AMD????
By Tyler 86 on 7/23/2006 6:17:33 AM , Rating: 2
This is only a glimpse of the future.
Fortunately for desktops, unmanaged, untrusted, abstractable JIT code can still exist - just not in Singularity, which is intended as a server operating system.

"Yesterday's" JIT applications immediately receive the benefits of "today's" processors: cores, instructions, architectures. It's all in the upgradable compiler and the upgradable libraries referenced.


By Tyler 86 on 7/23/2006 6:21:42 AM , Rating: 2
quote:
Aggressive interprocedural optimization is possible because Singularity processes are closed—they do not permit code loading after the process starts executing. This is a dramatic change, since dynamic code loading is a popular, but problematic, mechanism for loading plug-ins. Giving plug-ins access to a program's internals presents serious security and reliability problems [snip]... Dynamic loading frustrates program analysis in compilers or defect-detection tools, which can't see all code that might execute. To be safe, the analysis must be conservative, which precludes many optimizations and dulls the accuracy of defect detection.


http://msdn.microsoft.com/msdnmag/issues/06/06/End...


RE: Why is everybody killing AMD????
By masher2 (blog) on 7/23/2006 10:30:12 AM , Rating: 2
> "A good example of why not is Intel's "Mitosis" project which uses speculative threading..."

"Never" is admittedly too strong a word for any tech subject. I'll substitute "not within the next 25 years" instead.

As for Mitosis, remember that it's still very far over the horizon, as it requires hardware support that doesn't exist yet. Furthermore, the amount of parallelism that can be extracted via Mitosis is rather limited; diminishing returns set in hard on anything over four cores.


By Viditor on 7/23/2006 7:56:07 PM , Rating: 2
quote:
As for Mitosis, remember that its still very far down the horizon, as it requires hardware support that isn't in existence yet

Actually, the hardware can be any multicore system (with some very minor tweaks)...it's really only the compiler that isn't ready yet.
In reality, Mitosis could be out by the end of next year, or it could be many years...it will depend on the compiler team.


RE: Why is everybody killing AMD????
By Viditor on 7/23/2006 8:12:44 PM , Rating: 2
quote:
Furthermore, the amount of parallelism that can be extracted via Mitosis is rather limited. Diminishing returns sets in hard on anything over four cores

Ummm...how could you possibly know about diminishing returns on a system that isn't built yet? In addition, from what I've read on the theory, the more cores you have the BETTER your returns...


RE: Why is everybody killing AMD????
By masher2 (blog) on 7/23/2006 10:45:10 PM , Rating: 2
> "Ummm...how could you possibly know about diminishing returns on a system that isn't built yet? "

The same way one knows about the performance of any processor before it's built -- software simulation.

> "the more cores you have the BETTER your returns... "

You don't understand what's meant by diminishing returns. If you add cores, your performance rises... but by an ever-diminishing amount.

The Intel sims showed Mitosis achieving about a 2.5X speedup on a 4-core system, with slightly more than half of that gain due simply to the side effect of the other cores increasing the cache hits of the primary core by prefetching data. That's pretty good scaling, but at 8 cores the results are less impressive: about a 3.5X speedup. I didn't see any 16-core sims, but with that type of curve it'd probably work out to just under 4X... which means you're only achieving 25% theoretical efficiency.
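Those figures track plain Amdahl's law surprisingly well: backing the parallel fraction out of the 2.5X-on-4-cores number gives p = 0.8, and the same formula then predicts about 3.3X at 8 cores and exactly 4X (25% efficiency) at 16. A quick check, where only the 2.5X input comes from the sims cited above and the rest is arithmetic:

```csharp
using System;

class AmdahlCheck
{
    // Amdahl's law: speedup on n cores when fraction p of the work parallelizes.
    static double Speedup(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

    static void Main()
    {
        // Back out p from the reported 2.5X speedup on 4 cores:
        // 1 / ((1-p) + p/4) = 2.5  =>  p = 0.8
        double p = 0.8;
        foreach (int n in new[] { 4, 8, 16, 64 })
            Console.WriteLine("{0,2} cores: {1:F2}X  ({2:P0} efficiency)",
                              n, Speedup(p, n), Speedup(p, n) / n);
        // 4 cores: 2.50X (63%), 8: 3.33X (42%), 16: 4.00X (25%), 64: 4.71X (7%)
    }
}
```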




RE: Why is everybody killing AMD????
By Viditor on 7/24/2006 4:15:36 AM , Rating: 2
quote:
From the same way one knows about the performance of any processor before it's built-- software simulation

How can you do a software simulation when the Mitosis compiler is nowhere near finished? Mitosis is predominantly a software-driven enhancement...
quote:
at 8 cores, the results are less impressive---about a 3.5X speedup

Well, if you're looking at the same data I am (and it sounds like you are), then it's based on an early version of the Mitosis Compiler (Alpha version) from 2005...
Remember that they are still in the "proof of concept" phase for Mitosis, so you shouldn't expect it to look anything like the final product.


By masher2 (blog) on 7/24/2006 10:32:25 AM , Rating: 3
> "How can you do a software simulation when the Mitosis Compiler is nowhere near finished? Mitosis is predominantly a software driven enhancement... "

Software that requires hardware support. Mitosis won't run on current hardware. As for how you sim it, this research paper has the details:

http://portal.acm.org/citation.cfm?doid=1065010.10...

> " it's based on an early version of the Mitosis Compiler (Alpha version) from 2005..."

Not even an "alpha" version...just a research proof of concept. But the point is their simulations show a definite ceiling for the performance benefits of speculative threading. Of course, you could always postulate a breakthrough in basic theory-- but given what we know today, Mitosis isn't going to utilize more than 4-8 cores for a single-threaded process.




By bfonnes on 7/23/2006 11:59:16 PM , Rating: 2
And when they come out with a chip that can read your mind and know what you want to do before you do it, then it will be even faster, lol...


RE: Why is everybody killing AMD????
By cubby1223 on 7/22/2006 1:03:19 AM , Rating: 2
Umm - you're kidding, right?

Why is everybody killing AMD? We're not killing them. However, this is the first time Intel and AMD are entering into a cutthroat price war where Intel has a triple advantage - they've got the superior processor, lower prices, and a far greater bankroll to support those lower prices.

Sure, many here want to see AMD succeed, but we're not stupid. Fanboys are those who support one company no matter the situation. And guess what, fanboys make up less than 0.01% of the computer-buying market. For the vast majority of people, what matters is the best ratio of speed, power consumption, and price. Face it, that is why most of us buy AMD. But the tables are turning: for the first time ever, Intel will have the speed, power, and price advantage.


RE: Why is everybody killing AMD????
By nerdye on 7/22/2006 1:48:00 AM , Rating: 2
I will always root for the underdog; AMD is fun to root for, and I still use an XP 3200+. Yet Conroe is a beast and a testament to Intel's five-year absence from excellence, and it's only a short time before AMD strikes with K8L. Will HyperTransport 3 be enough to counter multiple 1333 MHz front-side buses from Intel in the quad-core market?


RE: Why is everybody killing AMD????
By bob661 on 7/22/2006 2:26:47 AM , Rating: 3
quote:
And guess what, fanboys make up less than 0.01% of the computer-buying market. For the vast majority of people, what matters is the best ratio of speed, power consumption, and price.
And enthusiasts make up 1% of the computer market. J6P doesn't give a rat's ass about which processor is faster, nor does he care about who makes it. All J6P cares about is:
1. How much does it cost?
2. Can I read email and surf the web?
Anything else puts you in the 1% category.


By othercents on 7/22/2006 2:46:00 AM , Rating: 2
quote:
enthusiasts make up 1% of the computer market


This is so true. Dell sells to businesses and end users all the time. They are not selling XPS machines to these people; they are selling low-cost solutions. This is why most of their mid-range computers still use single-core processors and ATI X300 video cards or *shiver* Intel GPUs. For most end users these are perfectly fine machines. Dell didn't even start using dual-core in its mid-range machines until April '06.

Processor speed is secondary to cost. Most end users want to know how much it will cost to surf the web. Build a $200 solution and they will buy it. Maybe AMD will start manufacturing web-browsing computers to undercut Dell and Intel.

Other


By plewis00 on 7/22/2006 3:16:02 AM , Rating: 2
What a load of crap. Agreed, Intel made some bad moves in their time, but the Pentium 4 was crap from the start? It was onto a winner during the Athlon XP days. Tricks?! So making your processors complete operations faster is a trick, is it? News to me; perhaps we should just go back to the old days of simple processors and no optimization at all... All these extensions allow better processing speeds, Hyper-Threading gave an advantage in many applications, and caches are used everywhere - I'd still bet the 2MB cache on the smaller Conroe will do its job well. So by your reckoning, AMD's 64-bit extensions (which, I'll add, are less used than SSE et al.) and memory controllers are tricks? Because it would seem that way when a processor (Core 2) without an on-die controller trounces the Athlon 64. No, Conroe doesn't correct the Pentium 4; it redresses a lot of things and then some.

You sound just like Sharikou, all mouth and no substance. You know NOTHING about K8L (as do most people, since it's relatively undocumented) and yet you bang on about how great it is. Remember Intel and how great Prescott was on paper, and when it came out it was crap? Maybe K8L won't be, but there is a chance it won't be the saviour you expect. What are you, an AMD shareholder? Conroe isn't a glued processor; it was designed as dual-core from the start, so I don't know where you get 'GLUED' from. Prescott/Cedar Mill cores are 'glued', but then so are Athlon 64 cores; the only reason they work better is the HT link between them.

As for the best processor, well, for the consumer it's the fastest and cheapest, and that would be the Core 2 Duo. If AMD is so fantastic, why are they offering 50% price cuts on their CPUs? Not bad chips by any means, but no longer the best. Maybe they won't be running at a loss, but when was cutting 50% off your product's selling price good for profits? I like how you referred to the great conspiracy theory of Intel paying companies not to sell AMD - care to back that one up, or did you hear about Dell and decide it was an industry-wide practice?


Quit your crying - you're making me nauseous
By DallasTexas on 7/22/2006 9:27:34 AM , Rating: 2
AMD got their butt kicked. Get over it and quit whining. The better product always wins, and nobody cares about your "gee, pls love AMD anyway" BS. You're an embarrassment.

NetBurst? It was probably the most successful product EVER. Selling 500 million of these over 5 years is pretty good to me. Sure, three years later it ran its course and Intel decided to ride that design for two more years. Guess what, it's what clowns like you wanted - more megahertz.

Thank Intel for FORCING AMD to innovate when they refused AMD an x86 license if they copied their split-transaction bus. AMD agreed and went their own path to innovation and their own bus. Thank Intel for that. After 25 years of copying Intel, they found out that, gee, copying doesn't get you far. AMD has great designers (they hired a third of them from Intel and acquired the rest from NexGen). Get a clue, junior: Intel is a cornerstone of American technology capabilities. If it wasn't for Intel, you'd be speaking Chinese right now.


RE: Quit your crying - you're making me nauseous
By Teletran1 on 7/22/2006 11:08:43 AM , Rating: 1
quote:
by DallasTexas on July 22, 2006 at 9:27 AM

AMD got their butt kicked. Get over it and quit whining. The better product always wins, and nobody cares about your "gee, pls love AMD anyway" BS. You're an embarrassment.

NetBurst? It was probably the most successful product EVER. Selling 500 million of these over 5 years is pretty good to me. Sure, three years later it ran its course and Intel decided to ride that design for two more years. Guess what, it's what clowns like you wanted - more megahertz.


The better product always wins... NetBurst was probably the most successful product EVER.

That proves it right there. AMD will be just fine. If Intel can push crappy NetBurst (into flames) processors on people for the last couple of years, then AMD can push their now-crappy K8s on people until they get a new competitive product on the shelves. Most people don't know anything about computers, and most of them don't even know what a processor is. Throwing away the Pentium name was a big mistake for Intel. Pentium >>>>>> Intel for brand recognition. Core 2 Duo has zero brand recognition. I mentioned it to 5 people at work and they had no idea what I was talking about. Mention Pentium and at least some people have a clue.


By masher2 (blog) on 7/22/2006 1:39:08 PM , Rating: 2
> " AMD will be just fine. If Intel can push crappy Netburst(into flames) proccessors on people for the last couple of years then AMD can push thier now crappy K8s on people..."

Err, while I don't doubt that AMD will survive, your logic is flawed. Intel's "crappy" NetBurst not only had the best brand name, marketing, sales, and distribution network, but it also did win the occasional benchmark. You can infer nothing about AMD's future from Intel's past.


RE: Quit your crying - you're making me nauseous
By Teletran1 on 7/23/06, Rating: 0
By masher2 (blog) on 7/23/2006 6:05:05 PM , Rating: 2
> "Intel just threw that "best brand name" out the window."

But they *had* that brand during the period you mention. And they _still_ have the Intel name itself. They could call their next chip 'The Stagnator', and it'd still sell, as long as it had the Intel logo on the box.

A company can succeed if it has the best product, or if it has the best sales/marketing/distribution network. When you have *neither* of those...you're in a bad position.

> "You dont have to agree with people but dont be a pissy jerk."

Please accept my apologies if my reply upset you.





By osalcido on 7/22/2006 4:26:22 PM , Rating: 2
quote:
If it wasn't for Intel, you'd be speaking Chinese right now.


That's the most ridiculous fanboy statement I've ever heard. Congratulations


By Nanobaud on 7/22/2006 5:26:27 PM , Rating: 2
quote:
If it wasn't for Intel, you'd be speaking Chinese right now.


I do speak Chinese right now. Guess I didn't get enough Intel when I was growing up.


RE: Why is everybody killing AMD????
By ecktt on 7/24/2006 2:35:58 AM , Rating: 2
quote:
The Pentium 4 was a flop from the start; Intel's new processor only corrects that. Where's HyperTransport??? Where's the on-die memory controller, and so forth??? I see a HUGE cache; now let's move to applications that don't use cache very well.

You can't be serious, or you're just a n00b. While the IPC was low on the P4 when it came out, it had the high clock speed to more than make up for it and dominate anything in its time. Not to mention the later release of Hyper-Threading, which made better use of existing silicon and brought smooth desktop performance to the masses that was previously reserved for the rich. Granted, AMD64 came out hitting hard, but the fact is that ever since the original Athlon came out, both Intel and AMD have been playing leapfrog with each other in terms of performance. FYI, even without an integrated memory controller and with no point-to-point interconnect bus, Conroe seems to be performing quite well. If you ask me, since AMD has to use these two features to perform as well as it does, it just might suggest that AMD64 wasn't that good a design in the first place. And as for the cache, you obviously have done very little programming (if any).
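The IPC-versus-clock point is just arithmetic: delivered throughput is roughly IPC times clock rate, so a low-IPC design can win on a high enough clock. The numbers below are purely illustrative, not benchmarks:

```csharp
using System;

class IpcVsClock
{
    static void Main()
    {
        // Throughput ~ instructions-per-clock * clock rate.
        // Illustrative numbers only, not measured data.
        double p4Perf     = 0.9 * 3.2e9;  // NetBurst style: low IPC, high clock
        double athlonPerf = 1.4 * 2.2e9;  // K8 style: higher IPC, lower clock
        Console.WriteLine("P4-style:     {0:E2} instr/s", p4Perf);      // ~2.9e9
        Console.WriteLine("Athlon-style: {0:E2} instr/s", athlonPerf);  // ~3.1e9
    }
}
```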


"We are going to continue to work with them to make sure they understand the reality of the Internet.  A lot of these people don't have Ph.Ds, and they don't have a degree in computer science." -- RIM co-CEO Michael Lazaridis

Related Articles













botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki