


AMD's roadmap indicates 2013 will be a big year with a 28 nm exodus, three new CPU cores, and a new GPU core

At Advanced Micro Devices Inc.'s (AMD) 2012 Financial Analyst Day in Santa Clara, Calif., the company's latest CEO, Rory Read, outlined his vision for the firm.  The talk hit on many points, both in terms of the company's roadmap of chip releases and in terms of its long-term technology direction.

From the upcoming Sea Islands family of GPUs to the third generation, 28 nm Bulldozer core, Steamroller, AMD's plans are pretty diverse and ambitious.  Read on to discover more.

I. AMD an ARM Chipmaker, Soon?

Don't expect AMD to be pumping out Cortex-A15 quad-cores this year, but AMD dropped some pretty clear hints that it is mulling testing the ARM waters.  For the uninitiated, ARM is one of the two main architectures vying today for global CPU supremacy.

Traditionally, AMD and rival Intel Corp. (INTC) have supported x86, a complex instruction set computer (CISC) style of architecture.  x86 chips have dominated traditional personal computers, outshipping all other architectures in this segment.

On the other side of the fence is ARM, a reduced instruction set computer (RISC) style of architecture.  ARM chips are produced by third parties based on the instruction set, and often, on the intellectual property (IP) core designs of England's ARM Holdings plc (LON:ARM).  Today ARM dominates the mobile devices space and is the world's most used CPU architecture, outshipping x86 chips in quantity thanks to its strong embedded market share.  Top mobile device ARM chipmakers in today's market include Qualcomm Inc. (QCOM), Texas Instruments Inc. (TXN), Samsung Electronics Comp., Ltd. (KS:005930), Marvell Technology Group Ltd. (MRVL), and NVIDIA Corp. (NVDA).

Microsoft Corp.'s (MSFT) Windows 8 will be the first mainline version of Windows to support ARM chips in its personal computers.  There's also growing interest in ARM in the datacenter space, thanks to its strong power performance.

At the analyst day AMD showed off a slide, stating:

AMD ARM Chips

AMD IP cores

The slide is hardly subtle.  ISA in this context stands for instruction set architecture.  So AMD is saying that it is considering non-x86 instruction sets for making SoC (system-on-a-chip) processors for datacenters, SoCs that can use "third party IP [cores]."  

Given that this explicitly describes the ARM chipmaking approach, it seems extremely likely that AMD is considering ARM server chips similar to those being produced by ARM licensee Calxeda for Hewlett-Packard Comp. (HPQ) (the world's largest server maker).

According to Anandtech's Anand Lal Shimpi, AMD even went as far as to name-drop ARM several times during the presentations, although stopping short of making a specific commitment to ARM.

An AMD defection (at least in part) to ARM would not exactly be surprising, given all the momentum ARM has and the financial burden it would take off AMD's chip-developing units.  But it would be a major blow to Intel, which would be left as the lone proponent of x86, pitted against a unified alliance of virtually every other large chipmaker in the traditional and mobile personal computer space.

Could this defection spell the death of x86?  Well, it's far too soon to declare that the world's most used traditional PC CPU architecture is being put out to pasture, but this is -- at the very least -- a big blow to Intel in terms of confidence in x86.  It will become an even bigger blow if (and when) AMD takes its plans from paper/labs and puts out an ARM product.

II. AMD Works to Unify Offerings in Core Design, Memory Access, and Process

Currently AMD makes a mix of CPUs, GPUs, accelerated processor units (APUs: CPU+GPU), and chipsets.  While AMD's discrete graphics sales have been solid in recent years after AMD helped turn around its struggling ex-ATI unit, discrete graphics sales as a whole have slowed.  AMD and its GPU rival NVIDIA can blame integrated graphics -- including Intel's increasingly powerful on-die GPUs -- for cutting into sales.

But AMD is looking to make GPUs more of a value proposition by offering hassle-free, advanced GPU acceleration of everyday programs like web browsers and photo editors.  To some extent these technologies already exist [1][2].  But their quality is hindered by the GPU's high memory latency and the need to write custom logic in special APIs.

AMD's slides describe a slow unification process between GPUs and CPUs that will help to remove the latency and specialization.  It says the first step landed last year (with Fusion) when AMD deployed chips whose on-die CPU and GPU shared the same power circuitry.

GPU and CPU unification

Next, over this year and next, AMD will slowly give its GPUs greater access to the CPU's memory pool (cache, RAM).  The final step lands in 2014, something AMD refers to as Heterogeneous Systems Architecture (HSA).  With HSA, the GPU is able to run CPU-like workloads for the first time, meaning that you won't need to recompile or write custom logic to exploit the benefits of GPU computing.
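To make the memory-model distinction concrete, here is a toy Python sketch (purely illustrative; the class and method names are invented for this example, and no real GPU API is involved). It contrasts today's discrete model, where data must first be copied into GPU-private memory before a custom kernel can run, with an HSA-style model, where the "GPU" operates directly on the CPU's memory:

```python
# Toy model of the two memory architectures described above.
# All names here are illustrative inventions, not a real GPU API.

def kernel_scale(data, factor):
    """The 'GPU kernel': multiply every element by a factor."""
    return [x * factor for x in data]

# --- Today's discrete model: explicit copy across a PCIe-like boundary ---
class DiscreteGPU:
    def __init__(self):
        self.vram = None                  # GPU-private memory

    def upload(self, host_data):
        self.vram = list(host_data)       # copy host -> device (the latency hit)

    def run(self, kernel, *args):
        return kernel(self.vram, *args)   # compute on the device-side copy

# --- HSA-style model: CPU and GPU share one address space, no copies ---
class HSADevice:
    def run(self, kernel, shared_data, *args):
        return kernel(shared_data, *args) # operate on host memory directly

host_data = [1.0, 2.0, 3.0]

gpu = DiscreteGPU()
gpu.upload(host_data)                     # the extra step HSA removes
assert gpu.run(kernel_scale, 2.0) == [2.0, 4.0, 6.0]

hsa = HSADevice()
assert hsa.run(kernel_scale, host_data, 2.0) == [2.0, 4.0, 6.0]
```

The point of the sketch is the missing upload() step: under HSA the copy across the PCIe-like boundary, and the latency it adds, simply disappears.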

AMD is also looking to unify its process for various chips down to the 28 nm node by 2013.  Currently AMD offers a mix of 28 nm, 32 nm, and 40 nm offerings.

III. New Core Designs, Server Chips Revealed

AMD only plans to release one 28 nm chip family this year -- the just-released Southern Islands GPUs [1][2] (led by the high-end Tahiti family).  The chips are being produced on Taiwan Semiconductor Manufacturing Comp., Ltd.'s (TPE:2330) process.

AMD consumer roadmap APUs and GPUs

On its 32 nm (GlobalFoundries) node, AMD will be dropping its next-generation high-power APU, Trinity.  Aimed at <18 mm ultrathins, Trinity will look to take on Intel's Ivy Bridge.  While the Intel chip will clearly have a healthy lead in performance, AMD hopes to beat Intel on everything else -- price, graphics, and power efficiency (the latter goal might be a bit of a struggle, thanks to Intel's slick power-saving 3D FinFET technology).

Even if AMD can beat Intel on price and graphics alone, it may win the sales war (assuming it can produce enough chips).  In our recent poll, 35 percent of readers said they would be more interested in a $500 USD or less Trinity ultrathin, versus only 17 percent claiming interest in a $700-$1,000 USD Ivy Bridge design.

On the 40 nm node AMD has two APUs planned -- Brazos 2.0 and Hondo.  Hondo will be aimed at Windows 8 tablets, with a 4.5W TDP.  It will pack 1-2 low-voltage Bobcat cores and an on-die DX11 GPU.  Meanwhile, Brazos 2.0 is expected to see use in budget notebook and netbook designs.  It brings AMD's TurboCore (built-in automatic overclocking) technology, along with support (at last) for USB 3.0.

On the server side, AMD will be sticking with 4/8/12/16 core Interlagos (Opteron 6200 series) chips and 6/8 core Valencia (Opteron 4200 series) chips.  It will fill in the low-end gap, offering in Q1 2012 (by March) Zurich, a 4/8 core chip series with support for only a single HyperTransport link (vs. 2 in Valencia and 4 in Interlagos).

But late 2012/early 2013 is where things will get really interesting.  First, in 2013 AMD plans to have another brand-new GPU architecture (that's its second in two years!), named Sea Islands.  

But on the CPU/APU front things are getting even more interesting.  Virtually all of AMD's product line will be moving down to 28 nm (versus its current mix of 32 nm/40 nm).  AMD will also be rolling out three new CPU cores -- Piledriver (aka "enhanced Bulldozer"), Steamroller, and Jaguar.  

Piledriver will land first, sometime in late 2012 or early 2013.  Past slides indicated a 2012 launch, but it's possible this date has slipped, given TSMC's difficulty in scaling up 28 nm production and GlobalFoundries' incomplete transition to the 28 nm node.  

AMD Server roadmap

Piledriver is the direct successor to Bulldozer and, as is typical, will first see action in server chips.  It will drop in the Socket G34 4/8/12/16 core Abu Dhabi family of Opteron CPUs (the successor to Interlagos), the Socket C32 6/8 core Seoul Opteron CPUs (the successor to Valencia), and the Socket AM3+ 4/8 core Delhi Opteron CPUs (the successor to Zurich).  (AMD's Terramar and Sepang codenames are no more, replaced by Seoul and Delhi.)

Steamroller, according to past slides, is the direct core architecture successor to Piledriver, and Jaguar is the direct successor to Bobcat.  Steamroller will be paired with "Graphics Core Next" (GCN) GPU cores -- the same kind found in AMD's Radeon HD 79xx series -- in APU designs.

IV. Consumer CPUs, APUs for 2013

AMD will ship four families of refreshed laptop and desktop chips (six, if you count laptop and desktop incarnations of the same APU family as separate).  The majority of AMD's new designs are APUs.  Correspondingly, APUs will look to drive the majority of AMD's desktop and laptop shipments in 2013.  

AMD APU and CPU roadmap

On the performance desktop end AMD will drop new 4/8 core Piledriver-based "FX" series CPUs.  Dubbed Vishera, these chips will be AMD's only upcoming non-server line without an on-die GPU -- the sole non-APU design in the consumer mix.

On the opposite side of the spectrum (tablets), AMD will drop the 28 nm, 2 watt TDP, Temash APU.  AMD has not clarified whether Temash will include GCN GPU cores or an earlier design (this is an important hole in AMD's roadmap information).  Windows 8 tablets look to be very big in holiday 2012 and 2013, and AMD is clearly hoping to hit a sweet spot in terms of price, power, and performance, staying competitive with ARM and Intel.

(Note: there is a discrepancy between slides... one spells this chip Temash, one spells it Tamesh.  The correct spelling is Temash (like the river in Belize).)

In the mid-range, AMD will deploy Kabini for laptops and desktops.  Based on the Jaguar core -- the successor to Bobcat -- Kabini will come in dual-core and quad-core variants.  It will pack GCN GPU cores as well, on a brand-new system-on-a-chip die that Temash will share.  The die has a built-in "Fusion Controller Hub" (FCH), AMD's bridge circuitry that controls talk between the on-die CPU, GPU, and external I/O devices (RAM, external channels, PCIe, etc.).

Lastly, AMD's 2013 high-end APUs will be code-named Kaveri and will come in dual-core and quad-core desktop and laptop variants.  Kaveri will pair Steamroller CPU cores with GCN GPU cores, like Kabini.  

For fans of knowing where these crazy codenames come from:
SERVERS:
Abu Dhabi -- The capital of the United Arab Emirates and one of the richest cities in the world.
Seoul -- The capital and largest city of South Korea, and one of the largest metropolises in the developed world.
Delhi -- The capital of India, its second largest city, and the eighth largest in the world.

CPUS:
Vishera -- A Russian river in the Ural mountains

APUs:
Llano -- A river in Texas.
Trinity -- A trio of holy figures in traditional Catholic Christianity, the name of several cities, and the name of the female protagonist of The Matrix series.
...Also, apparently, this is the name of rivers in California and Texas.
Kaveri -- A large river in southern India.
Brazos -- A river in Texas.
Kabini -- Another river in southern India.
Hondo -- Another river in Belize.
Temash -- A river in Belize, a Caribbean-facing Central American nation.

GPU Cores:
Northern Islands -- The Northern Mariana Islands chain in the Pacific Ocean.
Southern Islands -- A series of islands in Singapore.
Sea Islands -- An oxymoronically entitled chain of barrier islands off the coast of South Carolina, Georgia, and Florida.
Tahiti -- A French island in the Pacific.

CPU Cores:
Bulldozer -- A large machine with a flat shovel scoop attached to the front to push earth.
Bulldozer graphic
[Image Source: original: AnandTech; modifications: DailyTech/Jason Mick]

Piledriver -- A hydraulic piece of machinery that pushes pillars into the ground for large structures.
Steamroller -- A vehicle with a heavy metal cylindrical roller, used to flatten concrete or earth.
Excavator -- A heavy tracked machine that digs with a bucket mounted on a rotating, articulated arm.
Excavator Core wide
[Image Source: original: AnandTech; modifications: ArsTechnica]


V. Remaining Questions

Overall, AMD's strategy of pursuing accelerated development of APUs (no pun intended!) seems wise, given that they were its biggest success in 2011.  AMD has essentially conceded the die-shrink race to Intel, partially because it is out of its hands (as AMD has switched to third-party fabs).

But AMD is wagering that Intel has overdelivered in the CPU department with Ivy Bridge, producing a powerful chip, but one that is too expensive to appeal to the majority of consumers.  If AMD is right (which it may be), it will buy itself time, as Intel won't get cheaper Atom-based 22 nm parts out until 2013.  In this sense AMD will go from pitting 32 nm parts against 22 nm parts to pitting 28 nm parts against 22 nm parts -- a slightly less bleak competition.

There are some big questions left by the roadmap, though:
+What is the timeframe for third party IP core-based server SoCs?
+Will the 32 nm Vishera drop in Q1 2013, versus Q3 2013 drops for Kabini, Temash, and Kaveri?
+Will Temash include GCN or some other graphics architecture?
+What is the status of the Steamroller successor, Excavator (Bulldozer Gen. 4)?
+Does Trinity pack a Bulldozer or a Piledriver (Bulldozer Gen. 2) core?

Overall, AMD deserves some praise.  Despite dipping into the red, it has handled the messy transition from a first-party chip manufacturer into a fabless chipmaker better than expected.  It delivered two compelling products in 2011 -- Southern Islands and Fusion.

2012 looks to be a bit of a slower year for the company, as it fleshes out its product lineup.  Probably the biggest single launch will be Trinity, as it promises strong sales if AMD and its partners can deliver high-volume sub-$500 ultrathins.  In close second will be Southern Islands, aka the Radeon HD 7xxx series GPUs, which -- for now -- enjoy a key time-to-market lead on NVIDIA's upcoming Kepler (GeForce GTX 6xx/7xx) family.

2013 looks to be a far, far bigger year for AMD, though, with the new Piledriver and Jaguar families dropping early, and the Steamroller family dropping late.  Sea Islands will also land somewhere in the mix.  And, most importantly, 2013 will be the timeframe for AMD's big exodus down to 28 nm.  It is critical that AMD, TSMC, and GlobalFoundries deliver on 28 nm in 2013 -- their collective fate may depend on it, in the face of a hungry Intel.

Remember, though, that these roadmaps may change depending on how things go.

[All slides are courtesy of AMD via Anandtech.]

Source: Anandtech



Comments



Windows.
By drycrust3 on 2/6/2012 1:51:29 PM , Rating: 2
quote:
Microsoft Corp.'s (MSFT) Windows 8 will be the first mainline version of Windows to support ARM chips in its personal computers.

The biggest reason the PC / x86 market exists is because of what runs on Microsoft Windows. If ARM is going to supplant the x86 based CPU as the CPU of choice, then it needs to have an operating system that is as easy to use as Windows XP, and it needs to run the applications, or viable alternatives, that run on Microsoft Windows whatever.




RE: Windows.
By JediJeb on 2/6/2012 2:39:28 PM , Rating: 2
I think the article mentions that Windows 8 will be ported to the ARM architecture, so that means ARM and x86 will be able to directly compete with each other in a PC platform.


RE: Windows.
By vignyan on 2/7/2012 10:14:46 AM , Rating: 2
Not quite. Most of the BIG software vendors may not want to recompile their software for ARM. So, most software support is still with the x86. Windows 8 for ARM is purely to compete against iOS/Android.

Windows sales were severely hit by the tablet market. And tablet market does not use x86(not a major chunk as of now). So, Microsoft's decision to support ARM - mainly for the tablet market.

If you want to do some serious work, you will still get back to x86 based systems (at least for the near future)


RE: Windows.
By Arsynic on 2/7/2012 11:31:48 AM , Rating: 2
Microsoft is currently working on a solution to easily port x86 code to ARM.


RE: Windows.
By someguy123 on 2/6/2012 4:20:40 PM , Rating: 2
Biggest reason x86 exists is because of legacy software support. RISC processors have the benefit of not dealing with x86 bloat, which made them faster in the past (PPC) or more power efficient now (ARM). Continued shrinks should make the x86 hit negligible, though. If there were to be a shift away from x86, it should've been done a long time ago. Now it isn't quite as significant with Intel moving to 22nm.


RE: Windows.
By StevoLincolnite on 2/7/2012 11:13:30 AM , Rating: 2
quote:
RISC processors have the benefit of not dealing with x86 bloat


As you scale the manufacturing process down, the die-area cost of retaining x86, x64, SSE2, SSE3 and all the other tid bits becomes a smaller and smaller percentage, to the point of it eventually being a non-issue.

For example, the original Pentium processor debuted with around 3.1 million transistors, that includes x86 compatibility and MMX in that transistor budget.

Let's be generous and say that half that transistor budget cost was MMX and x86 "bloat", that's 1.55 million transistors.
Then you grab a processor like the Core i7 3960X which has 2.27 billion transistors... Then you realize how little die area x86 and MMX compatibility actually takes. (Hint: Far less than 1%).

Mind you, it gets more complicated as you add in SSE, SSE2, SSE3, SSE4, 3D Now!, x64 plus others.
But in a few die shrinks, the percentage they take up in terms of die-area will be negligible, while the backwards-compatibility benefit with a few decades' worth of software is simply huge.
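[Editor's note: the commenter's arithmetic checks out. Plugging the numbers quoted above into a quick Python sanity check (figures taken straight from the comment, including the generous "half the Pentium budget is legacy bloat" assumption):]

```python
# Sanity check of the transistor-budget figures quoted above.
pentium_transistors = 3.1e6                 # original Pentium, total budget
legacy_overhead = pentium_transistors / 2   # generous "half is x86/MMX bloat"
core_i7_3960x = 2.27e9                      # Core i7 3960X transistor count

share = legacy_overhead / core_i7_3960x
print(f"{share:.4%}")                       # about 0.07% of the die's transistors
assert share < 0.01                         # far less than 1%, as claimed
```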


RE: Windows.
By someguy123 on 2/8/2012 5:36:48 PM , Rating: 2
right...did you not finish reading my post before posting?


RE: Windows.
By daneren2005 on 2/8/2012 6:11:05 PM , Rating: 2
But they aren't faster...ARM is more power efficient because it is a completely different ISA designed for power efficiency, not performance. Yes, being RISC helped that, but you can create a CISC ISA which is power efficient as well. But because ARM is designed from the bottom up to be power efficient, it has had a lot of trouble scaling in the performance arena. In the next few years you might even see 2 GHz quad core phones, but those 2 GHz cores will not even come close to competing with even a mobile (notebook, not phone) Intel processor at the same clock speed, let alone a desktop processor at the same clock speeds. In order to get the higher performance you need to add things to the architecture that take up space and increase power usage. ARM isn't some magical silver bullet that ignores that.


RE: Windows.
By someguy123 on 2/8/2012 6:40:05 PM , Rating: 2
Right. I never meant to say that ARM is faster. I was trying to differentiate PPC, which at a point was faster than x86 pentium, and ARM, which is more power efficient than comparable low power x86 cpus like atom.


RE: Windows.
By phazers on 2/11/2012 8:41:16 AM , Rating: 2
I agree with that. THG had a pretty interesting article on how ARM in general, and Qualcomm in particular, are probably going to have to go to out-of-order processing and other decades-old Intel/AMD tricks to squeeze out more IPC. Essentially they will be venturing into areas where AMD and Intel have not only tons of design experience, but also a lot of IP.

I can see AMD wanting to keep all their options open at this point, but it's telling that they have yet to formally sign on to the ARM camp...


RE: Windows.
By Jeffk464 on 2/6/2012 5:14:01 PM , Rating: 2
wow, where have you been.


Excavator
By mjdaly on 2/6/2012 12:35:13 PM , Rating: 5
Not to nit pick here, but last I checked, excavators were not commonly referred to as backhoes. They are, in fact, two completely different machines. An excavator usually sits on a pair of tracks, can rotate a full 360 degrees, and has an offset cab with a digging arm. A backhoe, on the other hand, has wheels rather than tracks, a fixed cab, a front loading shovel, and a rear articulated digging arm. They are quite different.

This is the second time within the last 7 days that I have noticed such a statement. The guided bullet article asks readers to not be confused by the "M2 carbine light rifle version" when referring to the M2 heavy machine gun. It is not a version, it is a completely different firearm. Should we also not confuse the M1 Garand, M1 Carbine, or the M1 Abrams tank, as they all use the Model 1 designation?

</end rant>




RE: Excavator
By Uncle on 2/6/2012 12:46:57 PM , Rating: 2
I back you on that. I repair them. People in the states have a hard time understanding what I do for work when I say I'm a Heavy Duty Mechanic.


RE: Excavator
By kattanna on 2/6/2012 1:59:03 PM , Rating: 5
you could always just tell people you service hoes for a living..

that would get a couple looks LOL


RE: Excavator
By RjBass on 2/6/2012 8:27:45 PM , Rating: 2
Give this man a 6!!!


RE: Excavator
By Iketh on 2/7/2012 4:45:34 PM , Rating: 2
Where do I study to become a hoe serviceman?


RE: Excavator
By Jeffk464 on 2/6/2012 5:13:09 PM , Rating: 1
Yes, I would have figured you eat too many burritos.


About ARM
By vignyan on 2/6/2012 3:13:00 PM , Rating: 2
Hey Jason,

I have seen a lot about this ARM stuff from you and other DT editors. I do think x86 is a pretty crappy ISA. That said, it has been significantly successful - whatever the reasons may be. However, there is no significant benefit that ARM gains with micro-ops in place. A high performance ARM vs. x86 matchup will have the scales pointing to x86.

x86 licenses are not available, and the IP is strictly protected and expensive. That's the reason x86 chips are not as popular as ARM. However, the recent demo of the Medfield platform should have made you question ARM's advantage. You can read more in Anand's blog here:
http://www.anandtech.com/show/5365/intels-medfield...
Granted, ARM cores are well designed for low power, but you can see that a single in-order x86 core beats a dual-core out-of-order Cortex-A9 in both performance and power (although only in two benchmarks).

I think your limited vision of x86 as CISC, and the implication that all its implementations are inherently inferior to RISC implementations, is flawed. I hope to see this corrected in the future. I know x86 programs have fewer cache misses, better loop buffer usage and lower penalties thanks to their ISA. Their key disadvantage might be the decode block, but the rest of the units are equally matched with ARM. And ARM decode is not that clean either (with their inline shifts!).

I also think people are jumping to conclusions when they think AMD is going the ARM way - I seriously think they are talking about other IPs for the SoC rather than the CPU IP... A divergent ISA could mean a better binary translation layer being supported.




RE: About ARM
By Jeffk464 on 2/6/2012 5:22:12 PM , Rating: 2
It really doesn't seem like any of the ARM producers are looking to take Intel on in the high end laptop/desktop segment. I don't know if that will ever happen; ARM looks to out-compete Intel in smart phones, smart TVs, ultra portables, etc. Time will tell if Intel will be able to compete in these segments with a low power and inexpensive architecture. Qualcomm looks to be the clear leader for 2012 with its S4 Krait line of chips.


RE: About ARM
By vignyan on 2/7/2012 10:09:24 AM , Rating: 2
Agreed that ARM outperforms Intel and other x86 processors in the mobile markets right now. However, it is not because of the ISA; it's mainly that ARM's low power, low performance alternatives have been really good. With performance demand rising in the mobile market, I am not sure ARM is as obvious a choice as it was before.

I am not siding with any company, and I ask people to do the same.


RE: About ARM
By Taft12 on 2/7/2012 10:50:58 AM , Rating: 2
quote:
With performance demand rising for the mobile market, I am not sure that ARM would be an obvious choice as it was before.


Performance demand is rising, but *actual* performance is rising right along with it. Clockspeed and core count of ARM chips are rising at an astounding pace, and I see no reason why that won't continue.


RE: About ARM
By vignyan on 2/7/2012 3:04:12 PM , Rating: 2
quote:
Clockspeed and core count of ARM chips is rising at an astounding pace


ARM maxes out at 1.5GHz for mobile (the A15 processor, which is the same as the A9 and up from 1.2GHz on the A8). So, not *astounding*.
Core counts, though, have been picking up. Present A9 designs can go up to 4 cores, and A15 keeps that limit (although it brings coherency control). Still, I think ARM did have a good ramp-up on multi-core -- thanks to its partners doing most of the work in software to use their non-coherent A9 cores.

While the performance of ARM processors kept rising, the power did not remain constant: from single-core A8 to dual-core A9, power increased by 0.3W, from 0.8W to 1.1W.

So my point still holds: is ARM still an obvious choice for the high performance mobile market? I am confident that it will still be one heck of a competitor, but can it squish x86 in power consumption while delivering equivalent performance?


RE: About ARM
By Jeffk464 on 2/7/2012 3:33:01 PM , Rating: 2
Hopefully smart phones won't go the route of wintel in the 90's of ever increasing bloat demanding higher and higher performance.


RE: About ARM
By someguy123 on 2/8/2012 5:48:37 PM , Rating: 2
You've got that backwards. It started off bloated with x86 and larger process technologies. As it scaled down, so did the x86 hit overall. IBM surpassed Wintel with their RISC PPC when the x86 hit was still substantial, but it didn't take over the market and (because?) was limited to Macintosh machines.

Looking at ARM's development trends now they're basically increasing power draw with every new highend processor. It doesn't look like they can keep up the performance improvements for very long if they want to stay under intel's atom power envelope.


Powerful GPU
By MDme on 2/6/2012 3:47:29 PM , Rating: 2
quote:
While AMD discrete graphics sales have been solid in recent years after AMD helped turn around its struggling ex-ATI unit, discrete graphics sales as a whole have slowed. AMD and its GPU rival NVIDIA can blame integrated graphics -- including Intel's increasingly powerful on-die GPUs -- for cutting into sales.

I stopped reading after this....




RE: Powerful GPU
By Trisped on 2/6/2012 8:15:48 PM , Rating: 2
I also took exception with that particular paragraph, though I read the whole article.

To me, it doesn't make sense that AMD turned ATI around. Six months after AMD bought ATI, it was ATI which was making a profit while AMD's CPUs and other areas were losing money. This wasn't the result of quick management changes by AMD, but of the hard work of the ATI people over the previous 1-3 years.

The other issue I have is with "... AMD and its GPU rival NVIDIA can blame integrated graphics -- including Intel's increasingly powerful on-die GPUs -- for cutting into sales..." Intel is always coming out with a more powerful integrated processor which it (Intel) says will remove the need for other integrated solutions, or add-in cards. Every time Intel does come out with a new solution it is much better than anything they made before, but 1-2 years behind AMD/NVidia's current integrated solutions (and not comparable to discrete add-in cards). The problem isn't Intel, it is the lack of demand. Most people want it to run on their phones now, and don't care about high power PC video processing.


RE: Powerful GPU
By TakinYourPoints on 2/7/2012 4:29:18 AM , Rating: 2
Why? It is a fact that more laptops are dropping discrete GPUs to accommodate slimmer chassis. On top of that, Intel's IGPs have gotten much, much faster. They are more than fast enough for desktop applications (what most people use their laptops for), and are even fast enough to run games like Team Fortress 2, WoW, LoL (ew), and Starcraft 2 on the 11"-13" laptop screens that those CPUs end up driving.

Forget that though, let's disregard gamers for a moment. How much GPU horsepower does a businessman need to run an Excel spreadsheet? You can even run Final Cut Studio on a Macbook Air, no problem.

The upcoming Ivy Bridge is even faster, and Haswell is supposed to be an even bigger boost.

Intel's integrated GPUs used to suck, but they are making discrete mobile GPUs less necessary. People want lighter and thinner laptops in increasing numbers, and with better IGPs the final barrier for a few people (mobile PC gaming) will also fall away. Hell, they are mostly gone already IMHO; now I'm tempted to install the Diablo 3 beta on a friend's MBA.

Anyway, to ignore the impact Sandy Bridge and Ivy Bridge are making on discrete mobile GPUs is ignoring reality. Discrete GPUs will continue being used on performance 15"-17" 1080p gaming machines, absolutely, but for most people the increasingly fast Intel IGPs will be more than enough.


RE: Powerful GPU
By theapparition on 2/7/2012 11:47:48 AM , Rating: 2
Well said. Too many here get tunnel vision and don't see the big picture with regards to integrated graphics. Intel's solutions, while far from anything spectacular, are good enough for 90% of typical use.


RE: Powerful GPU
By TakinYourPoints on 2/8/2012 4:58:55 AM , Rating: 2
quote:
Too many here get tunnel vision and don't see the big picture with regards to integrated graphics.


I think you can go much much broader with that. :)


RE: Powerful GPU
By Iketh on 2/7/2012 4:42:53 PM , Rating: 2
You're not alone. I cringe every time I read a technical article about processors by Jason Mick.


Other observations
By rocketbuddha on 2/6/2012 1:10:05 PM , Rating: 2
a) Brazos is a river in Texas.
b) Trinity can also be the Hindu religion
Brahma - Creator
Vishnu - Protector
Shiva - Destroyer
c) Looks like AMD is skipping SOI for its high volume processors and Low power processors which in the future would be all APUs
d) There was no information on 28nm SOI which explains why the server MPUs as well as Vishera (Desktop/WS version of the server MPU) still would be in 32nm SOI . How Vishera would compete with a 22nm IvyB, I have no idea..
e) Unless GF surprises us by providing a 20nm SOI, I see AMDs SOI road-map dead in the water.




RE: Other observations
By StevoLincolnite on 2/7/2012 10:59:48 AM , Rating: 2
Well.

AMD "Fusion".
So Trinity could be a reference to the Trinity nuclear test, too.


RE: Other observations
By BaronMatrix on 2/11/2012 3:04:23 AM , Rating: 2
Optical shrink, anyone? Why go with an SOI GPU just to change to BULK? It would have been simpler to make the CPU BULK to begin with. IBM is talking up 20/14nm with no word of going BULK. It has also been said on materials sites that everyone HAS TO go FD-SOI below 14nm - even Intel.


Brazos reference is wrong.
By sviola on 2/6/2012 12:27:09 PM , Rating: 3
Brazos is not a river in Brazil. It is a river in Texas.

Here is the link:

http://en.wikipedia.org/wiki/Brazos_River




Am I the only one...
By Warren21 on 2/6/2012 5:56:42 PM , Rating: 3
That couldn't help but notice the two people arm wrestling are women?

What has been seen cannot be unseen.








By BSMonitor on 2/6/2012 1:18:26 PM , Rating: 2
quote:
Even if AMD can just beat Intel on price and graphics, alone, it may win the sales war (assuming it can produce enough chips). In our recent poll 35 percent of readers said they would be more interested in a $500 USD or less Trinity ultrathin, versus only 17 percent claiming interest in a $700-$1,000 USD Ivy Bridge design.


So either AMD is giving away its APUs, or you are changing the specification for what an Ultrabook is. Either way, I call BS.




Dual architecture
By mycropht on 2/7/2012 3:59:17 AM , Rating: 2
The next thing will be ARM and x86 cores together on a die: the ARM core(s) for browsing and simpler work, and the x86 core(s) for CPU-intensive applications (Skyrim?). Of course, software for both platforms would have to work, too.
The sad thing is that someone will patent this simple and obvious idea and will try to extort money from the guys capable of producing the thing.
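
The idea in the comment above can be sketched as a trivial dispatcher: estimate each task's compute cost and route it to a low-power (ARM-style) or high-performance (x86-style) core pool. This is purely illustrative; the pool names, threshold, and task costs are invented for the sketch, and a real heterogeneous scheduler is far more involved than a fixed cutoff.

```python
# Toy sketch of the dual-architecture idea above: route cheap tasks
# (browsing, background work) to low-power cores and expensive ones
# (a game like Skyrim) to fast cores. All numbers are made up.

LOW_POWER = "arm_core"   # hypothetical low-power core pool
HIGH_PERF = "x86_core"   # hypothetical high-performance core pool
THRESHOLD = 100          # assumed cost cutoff, arbitrary units

def dispatch(estimated_cost: int) -> str:
    """Pick a core pool based on a task's estimated compute cost."""
    return LOW_POWER if estimated_cost < THRESHOLD else HIGH_PERF

tasks = {"browser_tab": 10, "email_sync": 5, "skyrim": 5000}
for name, cost in tasks.items():
    print(f"{name} -> {dispatch(cost)}")
```

In practice the hard part is not the routing rule but migrating a running process between incompatible instruction sets, which is why the patent worry in the comment is somewhat premature.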




ARM IP on an APU?
By BaronMatrix on 2/11/2012 2:52:07 AM , Rating: 2
I don't know why everyone assumed that AMD was talking about ARM cores on AMD chips. IP can be anything: PCIe, LAN, USB, WiFi, or perhaps placing low-power cores in, like nVidia does with Tegra 3. They could even use some type of FPGA-like accelerator for AES, HTPC duties, or rendering, or even some type of high-speed buffer memory.
The possibilities are endless. But it is correct that 2013 will be a banner year for AMD, as they already have the upper hand in the Win8 tablet wars and even the ultrathin market. If the Q4 Trinity delivers its projected gains over Llano (25% CPU, 50% GPU), the lower-power models will be perfect for x86 Transformer-type machines.
I don't think AMD was ever trying to "keep up" with Intel, but to keep the cost-benefit as close as possible.

But then that's just my opinion.




All this effort.... foiled by lousy drivers
By morgan12x on 2/6/12, Rating: -1
By Basilisk on 2/6/2012 12:46:21 PM , Rating: 3
I agree that AMD has some driver-delivery issues: twice in the past two years I've updated AMD drivers and ended up with "focus" issues wherein my left-button seemed disabled or took incorrect actions. [Of course, I was dumb enough to bury both those AMD updates amid a score of MS updates, initially confusing the cause.] Each time an eventual delete and re-install fixed the issues, so I won't damn the drivers themselves. I'll continue to buy AMD products when appropriate.

OT: Get some sleep, Mick. This article hasn't been proofread to your usual standards. Also, note that Trinity is another TX river as well as one in CA, each of some historical note. And "sea island" is no more an oxymoron than "sea bird"; "sea" establishes the locale and it's just another term for a Georgia "barrier island".


RE: All this effort.... foiled by lousy drivers
By mjdaly on 2/6/2012 12:50:48 PM , Rating: 2
Let me get this straight. You purchased a laptop with one of the new Fusion-based processors, then tried to upgrade the drivers. Would it be safe to assume that you downloaded them from the AMD website?

You should be aware that AMD, like nVidia, allows laptop manufacturers to add custom features to their laptops, and thus neither supports laptop drivers directly. You have to go to the laptop manufacturer's website to get new drivers, as there may be added features that the stock drivers do not support. This has been the case with drivers on all laptops with either AMD or nVidia graphics for a LONG time. You can get "stock" mobile drivers from time to time, but there is no guarantee that they will work, depending on what the laptop manufacturer did.

Don't blame AMD, or credit nVidia for something you did not know.


RE: All this effort.... foiled by lousy drivers
By leexgx on 2/6/2012 1:34:06 PM , Rating: 2
No, this is purely AMD's fault here.

AMD's laptop driver downloader will not even start the big download on a laptop unless it's on the supported list.

The desktop driver will not load on a laptop (it installs the Catalyst suite, that's all, but not the driver part).

If he reloads Windows and installs the up-to-date driver, he'll find it doesn't BSOD. AMD sucks at driver updating.


By mjdaly on 2/6/2012 1:51:39 PM , Rating: 2
No, they don't. You do not seem to understand that both AMD and nVidia allow laptop makers to add custom features to their laptops. That approved list contains only laptops with stock features. Neither AMD nor nVidia directly provides driver updates for laptop makers that customize their chipsets with additional features. The only driver updates you get with those manufacturers are what they provide after taking the stock AMD or nVidia driver and adding support for whatever they changed. This might be once or twice a year if you are lucky.

I have an HP 8510p with an AMD HD2600 GPU. HP added additional features for power saving and voltage regulation (as far as I can tell) and the stock Mobility drivers from AMD do not work. This is HP's fault for mucking up the design with extra features, not AMD's. In his case, the laptop maker deviated from the standard design for one reason or another and the stock drivers will not work. It is the job of the manufacturer to deal with this, NOT AMD. I have had several nVidia-based laptop products as well. They are the same.

And a desktop driver should not load on a laptop anyway.


RE: All this effort.... foiled by lousy drivers
By Beenthere on 2/6/12, Rating: 0
RE: All this effort.... foiled by lousy drivers
By tecknurd on 2/6/12, Rating: 0
By Trisped on 2/6/2012 8:21:23 PM , Rating: 2
As a long-term fan of the AIW cards I can say that yes, the software was very bad.

That being said, I have seen problems with ATI, AMD, and nVidia alike in the software/driver department.

Personally, I think it is an example of the state of competition. When there is less competition on the hardware side, the software seems to get better. When there is lots of competition on the hardware side, the software gets buggy and annoying.


RE: All this effort.... foiled by lousy drivers
By Motoman on 2/6/2012 8:33:27 PM , Rating: 1
You are apparently incompetent. I've been building PCs since Windows 95 came out - have built hundreds. Never had any problems with AMD or ATI drivers. Never.

The one and only time I could ever categorically say that a given device/driver was crap was on the NForce3 chipset. I was huge into Nvidia at that time...but every single machine I sent out with an Nforce3 chipset failed. Every single one. And each and every one eventually got a new motherboard with a non-Nforce3 chipset.

But that was it. The one and only time. You trying to persuade the rest of the world that there was or is something endemically bad about ATI or AMD drivers/chipsets is laughable. You should probably just set that screwdriver down and back away from the computer...


RE: All this effort.... foiled by lousy drivers
By tecknurd on 2/7/2012 12:28:13 AM , Rating: 2
There are two different factors: computers that you built for yourself and computers that you built for someone else. I dare you to call up the people you built a computer for that had an ATI card in it. They will probably give you the cold shoulder, if they don't give you the finger. A good business only starts if the same customers keep coming back for more.

For the same reason, I do not go back to the computer store that forced me to go with Intel instead of AMD, because the people there thought the problems I had were AMD's fault. News flash: my problems were capacitors at the time, but they did not know that. I am not going back to that store and I will not recommend it to anybody. The nForce3 chipset probably was not the problem. It might be something else, like the brand of the motherboard. Though nobody forced you to select an nForce3 chipset like the computer store forced me to go with Intel instead of AMD.

You are telling me that I forced you to go with nForce3. I did not. All I said and meant is that the software for the 780G chipset and ATI graphics cards is pathetic. The hardware runs well, and probably better than nVidia's, if good software is used, like the software from Xorg. That means use Linux and do not use AMD's closed-source drivers (fglrx). If I am going to use Windows, I would use either Intel graphics or nVidia.

I dare you to say the same exact thing to my face, because saying it online is completely different since it acts like a buffer for insecure people. I have problems with ATI and AMD software and graphics drivers. Other people have the same issues, so it is not my screwdriver that is causing me problems.


By TakinYourPoints on 2/7/2012 4:13:11 AM , Rating: 2
AMD/ATI graphics drivers consistently have more problems than NVIDIA's. Even now I have friends in DOTA 2 and Battlefield 3 who are having problems, and the thing they all have in common is AMD cards. The same was obviously the case with Rage, massive issues with AMD drivers, but it was a crap game so I can't be bothered to think too much about it. :)

The only time I had BSOD problems in Windows XP were due to Radeon drivers. Switching back to NVIDIA was such a relief. It is ridiculous how consistently bad AMD drivers are while NVIDIA manages solid driver updates in time for everything.


By Kurz on 2/7/2012 9:42:09 AM , Rating: 2
I guess you haven't heard of Nvidia Drivers 196.75


RE: All this effort.... foiled by lousy drivers
By Motoman on 2/8/2012 11:57:46 AM , Rating: 2
You're wrong. Mind-bendingly wrong. And the fact that you assign blame for your buddies' problems to the fact that they both happen to have an ATI/AMD card is exactly the kind of dumbassery we expect from Macolytes like you.

I can guarantee you beyond the shadow of a doubt that Nvidia drivers are not better than ATI/AMD's - and that any problems you personally have are almost certainly due to the fact that you're an idiot.


By TakinYourPoints on 2/19/2012 7:55:55 PM , Rating: 2
You are in such denial it is amazing. These are widespread issues, just check out support forums, but no, it is obviously the fault of the user and not the fact that AMD has had consistent driver issues with games for nearly a decade.


By Motoman on 2/8/2012 11:55:42 AM , Rating: 2
quote:
I dare you to call up the people you built a computer for that had an ATI card in it. They will probably give you the cold shoulder, if they don't give you the finger. A good business only starts if the same customers keep coming back for more.


Essentially no one that has bought a computer from me ever buys one from anyone else ever again. There are no such problems. Period.


By Motoman on 2/8/2012 12:00:17 PM , Rating: 2
Oh, and:

quote:
I dare you to say the same exact thing to my face, because saying it online is completely different since it acts like a buffer for insecure people. I have problems with ATI and AMD software and graphics drivers. Other people have the same issues, so it is not my screwdriver that is causing me problems.


I'd be happy to. If you think your problems are due to ATI/AMD drivers, you're a catastrophic moron. "Other people" have problems with *everything*. There is essentially no chance you have more experience with both Nvidia and AMD/ATI drivers than I do, and I can categorically guarantee you that there is nothing about either of those drivers (except Nforce3) that is endemically bad. But you, sir, appear to be an idiot.


Die shrink is far less important today
By Beenthere on 2/6/12, Rating: -1
RE: Die shrink is far less important today
By Ringold on 2/6/2012 3:02:22 PM , Rating: 1
Nice sentiment, but when Intel is executing well both on architecture and on die shrinks, it puts AMD in a bad place. Your position is only competitive when there is no competition.

Especially considering the tech-literate lead public opinion: who that frequents Anandtech, TechReport, [H] or others would recommend a mainstream or high-end computer with a 32nm AMD chip when a better-performing, more power-efficient 22nm Intel alternative is available?


RE: Die shrink is far less important today
By Jeffk464 on 2/6/2012 5:27:35 PM , Rating: 2
There are uses where Llano will beat Sandy Bridge if you are relying on the built-in graphics. But yeah, if you have a discrete graphics card there isn't much to recommend AMD.


By rocketbuddha on 2/6/2012 4:49:08 PM , Rating: 2
The important things are:

a) Power consumption
b) Performance at the TDP
c) Die size
d) Yields

a) and b) are mildly related; c) and d) are closely related.

Currently at 32nm:
AMD Disadvantages
AMD's chips have a greater die size than Intel's.
AMD's chips have more metal layers.
AMD's SOI-based top end puts out more heat than SB (125 W vs 90 W).
AMD clocks its MPUs at a higher default frequency, hence greater power consumption.
AMD performs lower on the CPU side of things.
AMD has less-than-satisfactory yield at GF (though GF's APM process progressively improves yield over time). This resulted in a Llano that was welcomed but did not yield in enough volume to satisfy everyone.
Wafer cost for SOI is higher than for bulk.

Advantages
The Llano chip is far more balanced (but yields need to improve further).
When targeted at the right market, Llano is a wonderful alternative to SB-based parts, e.g. small form factor and HTPCs.
When software makers utilize GPU acceleration and OpenCL appropriately, things can get far better:
http://www.tomshardware.com/reviews/opencl-simhd-v...

Future
If Intel delivers Ivy Bridge on schedule, then at 22nm Ivy will:
a) Have better performance than SB (even if not by huge margins)
b) Have smaller die sizes than current SB
c) Have better yields due to the smaller die size
d) Have better power characteristics, improving the lead (this is what Intel is targeting the initial Ivy MPUs to be)
e) Have improved graphics capability

Agreed that their integrated graphics will still suck compared to AMD's, but they are improving over what they have now.
Also, since FinFET is being attempted for the first time, they may run into issues they have never encountered before that might impact yields.

Forget diminishing returns; I do not see any way AMD will be in a better position than it is now unless:
a) Piledriver magically increases per-core efficiency
b) PD-based cores simultaneously result in a smaller die size and correspondingly higher yields
c) PD uses less power than BD at the same node
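
The die-size/yield coupling behind points (b) and (c) above can be illustrated with the classic Poisson yield model, Y = exp(-D0 * A), where D0 is the killer-defect density and A the die area. The defect density and die areas below are invented for illustration, not actual GF or Intel figures.

```python
import math

def poisson_yield(die_area_cm2: float, defects_per_cm2: float) -> float:
    """Poisson yield model: fraction of dies with zero killer defects."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

d0 = 0.5         # assumed killer-defect density per cm^2 (illustrative)
big_die = 3.0    # cm^2 -- stand-in for a large 32nm chip (assumed)
small_die = 1.6  # cm^2 -- stand-in for a shrunk 22nm chip (assumed)

print(f"big die yield:   {poisson_yield(big_die, d0):.1%}")
print(f"small die yield: {poisson_yield(small_die, d0):.1%}")
```

Under these assumed numbers, roughly halving the die area about doubles the yield, which is why smaller dies and better yields tend to move together in the comment's argument.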


"My sex life is pretty good" -- Steve Jobs' random musings during the 2010 D8 conference














botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki