


CEO says next generation 14 nm "tock" architecture refresh (Skylake) will not be delayed

Intel Corp. (INTC), which reported its earnings yesterday, gave investors an unwelcome surprise alongside those results: its first 14 nm chip, codenamed Broadwell, is being delayed by a quarter.

I. A Minor Slip

Intel was supposed to start shipping Broadwell chips to OEMs in Q4 2013 (this quarter), giving them time to integrate the chip into their new notebooks, laptops, desktops, and tablets.  Instead, the new chips will start shipping in Q1 2014.

Currently, Intel's premium Core Series-branded Haswell chips are produced on a 22 nm process.  But moving from the 22 nm node to 14 nm has proved trickier than Intel expected.

IDF 2013 [Image Source: Jason Mick/DailyTech LLC]

Normally a die shrink involves years of prototyping technologies.  When a production process is mature enough to come out of the laboratory, it's installed at the fab.  Equipment must often be updated or replaced to handle the new process once it's moved to a full-scale fab on its final path towards production.  At that point, test runs are executed to study the efficacy of the fab hardware and methodology for the node, typically at lower volumes than the final production runs.

It is typical to find problems with the process at that stage, and it is up to the engineers to institute a set of process-method and hardware changes to minimize those defects.  When the fixes are in place, a semiconductor manufacturer crosses its fingers, metaphorically speaking, and moves to volume production.
Intel currently owns 95 percent of the server market (Haswell is pictured).

This approach typically works out for Intel, but this time something went wrong.  The fix failed to minimize the number of wafer defects to the extent expected, and as a result a relatively high percentage of Intel's CPUs were turning out dead on arrival.
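
To put the yield problem in numbers, a standard first-order Poisson model relates defect density and die area to the fraction of good dies.  The short sketch below uses hypothetical figures purely for illustration -- it is not based on Intel's actual defect data.

import math

def poisson_yield(defect_density_per_cm2, die_area_cm2):
    # First-order Poisson yield model: fraction of dies with zero defects.
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Hypothetical numbers for illustration only (not Intel's figures):
die_area = 1.5        # cm^2, a rough size for a quad-core CPU die
mature_d0 = 0.1       # defects per cm^2 on a well-tuned process
immature_d0 = 0.5     # defects per cm^2 on a process still being debugged

print("Mature process yield:   {:.1%}".format(poisson_yield(mature_d0, die_area)))
print("Immature process yield: {:.1%}".format(poisson_yield(immature_d0, die_area)))
# Raising defect density from 0.1 to 0.5 per cm^2 cuts yield from roughly
# 86% to roughly 47% of dies -- many more chips arrive dead.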

Intel's new CEO Brian Krzanich insists this is not uncommon, stating, "We have confidence the problem is fixed because we have data it is fixed.  This happens sometimes in development phases like this. That's why we moved it a quarter.  We have a strong desire to get Broadwell to market. If I could, there'd be nothing slowing me down. This is a small blip in the schedule, and we'll continue on from here."

In Intel's internal jargon, Broadwell is a so-called "tick" release, which involves simply shrinking the previous processor architecture (Haswell, in this case) while possibly adopting some minor tweaks/improvements based on lessons learned.  By contrast, the next generation after Broadwell will be a new architecture, called a "tock".  That "tock" release -- Skylake -- will not be delayed from its prospective 2015 ship date, Mr. Krzanich stated.
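
For readers keeping score, the recent cadence -- with Broadwell's slip folded in -- looks roughly like the summary-in-code below.  The pre-Broadwell entries reflect Intel's public roadmap and launch years and are included here only for context.

# Intel's "tick-tock" cadence: a "tick" shrinks the process node,
# a "tock" debuts a new microarchitecture on the existing node.
tick_tock = [
    ("Sandy Bridge", "tock", "32 nm", 2011),
    ("Ivy Bridge",   "tick", "22 nm", 2012),
    ("Haswell",      "tock", "22 nm", 2013),
    ("Broadwell",    "tick", "14 nm", 2014),  # OEM shipments slip from Q4 2013 to Q1 2014
    ("Skylake",      "tock", "14 nm", 2015),  # still on schedule, per Krzanich
]

for name, phase, node, year in tick_tock:
    print("{:<13} {}  {}  {}".format(name, phase, node, year))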

II. Intel Enjoys Healthy Process Lead, but Its Chips are Slower than Samsung's

The delay of Broadwell is a bit of bad news, but it could be much worse; the earnings report for Intel was otherwise better than expected.  And while Intel has yet to establish a strong presence in the smartphone and tablet markets (unlike the server and traditional PC markets that it dominates), it is starting to attract interest in the mobile space, thanks in part to its process lead and the intrinsic power savings that lead provides.

Intel is producing 14 nm chips at three key fab facilities. [Image Source: Intel]

Despite the setback, that lead remains relatively large.  International Business Machines Corp. (IBM) in August soft-launched its 22 nm Power8 architecture as high-end server chips and a licensable architecture.  However, it's still working with partners to try to bring that chip to the market in physical form.  Taiwan Semiconductor Manufacturing Comp., Ltd. (TPE:2330) -- a top third-party manufacturer for ARM chips (the mobile industry's dominant architecture), whose most advanced volume node is 28 nm -- has taped out 16 nm chips.  It's attempting to jump directly from 28 nm to 16 nm.

16 nm is still a larger node than 14 nm, but TSMC didn't seem to have the same problems with its die shrink, opening the floodgates to production with the release of design flows for common chip types (including various ARM CPUs) last month.

On the other hand, Intel's new Bay Trail Atom Z3770 chip faces stiff competition from a 28 nm chip produced by Samsung Electronics Comp., Ltd. (KSC:005930) -- the A7 processor Apple, Inc. (AAPL) uses in its new iPhone 5S.  The performance of the A7 shows that while a process lead provides some advantages in power and performance, those advantages may increasingly be unable to overcome the inherent architectural baggage that x86 brings to the table.

Apple's A7 managed to beat Bay Trail chips, despite a slower clock and larger node size.
Samsung's process is 28 nm LP with gate-first high-κ metal gate (HKMG), while Intel's 22 nm process uses gate-last HKMG.  This is a huge win for Samsung, as it means it's producing a better chip on a cheaper, mature process -- the best possible scenario.  By contrast, Intel's still-fresh 22 nm process is not only slower -- it also costs Intel more to produce.
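
One way to frame that architectural argument is to normalize a benchmark score by clock speed, giving a rough per-GHz figure of merit.  The sketch below does exactly that; the chip names are real, but the scores are placeholders invented for illustration, not measured results.

# Rough "performance per GHz" comparison. Scores are placeholders, NOT real data.
chips = {
    "Apple A7 (28 nm)":         {"score": 1400, "clock_ghz": 1.3},  # hypothetical score
    "Intel Atom Z3770 (22 nm)": {"score": 1100, "clock_ghz": 2.4},  # hypothetical score
}

for name, c in chips.items():
    print("{:<26} score/GHz = {:.0f}".format(name, c["score"] / c["clock_ghz"]))
# With these placeholder inputs, the A7 delivers far more work per clock,
# which is the sense in which a lower-clocked, larger-node chip can still win.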

In other words, don't bet against Intel's ARM rivals, even if they're a bit behind on process technology.

Source: CNET via Intel



Comments

RE: RISC vs CISC
By Varun on 10/16/2013 11:12:43 PM, Rating: 5
It was never RISC vs. CISC, and anyone who thinks it is has no idea how a modern processor works.

The "baggage" of CISC is a miniscule amount of die area.

No, the only real delay is that Intel only recently started caring about power consumption, and even more recently decided to go after the very-low-power market. If Tick Tock had come to Atom at the same time it came to the Core series, we wouldn't even need this discussion.

But once again it has nothing to do with CISC vs RISC. Good try though - maybe if you say it enough times it will come true?


RE: RISC vs CISC
By bug77 on 10/17/2013 6:02:38 AM, Rating: 2
quote:
The "baggage" of CISC is a minuscule amount of die area.


Everything is a "miniscule amount of die area" since the GPU has been integrated into the CPU die.
Joking aside, last I checked (and that was 10+ years ago), the overhead of CISC was not about the die area (although that has to burn through some extra power). It was about breaking down instructions into micro-instructions acting as a bottleneck. Also, because Intel and AMD may break down instructions differently, this also means code will behave differently on those platforms (this is strictly about optimizations, not incompatibilities, mind you).
Feel free to correct me if things have changed in the meantime; and I wouldn't be surprised if they did.
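To illustrate the kind of instruction cracking I mean, here's a toy sketch -- the mnemonics and micro-op names are made up and don't correspond to any real decoder:

# Toy illustration of CISC-style decode: one complex instruction is
# "cracked" into several RISC-like micro-ops before execution.
def decode(instruction):
    op, dst, src = instruction.replace(",", "").split()
    if dst.startswith("["):                  # read-modify-write on memory
        addr = dst.strip("[]")
        return ["LOAD  tmp <- mem[%s]" % addr,
                "%-5s tmp <- tmp, %s" % (op.upper(), src),
                "STORE mem[%s] <- tmp" % addr]
    return ["%-5s %s <- %s, %s" % (op.upper(), dst, dst, src)]

for uop in decode("add [rbx], rax"):
    print(uop)
# -> LOAD  tmp <- mem[rbx]
#    ADD   tmp <- tmp, rax
#    STORE mem[rbx] <- tmp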


RE: RISC vs CISC
By mjv.theory on 10/17/2013 8:42:10 AM, Rating: 2
quote:
It was about breaking down instructions into micro-instructions acting as a bottleneck.


This is indeed the contrast that I was alluding to with the RISC vs CISC theme, but not because of the way it has historically been perceived. What I was attempting to imply is that RISC may have advantages in an era of "Purpose Specific" computing, rather than "General Purpose" computing. The rise of GPU compute, contextual co-processors, and heterogeneous compute generally may be a sign of the eventual demise of the "CPU".

Ironically for Intel, cheaper die area facilitated by smaller process nodes and a continual push toward low power states might actually be to the eventual advantage of RISC rather than CISC. ARM is already deploying design tools to aid rapid development of custom solutions. Using general purpose cores is fine if the cost savings and performance compromise are acceptable. But if the possibility exists to easily and cheaply design processing "blocks" to compute exactly for your specific purpose with no performance or power compromise, then that is likely to be the preferred route. Effectively, a collection of RISC processing blocks, rather than a monolithic CPU with a large collection of instructions.

To me, neither approach is clearly proven as the best way forward; just commenting and speculating.


RE: RISC vs CISC
By someguy123 on 10/18/2013 1:12:40 AM, Rating: 3
ASICs are nothing new. Specific hardware acceleration is what people are moving away from with general-purpose GPU computing. There's no reason at all to waste more space slapping on multiple ASICs for every function when you can decode video, play games, and draw webpages on the GPU. I'm not even sure where you're getting that idea from, considering the popular shift for mobile now is toward HSA.

There will never be a "demise" of the CPU, because a CPU is significantly faster at linear code, not to mention it needs to exist to make the API calls for GPGPU processing. At worst you see their value degrade in multimedia devices, with the performance parts left to servers, which already describes the current climate pretty well outside of enthusiast builds and render farms.
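
The "faster at linear code" point is basically Amdahl's law: the serial fraction of a workload caps how much parallel hardware can help. Quick sketch with made-up numbers:

# Amdahl's law: with a serial fraction s, speedup on n parallel units
# is 1 / (s + (1 - s) / n).
def amdahl_speedup(serial_fraction, n_units):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_units)

# Made-up example: 20% of the workload is inherently serial.
for n in (4, 64, 1024):
    print("%4d parallel units -> %.2fx speedup" % (n, amdahl_speedup(0.2, n)))
# Even with 1024 units the speedup stays under 5x, which is why a fast CPU
# core for the serial part (and for dispatching GPU work) still matters.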


RE: RISC vs CISC
By Christobevii3 on 10/17/2013 10:12:37 AM, Rating: 1
It seems ARM, in going 64-bit and adding larger memory pipelines, has more room to grow but also more risk of power usage growing than Intel, which has basically already been at 64-bit CPUs with 128-bit memory interfaces. Intel just has to optimize its current chips, shrink the die, bring the modem into the core, and upgrade its graphics, and it's good.


RE: RISC vs CISC
By TheJian on 10/17/2013 7:59:42 PM, Rating: 2
Let me know when the integration of a modem and a faster GPU doesn't use any power... ROFL.

Until then, Intel will fight its power draw just like the other side will deal with memory pipelines and 64-bit issues.


"This is about the Internet.  Everything on the Internet is encrypted. This is not a BlackBerry-only issue. If they can't deal with the Internet, they should shut it off." -- RIM co-CEO Michael Lazaridis














botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki