
CEO says next generation 14 nm "tock" architecture refresh (Skylake) will not be delayed

Intel Corp. (INTC), which reported its earnings yesterday, gave investors an unwelcome surprise as well when it announced that its first 14 nm chip, codenamed Broadwell, is being delayed by a quarter.

I. A Minor Slip

Intel was supposed to start shipping Broadwell chips to OEMs in Q4 2013 (this quarter), giving them time to integrate the chip into their new notebooks, laptops, desktops, and tablets.  Instead, the new chips will start shipping in Q1 2014.

Currently, Intel's premium Core Series-branded Haswell chips are produced on a 22 nm process.  But moving from the 22 nm node to 14 nm has proved trickier than Intel expected.

IDF 2013 [Image Source: Jason Mick/DailyTech LLC]

Normally a die shrink involves years of prototyping technologies.  When a production process is mature enough to come out of the laboratory, it's installed at the fab.  Equipment must often be updated or replaced to handle the new process once it's moved to a full-scale fab on its final path towards production.  At that point test runs are executed to study the efficacy of the fab hardware and methodology for the node, typically at lower volumes than the final production runs.

It is typical to find problems with the process at that stage, and it is up to the engineers to institute a set of process method and hardware changes to minimize those defects.  When the fixes are in place, a semiconductor manufacturer crosses its fingers, metaphorically speaking, and moves to volume production.
Intel currently owns 95 percent of the server market (Haswell is pictured).

This approach typically works out for Intel, but this time something went wrong.  The fix failed to minimize the number of wafer defects to the extent expected, and as a result a relatively high percentage of Intel's CPUs were turning out dead on arrival.
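To see why defect density matters so much at a new node, consider the classic Poisson yield model, Y = e^(-D·A).  A minimal sketch with purely illustrative numbers follows; Intel does not publish its defect densities or die areas, so the figures below are guesses for demonstration only:

```python
import math

def poisson_yield(defects_per_cm2: float, die_area_cm2: float) -> float:
    """Classic Poisson yield model: Y = exp(-D * A).

    D is the random-defect density (defects/cm^2), A is the die area
    (cm^2), and Y is the fraction of dice expected to work.
    """
    return math.exp(-defects_per_cm2 * die_area_cm2)

# Illustrative numbers only -- not Intel's actual data.
die_area = 1.5  # cm^2, in the ballpark of a large client CPU die
for d in (0.1, 0.3, 0.5):
    print(f"D = {d}/cm^2 -> expected yield = {poisson_yield(d, die_area):.0%}")
```

In this toy model, tripling the defect density cuts the share of good dice from roughly 86 percent to 64 percent -- the kind of shortfall that can turn a planned volume ramp into a quarter's delay.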

Intel's new CEO Brian Krzanich insists this is not uncommon stating, "We have confidence the problem is fixed because we have data it is fixed.  This happens sometimes in development phases like this. That's why we moved it a quarter.  We have a strong desire to get Broadwell to market. If I could, there'd be nothing slowing me down. This is a small blip in the schedule, and we'll continue on from here."

In Intel's internal jargon, Broadwell is a so-called "tick" release, which involves simply shrinking the previous processor architecture (Haswell, in this case), while possibly adopting some minor tweaks/improvements based on lessons learned.  By contrast, the next generation after Broadwell will be a new architecture, called a "tock".  That "tock" release -- Skylake -- will not be delayed, Mr. Krzanich stated, from its prospective 2015 ship date.

II. Intel Enjoys Healthy Process Lead, but Its Chips are Slower than Samsung's

The delay of Broadwell is a bit of bad news, but it could be much worse; the earnings report for Intel was otherwise better than expected.  And while Intel has yet to establish a strong presence in the smartphone and tablet markets (unlike the server and traditional PC markets that it dominates), it is starting to attract interest in the mobile space, thanks in part to its process lead and the intrinsic power savings that lead provides.

Intel is producing 14 nm chips at three key fab facilities. [Image Source: Intel]

Despite the setback, that lead remains relatively large.  International Business Machines, Inc. (IBM) in August soft-launched its 22 nm Power8 architecture as high-end server chips and a licensable architecture.  However, it's still working with partners to try to bring that chip to market in physical form.  Taiwan Semiconductor Manufacturing Comp., Ltd. (TPE:2330) -- a top third-party manufacturer for ARM chips (the mobile industry's dominant architecture) -- has taped out 16 nm chips.  It's attempting to jump directly from its current 28 nm process to 16 nm.

16 nm is still a larger node than 14 nm, but TSMC didn't seem to have the same problems with its die shrink, opening the floodgates to production with the release of design flows for common chip types (including various ARM CPUs) last month.

On the other hand, Intel's new Bay Trail Atom Z3770 chip faces stiff competition from a 28 nm chip produced by Samsung Electronics Comp., Ltd. (KSC:005930) -- the A7 processor Apple, Inc. (AAPL) uses in its new iPhone 5S.  The performance of the A7 shows that while a process lead provides some advantages in both power and performance, those advantages may increasingly be unable to overcome the inherent architectural baggage that x86 brings to the table.
 

Apple's A7 managed to beat Bay Trail chips, despite a slower clock and larger node size.
 
Samsung's process is 28 nm LP with gate-first high-κ metal gate (HKMG), while Intel's 22 nm process uses gate-last HKMG.  This is a huge win for Samsung, as it means it's producing a better chip on a cheaper, mature process -- the best possible scenario.  By contrast, Intel's still-fresh 22 nm node is not only slower -- it also costs Intel more to produce.

In other words, don't bet against Intel's ARM rivals, even if they're a bit behind on process technology.

Source: CNET via Intel



Comments



RISC vs CISC
By mjv.theory on 10/16/2013 6:42:45 PM , Rating: 2
Intel currently leads on process node, which may help for now, but process is a limited game to play. We are already seeing function-specific, or contextual, processors (the Moto X, for example). "General purpose" computing once seemed the obvious answer, but with ARM (and don't forget MIPS), function-specific silicon (once the domain of FPGAs) is increasingly plausible. Will x86 also be able to join that fray?

Remember, the argument is not really Intel vs ARM, it is RISC vs CISC.




RE: RISC vs CISC
By Varun on 10/16/2013 11:12:43 PM , Rating: 5
It was never RISC vs CISC, and anyone who thinks it is has no idea how a modern processor works.

The "baggage" of CISC is a miniscule amount of die area.

No, the real issue is that Intel only recently started caring about power consumption, and even more recently decided to go after the very low power market. If Tick-Tock had come to Atom at the same time it came to the Core series, we wouldn't even need this discussion.

But once again it has nothing to do with CISC vs RISC. Good try though - maybe if you say it enough times it will come true?


RE: RISC vs CISC
By bug77 on 10/17/2013 6:02:38 AM , Rating: 2
quote:
The "baggage" of CISC is a miniscule amount of die area.


Everything is a "minuscule amount of die area" since the GPU has been integrated into the CPU die.
Joking aside, last I checked (and that was 10+ years ago), the overhead of CISC was not about the die area (although that has to burn through some extra power). It was about the decoding of instructions into micro-instructions acting as a bottleneck. Also, because Intel and AMD may break down instructions differently, code can behave differently on the two platforms (this is strictly about optimizations, not incompatibilities, mind you).
Feel free to correct me if things have changed in the meantime; I wouldn't be surprised if they had.
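For a concrete (if cartoonish) picture of that decode step, here is a toy sketch in Python; the instruction forms and micro-op names are made up for illustration and are not any vendor's actual decode tables:

```python
# Toy decoder: one CISC-style instruction -> a sequence of RISC-like
# micro-ops. Purely illustrative; real Intel/AMD decoders are
# proprietary and far more elaborate.
DECODE_TABLE = {
    # 'add [mem], reg' must read memory, add, then write back: 3 uops
    ("add", "mem", "reg"): ("load  t0, [mem]",
                            "add   t0, t0, reg",
                            "store [mem], t0"),
    # 'add reg, reg' is already RISC-like: 1 uop
    ("add", "reg", "reg"): ("add   reg, reg, reg",),
}

def decode(mnemonic: str, dst: str, src: str) -> tuple:
    """Return the micro-op sequence for one (toy) instruction."""
    return DECODE_TABLE[(mnemonic, dst, src)]

print(decode("add", "mem", "reg"))  # 3 micro-ops from 1 instruction
print(decode("add", "reg", "reg"))  # 1 micro-op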


RE: RISC vs CISC
By mjv.theory on 10/17/2013 8:42:10 AM , Rating: 2
quote:
It was about breaking down instructions into micro-instructions acting as a bottleneck.


This is indeed the contrast that I was alluding to with the RISC vs CISC theme, but not because of the way it has historically been perceived. What I was attempting to imply is that RISC may have advantages in an era of "purpose specific" computing, rather than "general purpose" computing. The rise of GPU compute, contextual co-processors, and heterogeneous compute generally may be a sign of the eventual demise of the "CPU".

Ironically for Intel, cheaper die area facilitated by smaller process nodes and a continual push toward low power states might actually be to the eventual advantage of RISC rather than CISC. ARM is already deploying design tools to aid rapid development of custom solutions. Using general purpose cores is fine if the cost savings and performance compromise are acceptable. But if the possibility exists to easily and cheaply design processing "blocks" to compute exactly for your specific purpose with no performance or power compromise, then that is likely to be the preferred route. Effectively, a collection of RISC processing blocks, rather than a monolithic CPU with a large collection of instructions.

To me, neither approach is clearly proven as the best way forward; just commenting and speculating.


RE: RISC vs CISC
By someguy123 on 10/18/2013 1:12:40 AM , Rating: 3
ASICs are nothing new. Specific hardware acceleration is what people are moving away from with general-purpose GPU computing. There's no reason at all to waste more space slapping on multiple ASICs for every function when you can decode video, play games, and draw webpages on the GPU. I'm not even sure where you're getting that idea, considering the popular shift in mobile now is towards HSA.

There will never be a "demise" of the CPU, because a CPU is significantly faster at serial code, not to mention it needs to exist to make the API calls for GPGPU processing. At worst you'll see their value degrade in multimedia devices, with the performance parts left to servers, which already describes the current climate pretty well outside of enthusiast builds and render farms.


RE: RISC vs CISC
By Christobevii3 on 10/17/2013 10:12:37 AM , Rating: 1
It seems ARM, in going to 64-bit and adding larger memory pipelines, has more room to grow but also more risk of power usage growing, whereas Intel has basically already been at 64-bit CPUs with 128-bit memory interfaces. Intel just has to optimize current chips, die-shrink, bring the modem into the core, and upgrade its graphics, and it's good.


RE: RISC vs CISC
By TheJian on 10/17/2013 7:59:42 PM , Rating: 2
Let me know when the integration of a modem and a faster GPU doesn't use any power...ROFL.

Until then, Intel will fight its power draw just like the other side will deal with memory pipelines and 64-bit issues.


RE: RISC vs CISC
By michael2k on 10/17/13, Rating: -1
RE: RISC vs CISC
By ritualm on 10/17/2013 8:32:33 PM , Rating: 5
When is Samsung's 1xnm FinFET process node coming online? You haven't heard a peep about it.

When is TSMC's own 1xnm process node coming online? In a year? Except we already have a long record of TSMC under-delivering on their promises. AMD and Nvidia both build their GPUs on TSMC's fabs. Both of them got hit badly at 55nm, at 40nm, at the failed 32nm, and yet again at 28nm.

TSMC wants to jump from 28nm to 1xnm directly in a few years. Naw, give it a few more years. Likewise, Samsung's isn't ready as early as you'd like to think it is.

If Intel has problems getting good yields at 14nm and is admitting it in public, it means both TSMC and Samsung are painting a rosier picture regarding their own 1xnm tech than they should be.

Unfortunately, you are an Apple shill, and you'd take every opportunity to undermine anything its competitors are doing in any field. That automatically renders your opinions irrelevant.


RE: RISC vs CISC
By michael2k on 10/21/2013 1:43:31 PM , Rating: 2
Yeah, Intel is planning on taking the process lead advantage, I understand that.

But for the last 7 years, it didn't have that.

I wasn't incorrect in stating that. This year is the first time Intel is ahead of TSMC, Samsung, et al., by shipping a 22nm Bay Trail part while they are still at 28nm.


RE: RISC vs CISC
By KurgSmash on 10/18/2013 1:40:41 AM , Rating: 3
That battle is mostly over, CISC won. It won in the desktop, it won in the server, and it's even likely to win in most of HPC. IBM's still kicking around but they're very, very specialized.

For the most part it's a silly old battle anyway; CISC vs. RISC isn't all that important any more, it's just who has the more clever designers and the best process.


Doom and gloom?
By brianbrain on 10/16/2013 6:02:28 PM , Rating: 5
I love how Intel missing their target by one quarter (when was the last time they missed a target, by the way?) is causing the entire tech community to gasp and declare Intel (and x86 along with it) doomed.

TSMC is notorious for having process and associated yield problems, and so is Samsung compared to Intel. There is absolutely no doubt in my mind that you are going to start hearing stories about how TSMC is "struggling" at 16nm. They have been and continue to struggle with high yields at 28nm; meanwhile, Intel is in full production on its 22nm node.

As far as everyone wanting to declare "x86 dead" - it's not. Far from it. A look at any number of benchmarks comparing Intel's latest mobile offerings to ARM's shows a recurring pattern: x86 is gaining more performance per watt with each product cycle than ARM is. Intel is scaling "down" faster than ARM can scale "up", and in ARM's race to scale up at all costs, it is losing the performance per watt advantage.

Also, regarding Apple's A7 "beating" Intel's Bay Trail... uhm, what? Incredibly wrong, unless you're talking about graphics performance... which we're not. When it comes to pure CPU performance, Bay Trail mops the floor with the A7 -- however, when it comes to the graphics portion of the SOC, the A7 takes Bay Trail down in kind. Intel's made HUGE strides in graphics performance over the past few years, however, and I would imagine that within the next 2 years we'll start seeing some of the fastest mobile graphics coming out of Intel, right alongside Qualcomm and Imagination Tech (Apple's A7 uses Imagination Tech's GPUs).

That aside, Bay Trail is a fantastic SOC, and is being sold at a great price too. This is a perfect SOC for tablets of all kinds, as its CPU performance per watt is unmatched in the high performance mobile sector. Granted, Intel doesn't have a SOC that is going to take down ARM in the mobile phone race, but ARM doesn't have a SOC or CPU that will take down Intel in the server/desktop market either. The battleground is in the tablet space, and it is heating up *fast*. If Intel is able to dominate the tablet space after ARM's HUGE head start, then watch out - mobile phones are next.

2014-2016 are looking like *very* interesting years.




RE: Doom and gloom?
By augiem on 10/16/2013 6:52:54 PM , Rating: 5
quote:
I love how Intel missing their target by one quarter (when was the last time they missed a target, by the way?) is causing the entire tech community to gasp and declare Intel (and x86 along with it) doomed.


Hyperbolic sensationalist click-baiting is the new "journalism". Two cherry-picked data points and a straight line surely spells DOOM or world domination. Teh interwebs gets dumber every day.


RE: Doom and gloom?
By PrinceGaz on 10/17/2013 4:32:26 PM , Rating: 2
It is a story worth reporting, if nothing else for people looking ahead to next-gen chips in 2014. There is a lot of rubbish on the interweb, but reading my news on it certainly hasn't made me dum.


end-user dates
By willhsmit on 10/16/2013 5:31:49 PM , Rating: 1
Does anyone know/guess what this means for release dates in the retail market? I assume Broadwell was not coming out before 2014 in the retail market anyway; how much does this delay it?




RE: end-user dates
By stadisticado on 10/16/2013 5:45:52 PM , Rating: 2
Well considering that Haswell laptops are just now entering the market, I'm actually not seeing how much this will impact actual market entry for Broadwell.


RE: end-user dates
By boeush on 10/16/2013 6:06:05 PM , Rating: 2
If the chips will start to ship in Q1 2014, then give OEMs one more quarter to build inventories, finalize designs, and test, then yet another quarter to start building and shipping upgraded designs... Seems to me, Q3 2014 is when we should be expecting Broadwell to start showing up in retail systems. That puts it pretty much 1 year after Haswell entered retail.


RE: end-user dates
By willhsmit on 10/16/2013 7:27:00 PM , Rating: 2
Yeah, about a year after Haswell was what I was expecting, so wondering if this meant a quarter delay from that.

By retail I mean retail processors for home-build desktops, but I assume they'll hold that until the integrators are ready.


RE: end-user dates
By boeush on 10/17/2013 2:15:59 PM , Rating: 2
The way my tea leaves read, it's probably going to be more like a shift from start of Q3 to end of Q3, but basically by fall 2014 Broadwell should be shipping in new models. My gut says stand-alone retail CPUs probably won't be available until Q4 2014...


RE: end-user dates
By 10bpc on 10/19/2013 4:27:40 PM , Rating: 2
As a matter of fact, there is such a disconnect between INTC & end users that innovations go under the radar. A typical example is section 2.6 of the Haswell datasheets vol. 1, which was never clear about color depth at any resolution, despite many mentions of deep color in various documentation. Meanwhile, the Windows/Linux/Apple stock drivers did not offer a choice over 32bpp (8 bits per color & transparency), so even if the hardware is capable, none of the manufacturers know or have tested. The public can't question design engineers directly and distributors can't find the info. Try support and ask them about 10bpc on Iris Graphics if you want to have fun! The average consumer will probably just wait with the money in the bank until the info is available, and no 8-K filed with the SEC is going to explain that.
You have a market inundated with 4K displays and smartphones getting there, so with the emergence of 8K, I hope Westlake graphics allows 4320p at the minimum 10bpc mandated by BT.2020 a while back... Not ready to give up the x86 ecosystem, and I am not color blind yet.
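For scale, a quick back-of-the-envelope on what 4320p at 10bpc implies in raw, uncompressed pixel bandwidth (ignoring blanking intervals and link-encoding overhead, so real links need even more):

```python
# Raw, uncompressed pixel bandwidth for 4320p ("8K") at 10 bpc, 60 Hz.
width, height = 7680, 4320   # 4320p / "8K UHD"
bits_per_pixel = 3 * 10      # RGB at 10 bits per channel (BT.2020 minimum)
refresh_hz = 60
gbps = width * height * bits_per_pixel * refresh_hz / 1e9
print(f"~{gbps:.1f} Gbit/s of raw pixel data")  # ~59.7 Gbit/s
```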


Seriously?
By Hector2 on 10/17/2013 12:31:58 PM , Rating: 3
Intel has always had aggressive targets and, if you look back over the years, it's not at all unusual for Intel to delay a quarter. But is the sky falling? No. Intel never releases a new process to production until it can sustain the much bigger volumes & quality needed for production shipments.

As for TSMC, the author states that they "didn't seem to have the same problems with its die shrink, opening the floodgates to production with the release of design flows". Really? That's naive.

You really don't even need to have a process to "release" a "design flow" or do a tapeout. Those are CAD exercises, not a physical chip.

Intel started releasing design flows and doing tapeouts internally for 14nm a couple of years ago! It's a lot harder to produce high-yielding, physical 14nm ICs than to draw a 14nm transistor on an LCD screen.

If we haven't heard yet about TSMC's 16nm yield problems, it's simply because they haven't gotten that far yet. Give it another year or 2 and we'll hear plenty.




RE: Seriously?
By Belegost on 10/17/2013 1:00:16 PM , Rating: 3
As someone who has to work with TSMC as our fab: they had a lot of problems with yields at 28nm. The difference is that TSMC just keeps going, and their customers have to work around the failures.


By superstition on 10/16/2013 10:29:51 PM , Rating: 2
quote:
In Apple's word, it is 64 bit (who cares).


"Samsung's Upcoming Galaxy Smartphones to have 64-Bit Processors" -- DailyTech headline


By talonvor on 10/23/2013 7:50:55 AM , Rating: 2
It was only a matter of time; after all, 32-bit has a serious problem in the memory department. The only logical solution is the move to 64-bit. So, regardless of who did it first, every single phone maker on the planet was heading that way eventually. Apple just forced them all to make the move a bit early.


x86
By Yojimbo on 10/16/2013 11:25:24 PM , Rating: 2
How can you justify claiming that "ARM is beating x86" and that x86 "brings inherent architectural baggage to the table" just based on the fact that A7 is faster than Bay Trail (which I thought was a platform, and not a chip)? Note the performance improvements that A7 achieved over the stock ARM processors when it came out. Perhaps the Intel processor is similarly unoptimized as those ARM processors. How can we take one example of chips using each instruction set and, based on the performance comparison of that example, conclude that the one instruction set is superior to the other? First, note that Apple and Samsung have been continuously in this sector longer than Intel. More importantly, I don't see how we could dare to extrapolate more than saying that whatever the underlying architecture of A7 is, is superior to the Silvermont architecture underlying the Intel chip.




RE: x86
By Nagorak on 10/17/2013 5:22:00 PM , Rating: 2
It's also worth noting that Apple has a lot of money. Is it really a surprise they were able to produce a really good chip? They certainly weren't lacking the resources to do it. Qualcomm and Samsung's ARM chips don't stack up well to A7 either.

So really this just means that Apple has the fastest chip. But, at the same time, no one else's chip was going to get into Apple products anyway, and a slight performance difference is not going to be the deciding factor between someone going iOS vs Android vs Windows. At best it's one factor of many, and price is probably a more important consideration.


By philosofa on 10/16/2013 4:44:29 PM , Rating: 2
I do wonder whether process tech will be able to keep up; so far Intel are doing pretty well, but are the cracks showing a bit?

You may also want to look again at the caption of your last pic ;)




Glad to hear
By CaedenV on 10/16/2013 11:25:28 PM , Rating: 2
Glad to hear that Skylake is not delayed as that will likely be my next upgrade from my current Sandy Bridge setup.

But it is going to be an upgrade for the sake of things like DDR4, PCIe 4, and SATA Express (and hopefully onboard 10GbE?), which are going to prompt me to upgrade. I am sure the CPU will suck down less power and give a nice 20% performance boost, but honestly and truly there are very few workflows in my system where the CPU is anywhere near becoming the bottleneck. For gaming the bottleneck is my GPU (more specifically the 1GB of RAM on the GPU). For video editing the bottleneck is still generally the HDDs and SSDs. For web browsing the bottleneck is still the internet connection. For office work... well, I suppose I am the bottleneck there. But the point is that unless you are doing high-end 3D modeling, or massively parallel work, the CPU is very unlikely to be the limiting factor of the system, and that is not going to change in the next 2-3 years (unless next-gen games start supporting more than 4 cores but continue to not support HT).

I do however worry about Skymont (now Cannonlake?). That transition down to 10nm is simply going to be rough.




nice one
By abhishek0990 on 10/17/2013 4:43:58 AM , Rating: 1
Very useful information. Thank you for sharing it. Thanks




RISC vs CISC
By mjv.theory on 10/16/13, Rating: 0
Intel is now the underdog
By Shig on 10/16/13, Rating: -1
RE: Intel is now the underdog
By kingmotley on 10/16/2013 6:09:18 PM , Rating: 4
I wouldn't go so far as to say that x86 is losing, but more so that ARM is catching up pretty quickly. ARM still has a way to go before they can seriously make dents into the x86 market. For example, the demand for ARM based laptops is fairly negligible, but for a phone or tablet they are great. They even do a half respectable server, but that requires server software be portable enough to run on the ARM architecture, which a lot of it currently isn't.


RE: Intel is now the underdog
By bug77 on 10/16/2013 6:51:52 PM , Rating: 1
x86 has been "losing" for a while. Its complex instructions were the best solution in an age when computing power was more limited and compilers were limited in their ability to generate good code from simple instructions.
Times have changed. Compilers today can work wonders. The rise of mobile brought us power as a limiting factor. So far Intel has been mitigating those with an enormous library of existing software and a lead in manufacturing process. But that won't be sustainable forever.
x86 is already the stuff of legend. It's been with us for almost 40 years; it brought computing into our houses. In fact, I'm pretty sure ARM does (did?) development on x86 platforms.
I say let's thank everybody involved for pushing the envelope and let the best platform win.


RE: Intel is now the underdog
By CaedenV on 10/16/2013 10:50:10 PM , Rating: 1
On the contrary, I am seeing the exact opposite thing happening. As mobile devices become more and more useful there is more and more need for a return of complex instructions. This is inflating the ARM feature set, and it is not scaling well with power. Intel chips are already vastly more efficient (work/power), and they are getting their power usage down very quickly while ARM is rising in power usage without gaining much in efficiency.

I am not saying that Intel is not going to hit a wall, because once they get down to a sub-10nm die there is a very real wall that will be hit. But what I am saying is that ARM is going to hit that exact same wall, and they will not be nearly as prepared for it.

At the end of the day both x86 and ARM are losers. x86 cannot be cut down small enough (well... there is Quark), and ARM will never be efficient enough. Once the die-shrink wall is hit then something else will need to be done. Be it more efficient and flexible instructions, or moving from a binary to a ternary processor (we are already seeing it in storage), something drastic is going to have to change within the next 10 years and neither platform will survive the transition. Intel has the budget and R&D to make this 'something new' that may come... but Sammy and ARM do not.


RE: Intel is now the underdog
By purerice on 10/17/2013 12:25:16 AM , Rating: 3
The RISC vs CISC debate may never end and as you suggest, new technology may be required that leaves both in the dust.

RISC has several design (as opposed to performance) advantages over CISC, but for the longest time the market for RISC was limited to Apple, some consoles, and some embedded platforms. The lack of market hampered RISC more than its inherent advantages helped. With little funding available, IBM and ARM became the sole developers of RISC processors. In 1997 Motorola planned a 2 GHz CPU for launch in 3 years. Those chip designers walked over to Intel and helped create Netbu(r)st for Intel.

CISC developers largely ignored the smartphone space and that is what gave RISC the opportunity to finally flex its muscles.

My Merom-powered desktop was about 30x the power of the original iPhone. The new Haswell-powered version of my desktop is about 4x the power of the iPhone 5s. If (and only if) that level of relative improvement continues over the next 6 1/2 years, the iPhone of 2020 will be about 2x the speed of the typical Intel desktop. Until recently, Intel did not view RISC as a competitor to CISC and worked just hard enough to stay ahead of AMD. With RISC reemerging as a performance competitor to Intel's chips, perhaps Intel will redouble its development efforts.
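Spelling out that extrapolation (using the rough ratios above, not measured benchmark data):

```python
# The desktop/phone performance gap shrank ~7.5x in ~6.5 years.
gap_2007 = 30.0  # Merom desktop ~30x the original iPhone
gap_2013 = 4.0   # Haswell desktop ~4x the iPhone 5s
shrink = gap_2007 / gap_2013   # 7.5x shrink per ~6.5-year span
gap_2020 = gap_2013 / shrink   # ~0.53: desktop would be *behind*
print(f"2020 phone ~{1 / gap_2020:.1f}x the desktop")  # ~1.9x, i.e. about 2x
```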


RE: Intel is now the underdog
By inighthawki on 10/17/2013 10:12:09 PM , Rating: 2
You're forgetting that 1) for the past three generations, Intel has barely focused on improving performance at all (Haswell is almost identical to Ivy Bridge, and Ivy Bridge was only about 10% over Sandy Bridge), and 2) it's easier to have the appearance of "catching up" so quickly when you're using modern technology to produce a 5-year-old chip.


RE: Intel is now the underdog
By inighthawki on 10/16/2013 8:57:12 PM , Rating: 4
Uhuh... and the year of Linux is upon us as well, right?


RE: Intel is now the underdog
By Flunk on 10/17/2013 12:08:07 AM , Rating: 2
Yes, it's shipping on the majority of smartphones and tablets this year. It might not be Linux on the Desktop (although Chromebooks are that) but Android is making Linux more successful than ever before.


RE: Intel is now the underdog
By Samus on 10/16/2013 9:22:10 PM , Rating: 1
Shig, you can't underestimate Intel. They are larger than Samsung and TSMC combined.

Samsung has to sell 10 high-end ARM CPUs to equal the profit of ONE low-end Intel Xeon.


RE: Intel is now the underdog
By augiem on 10/16/2013 11:20:40 PM , Rating: 2
quote:
They are larger than Samsung and TSMC combined.


Fact Check!

Samsung: http://en.wikipedia.org/wiki/Samsung
Revenue: US$ 268.8 billion (FY 2012)[1]
Net income: US$ 26.2 billion(FY 2012)[1]
Total assets: US$ 470.2 billion (FY 2012)[1]
Total equity: US$ 209.5 billion (FY 2012)[1]
Employees: 425,000 (FY 2012)[1]

Intel: http://en.wikipedia.org/wiki/Intel
Revenue: US$ 53.34 billion (2012)[2]
Operating income: US$ 14.63 billion (2012)[2]
Net income: US$ 11.00 billion (2012)[2]
Total assets: US$ 84.35 billion (2012)[2]
Total equity: US$ 51.20 billion (2012)[2]
Employees: 104,700 (2012)[2]

Samsung is a LOT bigger than Intel; they just happen to be spread out over a vast array of businesses.


RE: Intel is now the underdog
By yvizel on 10/17/2013 9:59:50 AM , Rating: 2
I think he meant their fabs and production capabilities...?

It's obvious that Samsung is "bigger" as they are a consumer company (making TVs, notebooks, phones, tablets, kitchen equipment etc etc etc.)


RE: Intel is now the underdog
By Nagorak on 10/17/2013 5:35:25 PM , Rating: 2
I'm not sure that comparing Samsung to Intel was really on target to begin with. Samsung is a huge conglomerate. They are competing with LG not just in things like phones, but also washing machines and televisions. They're competing with Apple in phones and tablets. They're sort of competing with Intel on processors, yet some of their tablets use Intel processors (and going forward more probably will).

Samsung is a completely different beast than Intel. But when it comes to straight up processors and fabs, I'd say Intel has the advantage. Samsung has a lot of different product lines to support. That's why their net income is only about twice Intel's even though their revenue is five times as high.

Still, it's technically true that Samsung is larger than Intel.


RE: Intel is now the underdog
By ritualm on 10/17/2013 8:41:22 PM , Rating: 2
Intel doesn't make TVs, white goods (i.e. "smart" fridges), or computers, so it isn't even a valid comparison to Samsung.

You could have Samsung buy out both TSMC and Global Foundries, and still have less total fab capacity combined than Intel all by itself.


RE: Intel is now the underdog
By AntDX316 on 10/17/2013 7:22:19 AM , Rating: 2
it has nothing to do with the processor really

it's the OLED screen no one else has

the black levels of the S4 are so DEEP it's basically off

you cannot get black levels that dark on any other panel

dimming the white levels to make the black darker is cheating and doesn't look right on a scene with shifting brights

the white levels are so bright that at night, in pitch black, the Samsung startup logo burned my eyes, and I couldn't see any black backlight bleed at all

it's good they are funding their business, because OLED TVs and monitors at a low cost would be fantastic


RE: Intel is now the underdog
By inighthawki on 10/17/2013 10:14:17 PM , Rating: 2
quote:
you cannot get black levels that dark on any other panel
Well duh, it's an OLED panel. You can't get any darker than "off".


RE: Intel is now the underdog
By jeffbui on 10/18/2013 1:25:21 PM , Rating: 2
I'm so confused. How did OLEDs contributing to fantastic black levels of the S4 relate to any topic of these threads?


"People Don't Respect Confidentiality in This Industry" -- Sony Computer Entertainment of America President and CEO Jack Tretton














botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki