
Intel announces its “most energy-efficient Intel Core processor” to date

When it comes to the processors used in today’s computers (be they laptops, desktops, or servers), Intel remains the king. But as consumers increasingly move away from the desktop toward mobile devices, Intel wants to stay at the forefront of processor performance and efficiency.
With processors based on the ARM architecture clearly dominating the smartphone and tablet space, Intel is looking to push back hard, starting at the convertible-PC level and working downward. To show its commitment, Intel is introducing a new Core M processor based on the 14nm Broadwell architecture. Intel calls the Core M the “most energy-efficient Intel Core processor” to date, and says the chip will enable a broad range of thin, lightweight, and, more importantly, quiet mobile devices.

Intel's Llama Mountain reference design
Compared to the previous generation Core offerings, the Core M will have a 60 percent lower TDP, 20 to 40 percent better performance, and a 50 percent smaller package footprint.
At Computex, Intel demoed a 2-in-1 device with Core M, codenamed Llama Mountain, which pairs a 12.5” fanless tablet with a detachable keyboard. The tablet itself is just 7.2mm thin, and weighs 1.48 pounds. For comparison’s sake, the recently announced Surface Pro 3 features a 12” display, is 9.1mm thin, and weighs 1.76 pounds.

 Microsoft's Surface Pro 3 is 2.1mm thicker than the Intel reference design

One of the first products to use the new Core M processor is the ASUS Transformer Book T300 Chi which runs Windows 8.1. This convertible PC features a 12.5” IPS display (2560x1440), detachable keyboard, and integrated LTE connectivity.

ASUS Transformer Book T300 Chi
There’s no word yet on availability for the Transformer Book T300 Chi, or other devices that will use the Core M.

Sources: Intel, ASUS



RE: Not much to go on here.
By retrospooty on 6/3/2014 11:07:50 AM , Rating: 2
This is where Intel starts to matter. They are a full process node ahead of everyone else and will be going to 14nm when everyone else is struggling with 20nm. This and the next generations after it are going to make some really good x86 Windows tablets that will give ARM a serious run for the money.

RE: Not much to go on here.
By Argon18 on 6/3/14, Rating: 0
RE: Not much to go on here.
By retrospooty on 6/3/2014 11:22:16 AM , Rating: 3
There are 2 reasons for that.

1. Intel CPUs have been too hot and power hungry (until now).
2. Derp... Windows RT, Windows 8 - enough said. (Win 8.2 and 9 look to fix the "Derp".)

So... It's a different landscape. I know you don't want to see that, being a completely off-the-rails MS hater, but it's at least to a point where the problems and irritating things are gone. Like it or not, this CPU is a lot more powerful than anything ARM can do, and it's x86 compatible, which runs the software that runs the world.

RE: Not much to go on here.
By Mint on 6/3/2014 12:47:19 PM , Rating: 2
Intel and MS were always focused on the long term.

I'm pretty sure Microsoft's "derp" was to kick off Win8 app development. 200M users quasi-forced to use Win8 apps is a better draw for developers than 500M users mostly ignoring them by turning on the original start menu and forgetting about Metro altogether. Now they're converging on a unified OS core (even if UI differs a bit) to attract developers even more.

And Intel has long done everything it can to keep people buying $100-1000+ processors as opposed to embracing the $30 SoC paradigm. They knew they'd get fanless Core processors eventually. The MacBook Air had already shown Haswell achieving power-consumption parity with ARM in web browsing.

At 1.48 lbs, weight is no longer a meaningful issue, especially when you get a 12.5" screen. It's beginning to make less and less sense to buy and carry a premium tablet and laptop as opposed to a 2-in-1, especially when we're seeing dual OS systems (Android/Win8) for those who can't wait for Win8 apps to reach parity.

It's not a done deal yet, but in terms of markets Intel and MS actually care about ($300+ devices), I think they predicted well and made the right business moves.

RE: Not much to go on here.
By retrospooty on 6/3/2014 5:26:04 PM , Rating: 2
"It's not a done deal yet, but in terms of markets Intel and MS actually care about ($300+ devices), I think they predicted well and made the right business moves."

Maybe... Through it all they did both remain incredibly profitable, but I can't help thinking about the lost opportunities if they had both been up to the mobile task years earlier... or even if Intel hadn't sold off their ARM business a decade ago. I remember having a Tungsten T3 with an Intel StrongARM CPU back in the day, but they sold the whole ARM business unit off to Marvell (if I recall correctly). If Intel had kept that ARM business, we could potentially be on faster ARM chips now.

Meh, maybe it's a good thing. This whole mobile boom loosened both MS's and Intel's grip on the industry, so we all benefit from the added competition.

RE: Not much to go on here.
By Mint on 6/4/2014 7:27:41 AM , Rating: 2
I think it played out well for everyone.

As you mentioned, we got competition due to MS and Intel letting others win the low end computing market.

Meanwhile, Intel's failure to market Bay Trail (especially when running Android) as "good enough" kept Core processor demand mostly intact, and this at least slowed down the oft-predicted "end of the PC era". MS also furthered this goal by bringing multitouch to PCs.

Smartphones are a different story, though. Definitely lost opportunities there.

RE: Not much to go on here.
By inighthawki on 6/3/2014 12:15:43 PM , Rating: 2
Really? Because my impression has been that Intel has been making great strides in mobile. Sure there aren't a *lot* of phones yet, but there are quite a large number of Intel Bay Trail tablets.

RE: Not much to go on here.
By retrospooty on 6/3/2014 12:28:40 PM , Rating: 3
Yup, but this blows Bay Trail away. This is where you start to get Intel Core CPU performance in a tablet instead of a stripped-down Atom, so all the better.

RE: Not much to go on here.
By inighthawki on 6/3/2014 1:01:57 PM , Rating: 2
Yep! My point was more along the fact that Intel does have [actually quite a large] market penetration in mobile, but that is a good point to bring up too. In addition to now being competitive in power consumption, their devices are going to be an order of magnitude faster. I'm looking forward to seeing these devices in use.

RE: Not much to go on here.
By BRB29 on 6/3/2014 1:06:23 PM , Rating: 2
They just celebrated shipping ~40M units. That's not really penetration.

RE: Not much to go on here.
By retrospooty on 6/3/2014 1:11:03 PM , Rating: 2
There are a lot of fine lines, and it depends on how things are counted.

There are laptops, there are convertibles like the Surface 2 and Yoga 2, and there are tablets... and 40 million of anything is nothing to sneeze at. Like I said on the other post, today is different. We had power-hungry, hot-running Intel chips and Windows 8 and RT to deal with. Now the major issues with both of those are resolved.

RE: Not much to go on here.
By FITCamaro on 6/3/2014 2:13:27 PM , Rating: 2
But at a far higher price. Bay Trail tablets will still be around in the $200-400 range. These will likely be in much more expensive tablets, because the performance is much greater.

RE: Not much to go on here.
By retrospooty on 6/3/2014 2:28:03 PM , Rating: 2
Yup... But but just the same tablet with a better CPU, I would expect to see this in higher end tablets and convertibles.

That Asus convertible pictured above looks really sweet.

RE: Not much to go on here.
By retrospooty on 6/3/2014 2:28:51 PM , Rating: 2
derp... "but not" Not "but but" LOL

RE: Not much to go on here.
By bug77 on 6/3/2014 11:43:05 AM , Rating: 2
Unfortunately for Intel, that is not a good long-term strategy. I don't know the actual limit for the physical size of a transistor, but it's somewhere around 1-5nm (that's nanometers, not nautical miles). Unless manufacturers are going to go 10nm, 9.9nm, 9.85nm, 9.849nm, and so on, that race will end within a decade. Unless Intel manages to take the lead in whatever replaces silicon, they're going to have a problem.
I'm not saying they're doomed - they probably knew this before I did. It's just that pretty soon the market may be turned upside down again. Then again, when wasn't the tech market super-exciting to watch?

RE: Not much to go on here.
By Khato on 6/3/2014 12:02:03 PM , Rating: 2
Actually it still has the distinct possibility of working for the simple fact that others might drop out of the race. Especially if EUV or some other patterning technique doesn't come along since the costs of going below a certain point may be too much for the foundry model. Most projections have the cost per transistor at the foundries going up after 28nm, marginally at first, but if that trend continues it's not exactly sustainable.

RE: Not much to go on here.
By Khenglish on 6/3/2014 3:13:29 PM , Rating: 2
Unfortunately for smaller processes, manufacturing is a smaller problem than the physics of how a 10nm or smaller transistor can even work.

Here's the #1 problem:

Carrier mobility (mu) is linearly related to how much current a FET can push: if you have half the mobility, you have half the current. A dopant level of 10^18 atoms per cm^3 already causes a very large drop in carrier mobility and is considered high. Let's do the math on how many dopant atoms the body of a 10nm FET will have:

volume of transistor body: (10nm)^3 = 10^-24 m^3
dopant concentration: 10^18 atoms/cm^3 = 10^24 atoms/m^3

transistor body volume x dopant concentration:
10^-24 m^3 x 10^24 atoms/m^3 = 1 atom

Yeah... 1 atom in the entire transistor body. That is not enough to make defined junctions between the body and the source and drain. To make matters worse, FETs only conduct current in a thin channel near the gate, so if we dope a 10nm FET at 10^18 atoms/cm^3, the transistor is too small to have any dopant atoms in the channel at all. You could step up to a 10^19 atoms/cm^3 concentration, but then you've just cut your transistor conductivity roughly in half, and that is still a low dopant count, which will result in very high leakage.
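The arithmetic above is easy to check numerically; here's a minimal sketch, using only the figures quoted in the comment:

```python
# Sanity check of the dopant-count estimate above: a 10nm cube of silicon
# doped at 10^18 atoms/cm^3 (the "high" doping level from the comment).
side_m = 10e-9                        # 10 nm, in meters
body_volume_m3 = side_m ** 3          # (10 nm)^3 = 1e-24 m^3
dopant_per_cm3 = 1e18                 # atoms per cm^3
dopant_per_m3 = dopant_per_cm3 * 1e6  # 1e6 cm^3 per m^3 -> 1e24 atoms/m^3

atoms_in_body = body_volume_m3 * dopant_per_m3
print(atoms_in_body)  # on the order of a single atom in the whole body
```

So the entire body of the transistor contains about one dopant atom, which is the crux of the junction-definition problem described above.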

As for FINFETs, all the math is identical. They do not fix this problem.

Oh and people may argue with me that the #1 problem is actually wire resistance. That gets worse linearly with every die shrink too.

RE: Not much to go on here.
By mik123 on 6/3/2014 4:46:35 PM , Rating: 2
Actually, a 10nm process is considered feasible, and major players are already talking about 7nm.

I do think, however, that 7nm or 5nm will be the last Si-based process.

The good news is there's no shortage of alternative device technologies and materials (e.g. graphene is looking good).

And of course, once they stop scaling they will start stacking, so Moore's Law is not in danger any time soon.

RE: Not much to go on here.
By Khenglish on 6/3/2014 8:24:25 PM , Rating: 2
I hope they've figured out some workaround. Unfortunately, I'm worried that they drew up roadmaps for 7nm without yet making the device physics work.

As for 10nm and under, I completely believe that the parts can physically be made and be functional, but that the devices will be slower than 14nm devices.

As for graphene, yield is utterly impossible. You literally cannot have a single atom of variance; if you do, that location in your semiconductor becomes a permanent conductor or a permanent insulator. It is impractical for making a multi-billion-transistor processor.

What could work is taking a FET and pulling the gate oxide turning it into a lateral BJT. My college (RPI) has simulated 32nm lateral BJTs at 1.3THz, which is 5 times faster than a 22nm FinFET. People refuse to look at this though since they hear "BJT" and think "that's old we won't even look at it" despite it being a completely different design from the old vertical BJTs. Going the BJT route doesn't solve the scaling problem though, it just immediately offers a faster device.

Going BJT though makes keeping current down problematic. While a single transistor is power competitive with a FET, chains are not without care. When having a chain of logic, the early logic needs to be low current since the current is multiplied by over 100 (beta) at each transistor. This can be used to save power since early logic can now be low power with the final logic meeting the current requirement, but it's still a new concern.

RE: Not much to go on here.
By Khenglish on 6/3/2014 8:35:58 PM , Rating: 2
I forgot about 3D:

3D is fun stuff and offers huge performance improvements. You can vastly shorten your interconnect lengths and integrate memory on-chip with orders of magnitude more bandwidth than even a modern L1 cache provides.

There's one big problem with 3D, though, and that's heat density. If you take logic that has an area A and stack it in 3 layers on top of itself, your logic now has 3 times the heat density. A 22nm CPU without an overclock already has about a 30C temperature differential between the die and heatsink because of how small the high-power area of the cores is. Stack the logic on itself 3 times and you now have 90C just across the thermal interface. A big part of this is that for some reason we still use silicon dioxide, which has around 1/100th the thermal conductivity of copper, to physically protect the processor; stacking only makes the heat-density problem worse.
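The interface math in that paragraph is just linear scaling of heat flux with stacked layers; a minimal sketch (the 30C figure is the one quoted above):

```python
# Heat-density argument from the comment: the same power per layer over the
# same footprint means the flux through the die/heatsink interface -- and so
# the temperature drop across it -- scales linearly with stacked layers.
def interface_delta_t(single_layer_delta_c, layers):
    return single_layer_delta_c * layers

single_layer = 30.0  # ~30 C differential quoted for an unstacked 22nm die
print(interface_delta_t(single_layer, 3))  # 90.0 C for a 3-high stack
```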

Overall I think 3D is a good idea. There are ways to bond the wafers with very high yield. As with everything, though, there's always a drawback. This one should be fixable by ditching SiO2 as the processor's protective coating in favor of something more thermally conductive.

RE: Not much to go on here.
By mik123 on 6/4/2014 3:04:49 PM , Rating: 2
Regarding the heat problem in 3D, I'm wondering: why not just slow down the clock?

For example, what if you could get rid of DRAM entirely and put a couple of GB of SRAM on die (say, 20 layers on top of the logic)? That way, a program would load from the SSD straight into SRAM on the CPU. You'd still want a couple of levels of cache, but main-memory accesses would speed up dramatically.

To deal with the heat from those 20 layers of SRAM and the CPU, slow the clock to, say, 500 MHz. Sure, the CPU becomes slower, but main memory is now at least 10 times faster, and system design is simplified.

Also, a slower clock allows multiple layers of logic too, so a multilayer CPU can have more transistors: more cores, more execution units per core, a larger graphics accelerator, etc.

Finally, a slower clock makes it possible to build larger, more complex systems, because it's much easier to deal with signal-integrity issues.

If 500 MHz sounds like an awfully slow speed, just remember that your brain works pretty well at just 100 Hz.
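As a rough illustration of why a slower clock buys thermal headroom, dynamic switching power follows P = C·V²·f. In the sketch below the capacitance and voltage values are made-up placeholders (only the frequency ratio matters here), and the 3 GHz baseline is a notional design point, not a figure from the comment:

```python
# Illustrative only: dynamic switching power P = C * V^2 * f.
# C and V are arbitrary placeholders; the point is the linear dependence
# on f (in practice a lower clock also permits a lower V, which helps
# quadratically on top of this).
def dynamic_power(cap_f, volts, freq_hz):
    return cap_f * volts ** 2 * freq_hz

p_fast = dynamic_power(1e-9, 1.0, 3.0e9)  # notional 3 GHz design point
p_slow = dynamic_power(1e-9, 1.0, 0.5e9)  # same logic at 500 MHz
print(p_fast / p_slow)  # frequency alone gives a ~6x dynamic-power reduction
```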

RE: Not much to go on here.
By inighthawki on 6/3/2014 12:18:50 PM , Rating: 2
Being a step ahead in their fabrication process doesn't necessarily mean their advantage is limited to process size. I'm sure Intel is well under way developing transistors from different materials with better conductivity properties. I recall reading a while back that they had found a material that was almost 10x more power efficient, and faster as well, but it was just too expensive to produce.

RE: Not much to go on here.
By retrospooty on 6/3/2014 1:00:17 PM , Rating: 2
Yes, but we aren't speaking in generalities; this is specific. This architecture starts with Haswell, so hold Haswell up as the benchmark: this is a die shrink of that plus some other microarchitecture improvements.

"It seems as though my state-funded math degree has failed me. Let the lashings commence." -- DailyTech Editor-in-Chief Kristopher Kubicki

Copyright 2016 DailyTech LLC.