
Intel has big ambitions for its low power Oak Trail Atom-based platform, which it says will trash ARM processors in Android performance.  (Source: Intel)

Sadly for Intel, quite the opposite proved true in early benchmarks: ARM badly beat an Oak Trail prototype in both app performance and heat.

ASUSTek's Eee Transformer Pad, powered by NVIDIA's dual-core Tegra 2 ARM CPU proved the most powerful tablet in most benchmarks.  (Source: Android In)
The only benchmark in which Intel's new platform performed admirably was JavaScript performance.

Intel Corp. (INTC) was quick to brag about dramatic process improvements that it says will propel its Atom chips to new levels of performance during its keynote at Computex 2011 in Taiwan.  The company says it will leverage the die-shrink lead of its Core brand CPUs to push yearly die shrinks for Atom over the next couple of years, hitting the 14 nm node a couple of years before ARM manufacturers.  And it says it will deploy its new tri-gate transistors at Atom's 22 nm node in 2013.

I. Intel Oak Trail Gets Tested

By the looks of early testing, Intel desperately needs all the help it can get.  A dual-core Z6xx-series Atom chip running on the company's new Oak Trail chipset was shown off in a prototype design by Taiwan's Compal Electronics.

The prototype packed two CPU cores running at 1.5 GHz.  It also packed an Intel GMA 600 GPU, which is essentially a rebranded PowerVR SGX535.

The new tablet was running Google Inc.'s (GOOG) popular Android 3.0 "Honeycomb" operating system, the second most used tablet OS in the world behind Apple, Inc.'s (AAPL) iOS (found on the iPad and iPad 2).

In a limited set of tests, a Dutch hardware site benchmarked [translated] the new platform and compared it to rivals currently on the market with similarly clocked dual-core CPUs.  The picture wasn't pretty for Intel.

II. Slow

In the Caffeine 3 benchmark, the Oak Trail prototype scored a dismal 1,562 points, well behind the Asus Eee Transformer Pad (Tegra 2 based; 6,246 points) and the Samsung Galaxy Tab 10.1v (Hummingbird Gen. 2; 7,194 points).  This was significant, as Caffeine measures Java performance -- the language most Android apps are written in.  As such, the benchmark provides a key indicator of how fast apps will run on the tablet -- in Intel's case, "very slow".

That result was confirmed by the Linpack benchmark, which gave a result of 9.4 MFLOPS, versus 36 MFLOPS for the Tegra 2.  Similarly, the Quadrant benchmark gave a score of 1,978, just below the 2,000 to 2,500 range that Android tablets regularly score.  Some Android phones even score 2,000+.
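For context, a Linpack-style MFLOPS figure boils down to counting floating-point operations completed per unit time. The sketch below is illustrative only (the function name and loop are made up for this article; the real Android Linpack app instead times the solution of a dense linear system):

```python
import time

def estimate_mflops(iterations=200_000):
    """Hypothetical micro-benchmark: time a loop of multiply-adds and
    report millions of floating-point operations per second.
    The real Linpack benchmark times an LU factorization and counts
    roughly 2/3*n^3 + 2*n^2 FLOPs for an n x n system."""
    x, acc = 1.000001, 0.0
    start = time.perf_counter()
    for _ in range(iterations):
        acc += x * x  # one multiply + one add = 2 FLOPs
    elapsed = time.perf_counter() - start
    return (2 * iterations) / elapsed / 1e6

print(f"~{estimate_mflops():.1f} MFLOPS (pure-Python overhead dominates)")
```

A Java benchmark like Caffeine works on the same principle, which is part of why interpreter and JIT quality matter as much as raw hardware speed in these scores.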

While these numbers aren't necessarily a bad thing for all apps (some of which are less demanding), it may mean that on Intel-based Android tablets you'll have to forgo highly demanding apps like the early crop of 3D shooter titles.

The Oak Trail tablet did show some promise, posting the best score (1500 ms) in the SunSpider benchmark, a full 376 ms faster than the fastest ARM-based Android tablet, the Asus Eee Transformer Pad.  In other words, while Intel's platform may come up short in apps, it looks like it will handle the web pretty well.

III. Hot

Unfortunately, two critical performance measures -- Flash performance and battery life -- were not tested.

The site did evaluate Oak Trail's temperature performance, writing [translated]:

The settings menu of the x86 port also showed how hot the Intel CPU in the tablet was running. In this model it ranged between 60 and 65 degrees [Celsius], and that was quite noticeable. The outside of the tablet felt warm -- much warmer than previous Honeycomb tablets we have owned.

Unfortunately the site did not produce any quantitative numbers to back its claims about case temperature.  However, if the CPU is truly reaching 60-65 °C (140-149 °F), that's a major issue as, at that temperature, heat conduction could make holding the case very uncomfortable (particularly given the tight casing in modern ultra-slender tablets).
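The Fahrenheit figures follow directly from the standard conversion, °F = °C × 9/5 + 32:

```python
def celsius_to_fahrenheit(c):
    """Standard Celsius-to-Fahrenheit conversion."""
    return c * 9 / 5 + 32

# The 60-65 °C CPU readings reported by the site:
print(celsius_to_fahrenheit(60))  # 140.0
print(celsius_to_fahrenheit(65))  # 149.0
```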

IV. Hope for Intel?

There's hope on both the performance and temperature front for Intel.  It's thought that a major part of the gap in app performance may be due to optimizations in Android for the ARM architecture.  If Intel pushes hard enough, it may be able to get similar optimizations for x86 worked in.

The temperature is intimately tied to usage and clock speed, so there's no real way of escaping it during times of heavy use.  However, Intel could always solve this problem by putting a small fan in its tablets.  While that might produce a "fatter", less sleek tablet, it would at least spare the user from discomfort.

And in the long term, the die shrink in Q4 2011 to 32 nm should reduce chip temperatures.

The early numbers do indicate, though, that Oak Trail and Atom-powered Android is a work in progress -- a picture that stands in sharp contrast to Intel's promise that Oak Trail would trash ARM designs in performance.  Once we get numbers on battery life we should be able to see exactly how far behind the platform is.

The Tegra 2 is a dual-core ARM processor made by U.S.-based NVIDIA Corp. (NVDA).  In typical builds, the processors are overclocked to around 1.5 GHz.

Comments

RE: CPU != Tablet temperature
By hyvonen on 6/3/2011 3:01:27 PM , Rating: 1
This depends on too many things (like the size of the chip, or even the size of the local hot spot on the chip from which the measurement was taken).

Making a conclusion that because the CPU measurement showed 60C the tablet itself must be hot is just bad physics.

RE: CPU != Tablet temperature
By Samus on 6/3/2011 3:53:52 PM , Rating: 2
I know this is prototype hardware and early silicon... but OUCH.

Better yields will allow for higher clockspeed and lower voltages, slightly increasing performance and decreasing heat output, but realistically, x86 can not compete on efficiency.

RE: CPU != Tablet temperature
By encia on 6/3/2011 7:15:12 PM , Rating: 3
X86 can compete on efficiency i.e. AMD Z-01 APU.

RE: CPU != Tablet temperature
By Samus on 6/3/2011 7:56:28 PM , Rating: 2
I don't completely disagree. I love my HP DM1z (aside from the design flaws) but RISC is inherently superior to x86 in virtually every way, simply because it is modern by allowing software to completely reprogram how hardware compiles data. x86 is a fixed instruction set with various programmable extensions that try to make it modern.

At the end of the day, x86 is three decades old, and RISC will never show its age as it is allowed to evolve around hardware, not revolve around hardware.

ARM is the future. x86 is the past. The only reason we still rely on x86 so much is because of Intel and Microsoft, and to some extent, AMD. But Microsoft is changing the game with Windows 8. They tried to break their x86 roots with Windows NT nearly TWO decades ago, but the time wasn't right. Now it is.

RE: CPU != Tablet temperature
By phantom505 on 6/3/2011 8:44:47 PM , Rating: 2
The return of the Itanic?

RE: CPU != Tablet temperature
By SPOOFE on 6/4/2011 1:24:21 AM , Rating: 2
Not all RISC instructions are created equal. RISC refers more to one class or category of architecture, whereas x86 is a specific instruction set of a CISC architecture (and a lot of Intel's x86 extensions are themselves like RISC sub-processors).

RE: CPU != Tablet temperature
By FauxNews on 6/3/2011 9:23:14 PM , Rating: 2
RISC is inherently superior to x86 in virtually every way, simply because it is modern by allowing software to completely reprogram how hardware compiles data. x86 is a fixed instruction set with various programmable extensions that try to make it modern.

You have absolutely no idea what you're talking about.

x86 has turned its disadvantages into massive advantages, which has allowed it to prevail over most other RISC architectures despite their "superiority".

For example, while people were bragging about PowerPC's 32 registers and how x86 was inferior with its 8 registers, x86 turned around and came out with CPUs with hundreds of internal registers.
Suddenly it took a major disadvantage and turned it into a major advantage.

Everyone that has claimed x86 was "dead" and "inferior" has inevitably ended up eating crow as it quickly eclipsed all of its competitors.

RE: CPU != Tablet temperature
By Samus on 6/4/2011 2:58:28 AM , Rating: 3
Interesting...ranting about integer registers makes me think you have no idea what you're talking about.

Unless you're an electrical engineer like myself, I'm wasting my time going into detail because you'd have no idea what I'm talking about, which is why I was as blunt as possible with my explanation of RISC and x86 instruction sets. You pretty much agreed with my comment indirectly when you stated x86 has evolved around extensions that are RISC in nature.

If you can't beat 'em, join 'em? Maybe that's why x86 processors have had 11 extensions, from additional SIMD registers to separate floating-point instructions (MMX forward) to 64-bit memory addressing. It's also worth mentioning x86 CPUs didn't even have a dedicated math co-processor until the 386. Motorola and Texas Instruments had integrated math coprocessors in their chips years before Intel, even in their consumer products.

Intel has superior manufacturing processes and that is the ONLY thing that kept them in the game for so long. If ARM had the support and manufacturing ability, we'd all be running CELL-style architecture in our desktops and Intel would be in the tenths of percentage in performance. Now that Microsoft isn't going to carry Intel's deadbeat red headed stepchild into the future, the superior architecture has a chance.

Keeping x86 alive is like refusing to replace your 1979 car. You can upgrade it all you want, but in the end, all you've done is molested a car that is still 1979 technology at its foundation. It cannot be made better than a modern car using modern technology, but there will always be the old dogs that hold onto their old crap because they refuse to change.

RE: CPU != Tablet temperature
By k20boy on 6/4/2011 1:23:24 PM , Rating: 2
Glad you have to qualify your title or educational background. Being an EE doesn't mean you are a specialist in computer architecture, nor does it mean that just because you took a computer architecture or microprocessor design course n years ago, you work in the industry and understand how the game works in practice. So, unless you are a CPU design engineer, I am skeptical of what you have to say. Of course, we all learn the textbook beauty of RISC design. Intel has proven that x86 has extreme legs, however. This extends beyond its extreme manufacturing prowess and I would argue that extensions to the x86 instruction set and some of the particular implementations on latest Intel microprocessor designs have shown that Intel can make incredible design innovations despite their inherent CISC architecture. I have heard numbers in low single digit percentages about the hit that Intel takes on x86 decode logic. With this information, it seems with our billion-plus transistor CPUs, the CISC vs. RISC debate would be over. The particular implementation is much more important than the particular instruction set. If you are arguing the scalability of the x86 instruction set, you are dead-wrong: just look at the server market (where power is a concern) and how most of the high-end RISC machines cannot compete from a pure performance perspective or even performance/watt. Just in case you needed qualification of my credentials: recent EE/Phys graduate and pursuing MSEE. Have a good day.

RE: CPU != Tablet temperature
By Samus on 6/4/2011 1:59:57 PM , Rating: 2
Yet, the worlds top 10 super computers all use RISC...

Yea, x86 is just a killer server chip.

Listen, the only reason people use x86 is because they are forced to. If you had a version of Windows compiled and optimized for RISC, much like the current version of Windows is compiled and optimized for x86, I can guarantee at every performance/watt level the RISC version would be superior in EVERYTHING but encoding/decoding, as Intel's branch prediction units are far superior to everyone else's, even AMD's. This has nothing to do with x86; it has to do with Intel's engineering and R&D budget.

I can't believe you are actually disagreeing RISC is superior to CISC. It boggles my mind.

RE: CPU != Tablet temperature
By k20boy on 6/4/2011 3:08:59 PM , Rating: 2
You are exactly right. Intel's R&D has made the RISC vs. CISC debate extinct. Their design decisions, extensions to the x86 instruction set and superior process node have more than made up for any inherent deficiencies in the CISC model.

You said:
Yet, the worlds top 10 super computers all use RISC...

This may be true of single monolithic systems but that is not the way supercomputers are built today. Most use some sort of clustering. Also, I said server, not supercomputer; there is a large difference. Just look at any of the articles on AnandTech looking at server performance and you will see that x86 is king. Also, if I was talking about clusters or supercomputers I would point you to the Top 500 list of supercomputers running the High Performance Linpack: notice how most of the systems use x86 CPUs and usually use GPUs as well.

Yes, THEORETICALLY, RISC is superior to CISC. Intel, however, has made this theoretical argument unimportant in practical implementations. Obviously, if one could design from the ground up and not worry about legacy software support, RISC would be the way to go (actually probably something like EPIC would be even better) and Intel would still be able to make further inroads than they have today. This is just not the way the world works and Intel has designed itself out of its problem.

RE: CPU != Tablet temperature
By Targon on 6/4/2011 4:48:05 PM , Rating: 2
It isn't just Intel; the real key is in the overall system architecture, not just CPU design. As system complexity increases, the value of CISC increases as well, while code at a very low level will favor RISC. Think about that for a moment. Yes, there is an increased need for code optimizations in the compilers with CISC, but when a single instruction will do EVERYTHING you need and behind the scenes is broken down into very neat RISC-like micro-ops, that eliminates much of the debate about what is better.

While RISC does have the POTENTIAL to be faster, the increased code design effort generally will mean you never realize that potential.

RE: CPU != Tablet temperature
By harshbarj on 6/4/2011 2:44:26 PM , Rating: 3
It's also worth mentioning x86 cpu's didn't even have a dedicated math co-processor until the 386.

Not true at all. I consider myself an expert on vintage Intel CPU history and that statement is flat out incorrect. Intel has had dedicated math co-processors from the very first x86 CPU. Even the IBM PC 5150 (introduced in 1981) had both an 8088 CPU and an 8087 math co-processor slot.

Now if you were talking about an 'integrated' co-processor, you're still incorrect. The first x86 CPU from Intel to integrate the math co-processor was the 486DX line (initially just called the 486; the DX was added with the introduction of the 486SX to differentiate between the two products). Intel later produced the 486SX, which lacked a math co-processor but was otherwise identical to the DX chip. ALL 386 processors had a separate co-processor. The 386SX was a 32-bit internal and 16-bit external chip (limiting addressing to 16 MB) while the 386DX, 486SX, and 486DX were all fully 32-bit.

Lastly, I would NOT want to run an ARM processor on a desktop. While okay for cellphones and tablets, they are just too slow for a desktop. Just try to encode a lengthy video on an ARM CPU or render a complex 3D animation. It can be done, if you have some time to kill.

RE: CPU != Tablet temperature
By SPOOFE on 6/4/2011 3:13:13 PM , Rating: 2
It's also worth mentioning x86 cpu's didn't even have a dedicated math co-processor until the 386

Only if your argument is "at one time, RISC had a superiority over CISC", but that's not your argument. Your argument is present tense. The 386 is nowhere near "present", and in CPU terms is millions of years old. You might as well claim humans are inferior to fish because at one time humans didn't exist.

Intel has superior manufacturing processes and that is the ONLY thing that kept them in the game for so long

That's why AMD disappeared in the 90s, right? Right? Oh wait...

Go back to electrical engineering.

RE: CPU != Tablet temperature
By dotpoz on 6/6/2011 4:14:23 AM , Rating: 2
I agree. We are forced to keep an obsolete architecture with VARIABLE INSTRUCTION LENGTH just for compatibility reasons.

RE: CPU != Tablet temperature
By encia on 6/3/2011 10:35:37 PM , Rating: 2
My ACER Iconia W500 tablet scores 1010.6 ms in SunSpider 0.9.1, i.e. beating both the Oak Trail tablet (1500 ms) and the ASUS Eee Transformer Pad (1876 ms).

RE: CPU != Tablet temperature
By Alexvrb on 6/4/2011 11:30:18 PM , Rating: 2
The Bobcat cores are quick, and the GPU clinches the deal. The upcoming Z series looks to improve it even further.

RE: CPU != Tablet temperature
By encia on 6/4/2011 1:14:01 AM , Rating: 4

Notice how small AMD's x86 decoders are, i.e. 1 to 2 percent of the die size.

Modern x86 CPUs have assimilated two key RISC principles, i.e. translating variable-length instructions into fixed-length micro-ops, and single-cycle instruction throughput.

RE: CPU != Tablet temperature
By Wolfpup on 6/5/2011 4:19:01 AM , Rating: 1
"RISC" and "CISC" don't mean much anymore, and haven't for ages. So called "RISC" chips often have larger more complex instruction sets than older CISC chips. And most x86 chips haven't actually been CISC since the Pentium Pro.

simply because it is modern by allowing software to completely reprogram how hardware compiles data. x86 is a fixed instruction set with various programmable extensions that try to make it modern.

Doesn't even mean anything. They both have "fixed instruction sets".

The only thing I'm interested in here is why is an equivalently clocked first gen Atom being outperformed by a Cortex A9? I know the first gen Atom is in order, but still, wasn't it supposed to outperform Cortex A9 severely, even aside from clock speed advantages?

When power doesn't matter, everyone's picking first gen Atom over ARM...which would back up that idea.

Hence I'm wondering if there's something else going on here -- like an early software build that's not optimized well for Atom, or an extra layer of emulation, or something.


Copyright 2016 DailyTech LLC.