

Intel has big ambitions for its low-power Oak Trail Atom-based platform, which it says will trash ARM processors in Android performance.  (Source: Intel)

Sadly for Intel, quite the opposite proved true in early benchmarks: ARM chips badly beat an Oak Trail prototype in both app performance and heat.  (Source: Tweakers.net)

ASUSTek's Eee Pad Transformer, powered by NVIDIA's dual-core Tegra 2 ARM CPU, proved the most powerful tablet in most benchmarks.  (Source: Android In)
The only benchmark in which Intel's new platform performed admirably was JavaScript performance.

Intel Corp. (INTC) was quick to brag about dramatic process improvements that it says will propel its Atom chips to new levels of performance during its keynote at Computex 2011 in Taiwan.  The company says it will leverage the die shrink lead it enjoys with its Core brand CPUs to push yearly die shrinks for Atom over the next few years, hitting the 14 nm node a couple of years before ARM manufacturers.  And it says it will deploy its new tri-gate transistors at the Atom's 22 nm node in 2013.

I. Intel Oak Trail Gets Tested

By the looks of early testing, Intel desperately needs all the help it can get.  A dual-core Z6xx series Atom chip running on the company's new Oak Trail chipset was shown off in a prototype design by Taiwan's Compal Electronics.

The prototype's chip packed two CPU cores running at 1.5 GHz, paired with an Intel GMA 600 GPU, which is essentially a rebranded PowerVR SGX535.

The new tablet was running Google Inc.'s (GOOG) popular Android 3.0 "Honeycomb" operating system, the second most used tablet OS in the world behind Apple, Inc.'s (AAPL) iOS (found on the iPad and iPad 2).

In a limited set of tests, Tweakers.net, a Dutch hardware site, benchmarked [translated] the new platform and compared it to rivals currently on the market with similarly clocked dual-core CPUs.  The picture wasn't pretty for Intel.

II. Slow

In the CaffeineMark 3 benchmark, the Oak Trail prototype scored a dismal 1,562 points, well behind the Asus Eee Pad Transformer (Tegra 2-based; 6,246 points) and the Samsung Galaxy Tab 10.1v (Hummingbird Gen. 2; 7,194 points).  This is significant, as CaffeineMark measures Java performance -- the language most Android apps are written in.  As such, the benchmark provides a key indicator of how fast apps will run on the tablet -- in Intel's case, "very slow".
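
To give a rough sense of what that kind of Java benchmark exercises, here is a minimal sketch of a CaffeineMark-style micro-benchmark: a tight integer workload timed and converted into a runs-per-second score.  The class name, workload, and scoring below are simplified, hypothetical stand-ins, not the actual benchmark's code.

// Minimal sketch of a CaffeineMark-style Java micro-benchmark.
// Hypothetical illustration only: the real suite runs several sub-tests
// (loops, sieve, string handling, method calls) and weights their scores.
public class TinyJavaBench {
    // Tight integer/array workload, similar in spirit to a "sieve" sub-test.
    static long sieveCount(int limit) {
        boolean[] composite = new boolean[limit + 1];
        long primes = 0;
        for (int i = 2; i <= limit; i++) {
            if (!composite[i]) {
                primes++;
                for (int j = 2 * i; j <= limit; j += i) {
                    composite[j] = true;
                }
            }
        }
        return primes;
    }

    public static void main(String[] args) {
        final int limit = 1_000_000;
        final int runs = 20;
        long checksum = 0;
        long start = System.nanoTime();
        for (int r = 0; r < runs; r++) {
            checksum += sieveCount(limit);  // keep the JIT from eliminating the work
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        // Higher score = more runs per second, analogous to CaffeineMark's scoring.
        System.out.printf("checksum=%d  score=%.1f runs/sec%n", checksum, runs / seconds);
    }
}

If the x86 port's Dalvik VM ships with fewer JIT optimizations than its ARM counterpart -- one plausible explanation for the gap (see Section IV) -- tight loops like this are exactly where the deficit would show up.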

That result was confirmed by the Linpack benchmark, which gave a result of just 9.4 MFLOPS, versus 36 MFLOPS for the Tegra 2.  Similarly, the Quadrant benchmark gave a score of 1,978, just below the 2,000 to 2,500 range that Android tablets regularly score.  Some Android phones even score 2,000+.
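
For context, Linpack-style benchmarks report MFLOPS by dividing the conventional floating-point operation count for solving an n-by-n dense linear system by the measured solve time t in seconds.  This is the standard Linpack accounting, not a figure taken from the review:

\[ \text{MFLOPS} = \frac{\tfrac{2}{3}n^{3} + 2n^{2}}{10^{6}\,t} \]

By that yardstick, the Tegra 2's 36 MFLOPS works out to roughly 3.8 times the floating-point throughput of the Oak Trail prototype's 9.4 MFLOPS on the same workload.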

While these numbers won't matter much for less demanding apps, they may mean that on Intel-based Android tablets you'll have to forgo highly demanding apps like the early crop of 3D shooter titles.

The Oak Trail tablet did show some promise, posting the best (lowest) time of 1,500 ms in the SunSpider benchmark, a full 376 ms faster than the quickest ARM-based Android tablet, the Asus Eee Pad Transformer.  In other words, while Intel's platform may come up short in apps, it looks like it will handle web browsing pretty well.

III. Hot

Unfortunately, two critical performance measures -- Flash performance and battery life -- were not tested.

The site did evaluate Oak Trail's temperature performance, writing [translated]:

The settings menu of the x86 port also showed how hot the Intel CPU in the tablet was running. In this model it ranged between 60 and 65 degrees [Celsius], and that was quite noticeable. The tablet felt warm on the outside, much warmer than previous Honeycomb tablets we have owned.

Unfortunately, the site did not produce any quantitative measurements of the case temperature to back that claim.  However, if the CPU is truly reaching 60-65 °C (140-149 °F), that's a major issue, as at those temperatures heat conduction could make holding the case very uncomfortable (particularly given the tight casing of modern ultra-slender tablets).
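
For reference, those Fahrenheit figures follow from the standard Celsius-to-Fahrenheit conversion:

\[ T_{F} = \tfrac{9}{5}\,T_{C} + 32, \qquad 60\ ^{\circ}\text{C} \rightarrow 140\ ^{\circ}\text{F}, \quad 65\ ^{\circ}\text{C} \rightarrow 149\ ^{\circ}\text{F} \]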

IV. Hope for Intel?

There's hope on both the performance and temperature front for Intel.  It's thought that a major part of the gap in app performance may be due to optimizations in Android for the ARM architecture.  If Intel pushes hard enough, it may be able to get similar optimizations for x86 worked in.

Temperature is intimately tied to usage and clock speed, so there's no real way of escaping it during periods of heavy use.  However, Intel could always address the problem by putting a small fan in its tablets.  While that would make for a thicker, less sleek tablet, it would at least spare the user from discomfort.

And in the long term, the die shrink in Q4 2011 to 32 nm should reduce chip temperatures.

The early numbers do indicate, though, that Atom-powered Android on Oak Trail is a work in progress -- a picture that stands in sharp contrast to Intel's promise that Oak Trail would trash ARM designs in performance.  Once we get numbers on battery life, we should be able to see exactly how far behind the platform is.

Notes:
The Tegra 2 is a dual-core ARM processor from U.S.-based NVIDIA Corp. (NVDA).  The processors are overclocked to around 1.5 GHz in typical builds.



Comments



RE: CPU != Tablet temperature
By Samus on 6/4/2011 2:58:28 AM , Rating: 3
Interesting...ranting about integer registers makes me think you have no idea what you're talking about.

Unless you're an electrical engineer like myself, going into detail would be a waste of my time because you'd have no idea what I'm talking about, which is why I was as blunt as possible with my explanation of RISC and x86 instruction sets. You pretty much agreed with my comment indirectly when you stated that x86 has evolved around extensions that are RISC in nature.

If you can't beat 'em, join 'em? Maybe that's why x86 processors have had 11 extensions, from additional SIMD registers to separate floating point instructions (MMX forward) to 64-bit memory addressing. It's also worth mentioning x86 CPUs didn't even have a dedicated math co-processor until the 386. Motorola and Texas Instruments had integrated math coprocessors in their chips years before Intel, even in their consumer products.

Intel has superior manufacturing processes and that is the ONLY thing that kept them in the game for so long. If ARM had the support and manufacturing ability, we'd all be running CELL-style architecture in our desktops and Intel would be within tenths of a percent in performance. Now that Microsoft isn't going to carry Intel's deadbeat red-headed stepchild into the future, the superior architecture has a chance.

Keeping x86 alive is like refusing to replace your 1979 Mustang... you can upgrade it all you want, but in the end, all you've done is molest a car that is still 1979 technology at its foundation. It cannot be made better than a modern car using modern technology, but there will always be the old dogs who hold onto their old crap because they refuse to change.


RE: CPU != Tablet temperature
By k20boy on 6/4/2011 1:23:24 PM , Rating: 2
Glad you felt the need to qualify your title or educational background. Being an EE doesn't mean you are a specialist in computer architecture, nor does taking a computer architecture or microprocessor design course n years ago mean you work in the industry and understand how the game works in practice. So, unless you are a CPU design engineer, I am skeptical of what you have to say. Of course, we all learn the textbook beauty of RISC design. Intel has proven that x86 has extreme legs, however. This extends beyond its extreme manufacturing prowess, and I would argue that extensions to the x86 instruction set and some of the particular implementations in the latest Intel microprocessor designs show that Intel can make incredible design innovations despite their inherent CISC architecture. I have heard numbers in the low single-digit percentages for the hit that Intel takes on x86 decode logic. Given that, it seems that with our billion-plus transistor CPUs, the CISC vs. RISC debate should be over. The particular implementation is much more important than the particular instruction set. If you are questioning the scalability of the x86 instruction set, you are dead wrong: just look at the server market (where power is a concern) and how most of the high-end RISC machines cannot compete from a pure performance perspective or even performance/watt. Just in case you needed qualification of my credentials: recent EE/Phys graduate pursuing an MSEE. Have a good day.


RE: CPU != Tablet temperature
By Samus on 6/4/2011 1:59:57 PM , Rating: 2
Yet, the world's top 10 supercomputers all use RISC...

Yea, x86 is just a killer server chip.

Listen, the only reason people use x86 is because they are forced to. If you had a version of Windows compiled and optimized for RISC, much like the current version of Windows is compiled and optimized for x86, I can guarantee that at every performance/watt level the RISC version would be superior in EVERYTHING but encoding/decoding, as Intel's branch prediction units are far superior to everyone else's, even AMD's. This has nothing to do with x86; it has to do with Intel's engineering and R&D budget.

I can't believe you are actually disagreeing that RISC is superior to CISC. It boggles my mind.


RE: CPU != Tablet temperature
By k20boy on 6/4/2011 3:08:59 PM , Rating: 2
You are exactly right. Intel's R&D has made the RISC vs. CISC debate extinct. Their design decisions, extensions to the x86 instruction set and superior process node have more than made up for any inherent deficiencies in the CISC model.

You said:
quote:
Yet, the world's top 10 supercomputers all use RISC...


This may be true of single monolithic systems, but that is not the way supercomputers are built today; most use some sort of clustering. Also, I said server, not supercomputer; there is a large difference. Just look at any of the articles on AnandTech examining server performance and you will see that x86 is king. Also, if I were talking about clusters or supercomputers, I would point you to the Top 500 list of supercomputers running the High Performance Linpack: http://www.top500.org/lists/2010/11 Notice how most of the systems use x86 CPUs and usually use GPUs as well.

Yes, THEORETICALLY, RISC is superior to CISC. Intel, however, has made this theoretical argument unimportant in practical implementations. Obviously, if one could design from the ground up and not worry about legacy software support, RISC would be the way to go (actually probably something like EPIC would be even better) and Intel would still be able to make further inroads than they have today. This is just not the way the world works and Intel has designed itself out of its problem.


RE: CPU != Tablet temperature
By Targon on 6/4/2011 4:48:05 PM , Rating: 2
It isn't just Intel; the real key is in the overall system architecture, not just CPU design. As system complexity increases, the value of CISC increases as well, while code at a very low level will favor RISC. Think about that for a moment. Yes, there is an increased need for code optimizations in the compilers with CISC, but when a single instruction will do EVERYTHING you need and is broken down behind the scenes into very neat RISC-like micro-ops, that eliminates much of the debate about what is better.

While RISC does have the POTENTIAL to be faster, the increased code design effort generally will mean you never realize that potential.


RE: CPU != Tablet temperature
By harshbarj on 6/4/2011 2:44:26 PM , Rating: 3
quote:
It's also worth mentioning x86 CPUs didn't even have a dedicated math co-processor until the 386.


Not true at all. I consider myself an expert on vintage Intel CPU history, and that statement is flat-out incorrect. Intel has had dedicated math co-processors from the very first x86 CPU. Even the IBM PC 5150 (introduced in 1981) had both an 8088 CPU and an 8087 math co-processor slot.

Now, if you were talking about an 'integrated' co-processor, you're still incorrect. The first x86 CPU from Intel to integrate the math co-processor was the 486DX line (initially just called the 486; the DX was added with the introduction of the 486SX to differentiate between the two products). Intel later produced the 486SX, which lacked a math co-processor but was otherwise identical to the DX chip. ALL 386 processors had a separate co-processor. The 386SX was a 32-bit internal and 16-bit external chip (limiting addressing to 16 MB), while the 386DX, 486SX, and 486DX were all fully 32-bit.

Lastly, I would NOT want to run an ARM processor on a desktop. While okay for cellphones and tablets, they are just too slow for a desktop. Just try to encode a lengthy video on an ARM CPU or render a complex 3D animation. It can be done, if you have some time to kill.


RE: CPU != Tablet temperature
By SPOOFE on 6/4/2011 3:13:13 PM , Rating: 2
quote:
It's also worth mentioning x86 CPUs didn't even have a dedicated math co-processor until the 386.

Only if your argument is "at one time, RISC had a superiority over CISC," but that's not your argument. Your argument is present tense. The 386 is nowhere near "present," and in CPU terms is millions of years old. You might as well claim humans are inferior to fish because at one time humans didn't exist.

quote:
Intel has superior manufacturing processes and that is the ONLY thing that kept them in the game for so long

That's why AMD disappeared in the 90s, right? Right? Oh wait...

Go back to electrical engineering.


RE: CPU != Tablet temperature
By dotpoz on 6/6/2011 4:14:23 AM , Rating: 2
I agree. We are forced to keep an obsolete architecture with VARIABLE INSTRUCTION LENGTH just for compatibility reasons.

