
Surprise! Microsoft's Internet Explorer 10 preview was quietly running on an ARM CPU, unbeknownst to the audience. Microsoft employees let this little secret out later at the conference.  (Source: Engadget)
Watch out, Intel and AMD: power-efficient ARM processors will soon be able to run Windows

At CES 2011, Microsoft Corp. (MSFT) CEO Steve Ballmer showed off an early build of a next-generation Windows operating system running on an ARM architecture CPU.  This week at Microsoft's MIX Developer Conference in Las Vegas, the company gave developers a surprise Easter egg -- a preview build of Internet Explorer 10 and its underlying version of Windows were running on a 1 GHz ARM processor.

Samsung Electronics (005930), Texas Instruments Inc. (TXN), Qualcomm Inc. (QCOM), NVIDIA Corp. (NVDA), and other ARM chipmakers have all been hard at work cooking up power-savvy multicore offerings, which would be perfect for netbooks and notebooks.

Versus similarly clocked x86 processors from Intel or AMD, ARM processors would likely squeeze out an hour or two of extra battery life.  While die shrinks (and the ever-rising leakage current that comes with them) may eventually negate much of this advantage, in the short term ARM presents the first compelling consumer alternative to x86 in decades.

Windows 8 is expected to insert Microsoft's Ribbon UI element into more locations, including Windows Explorer.  It is also expected to have deeper touch integration and tie together the PC version of Windows with the Metro UI that Microsoft developed for the defunct Zune and Windows Phone 7.

But the addition of ARM support is perhaps the most anticipated feature.

While ARM currently offers power advantages, it remains to be seen how compelling a buy Windows ARM portables will be.  By offering base Windows support, including access to its Office suite and other enterprise tools, Microsoft makes ARM accessible to the everyday consumer.

But exactly how far Microsoft can go with its compatibility efforts is an open question.  If Microsoft can add ARM support for the DirectX and sound libraries, for example, it would be a relatively trivial exercise for developers to recompile their executables for ARM-architecture Windows 8 computers.

Microsoft makes the world's most widely used development environment, Microsoft Visual Studio.  By adding tools that make it quick and easy to switch from x86 to ARM builds, Microsoft could render application-compatibility complaints largely moot.

Likewise, if Microsoft can embed an ARM-specific virtual machine in the OS with an x86 emulation layer, it might be possible to run native x86 apps as-is, without recompilation.  This would be helpful in cases where a company didn't have the source code and the application developer was unresponsive or unwilling to make the change.  Implementing the same sort of system to provide ARM emulation in x86 Windows would be even more helpful to ARM, because it would allow developers to target the more efficient ARM architecture while ignoring x86.
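The cost such an emulation layer would pay can be sketched with a toy decode-and-dispatch interpreter. Everything below is invented for illustration -- the three-field mini ISA bears no relation to real x86 or ARM encodings -- but it shows why each guest instruction costs many host operations:

```python
# Toy decode-and-dispatch emulator. Every guest instruction requires a
# fetch, a decode, and a branch on the host, which is the basic reason
# cross-ISA emulation carries a large per-instruction overhead.
# The (op, dst, src) instruction format and opcodes are hypothetical.

def emulate(program, regs=None):
    """Run a list of (op, dst, src) tuples over an 8-register file."""
    regs = regs or [0] * 8
    pc = 0
    while pc < len(program):
        op, dst, src = program[pc]      # fetch + decode the guest instruction
        if op == "MOV":
            regs[dst] = src             # load an immediate into a register
        elif op == "ADD":
            regs[dst] += regs[src]
        elif op == "SUB":
            regs[dst] -= regs[src]
        else:
            raise ValueError(f"unknown opcode {op}")
        pc += 1                         # advance the guest program counter
    return regs

# Example: compute 5 + 7 into register 0
result = emulate([("MOV", 0, 5), ("MOV", 1, 7), ("ADD", 0, 1)])
print(result[0])  # -> 12
```

Production emulators avoid some of this overhead with dynamic binary translation (compiling hot guest code to host code), but the translated code still pays for register mapping and memory-model differences.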

Ultimately, the question also remains how low Intel can price its offerings and how big the true gap in power efficiency will be.  Unlike in the past, Intel may now find its pricing power hindered by new international scrutiny that prevents it from resorting to anti-competitive arrangements to stomp out pesky rivals like ARM.  But the exact picture is unclear.

Even more unclear is the fate of Microsoft tablets.  Even if ARM takes off in the notebook space, it may do little to help Microsoft sell Windows tablets, with Apple and Android so deeply entrenched.  In that regard, Microsoft may find that it has just given ARM a free ride to major expansion.  If that's the case, Microsoft's customers should still reap minor gains -- a positive for the company -- but Microsoft itself may not make significant inroads in its market expansion hopes.


You have got to be kidding me
By DanNeely on 4/13/2011 2:42:04 PM , Rating: 2
Likewise, if Microsoft can embed an ARM-specific virtual machine in the OS with an x86 emulation layer, it might be possible to run native x86 apps, as is, without recompilation. This would be helpful in cases where a company didn't have the source and the application developer was unresponsive or unwilling to make the change.

Emulating a different CPU architecture typically carries an order-of-magnitude or worse performance penalty. Running ARM binaries on a high-end PC is feasible; running relatively modern PC binaries on an ARM chip is absurd. You might be able to run apps from a decade ago at an acceptable performance level, but the number of those that matter today is infinitesimal.

RE: You have got to be kidding me
By nafhan on 4/13/2011 3:14:46 PM , Rating: 2
Obviously, running in an emulator is almost always going to be worse than running natively. However, the possibility of running in an emulator is almost always preferable to not being able to run at all, which is why emulators exist in the first place.

Also, a fairly high percentage of the apps consumers run on x86 hardware use only a tiny fraction of the hardware's capability. So an "order of magnitude" performance hit, while not ideal, might not even be noticeable.

RE: You have got to be kidding me
By chaos386 on 4/13/2011 7:19:09 PM , Rating: 2
Also, old software that isn't being developed anymore is exactly the sort of thing you'd need an emulator for. Something current and actively developed has a higher chance of being recompiled for a new architecture (assuming MS ports all their libraries to ARM as well).

RE: You have got to be kidding me
By Fritzr on 4/14/2011 12:01:46 AM , Rating: 2
"Modern apps" will be compiled for the market they are selling into ... with Windows on ARM, anybody looking for more sales will compile for ARM.

Legacy software that is no longer supported is what the x86 emulation layer is for. This strategy was used many years ago when Windows previously supported non-Intel architectures.

Some of the Amiga OSes used this technique to be architecture-independent. The code is compiled against a virtual ISA, and the interface with the CPU is a native-code runtime layer of at most a few hundred KB. Since the code it executes is fixed, the execution layer can be optimized for the target ISA. There's no need for the thousands of "what if they use X?" library modules that bloat general-purpose software, including Windows itself.

When the OS is installed, the ISA is identified and the matching optimized execution layer is installed. As new architectures (including new extensions such as SSE) are supported, new execution-layer versions are created, each optimized for a particular ISA. Vendors will be tempted to add "what if" branching so a single execution-layer module can support multiple variations, but that is the road to poor performance.
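The install-time selection Fritzr describes amounts to resolving one optimized implementation up front, rather than branching on CPU features inside every call. A minimal sketch, with `detect_isa()` and both implementations as hypothetical stand-ins for real feature detection (e.g. CPUID on x86) and real per-ISA code paths:

```python
# Sketch: choose one "execution layer" once, at startup, so later calls
# are direct function calls with no per-call "does this instruction
# exist?" branching. All names here are hypothetical stand-ins.

def add_generic(a, b):
    """Baseline path: works on every ISA."""
    return a + b

def add_simd(a, b):
    """Stand-in for an ISA-specific fast path (e.g. using SSE)."""
    return a + b

def detect_isa():
    """Stand-in for real CPU feature detection, run once at install/startup."""
    return "generic"

# Resolve the dispatch a single time; the chosen function is then called
# directly, which is the performance win over per-call feature checks.
IMPLS = {"generic": add_generic, "simd": add_simd}
add = IMPLS[detect_isa()]

print(add(2, 3))  # -> 5
```

Real systems implement the same idea with function-pointer tables or loader-resolved symbols (e.g. picking a library variant at install or load time), but the structure is the same: detect once, bind once, call directly.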

Wouldn't it be nice to be able to buy a package that is "Windows compatible" knowing you can take it home without worrying about which machine architecture you are using? ...including ISAs that may not have existed when the binaries were compiled :D

x86 supports that model today with bloated libraries that go through thick layers of "if this instruction exists, do this; else do that" checks that sap performance. Or they are coded natively to a base ISA, and all later performance enhancements are left out.

"You can bet that Sony built a long-term business plan about being successful in Japan and that business plan is crumbling." -- Peter Moore, 24 hours before his Microsoft resignation

Copyright 2015 DailyTech LLC.