
PowerPC was claimed to beat Pentium in power efficiency and performance in the 90s, but it suffered from low volume

Steven P. Jobs was unquestionably the dictator at Apple, Inc. (AAPL) throughout his fruitful career.  During his time with the company he co-founded and twice served as CEO (or perhaps more times, if you count the medical leaves), he controlled every minor detail, from the cafeteria layout at the Apple campus to the specs he wanted on each of his mobile electronics devices.

And while recent revelations in his biography that he was willing to burn his company's entire cash stockpile to try to legally destroy Google Inc.'s (GOOG) rival Android may be a tiny bit unsettling to shareholders, it's hard to argue with the pure numbers -- Mr. Jobs forged the most valuable tech company on Earth (in terms of market capitalization and profit).

I. Apple Long Planned to Ax RISC PowerPC Chips

But for all the products and transitions he spearheaded -- the iMac, the iPod, iTunes, OS X, the iPhone, iOS, and the iPad -- there appears to be one accomplishment he reportedly was not the original driving force behind after all: Apple's transition to Intel Corp. (INTC) CPUs in its personal computer lineup.

Apple's PC market share lay decimated when Apple acquired NeXT just before Christmas 1996.  With NeXT came the return of Apple co-founder Steve Jobs, who had quit Apple in 1985 to found NeXT and serve as its CEO.

Steve Jobs return
One key reason Apple brought back Steven P. Jobs was reportedly to try to break free of the PowerPC architecture.  It worked, for better or worse. [Image Source: Risen Sources]

At the time of Mr. Jobs' return, Apple was still using the PowerPC architecture, a reduced instruction set computer (RISC) CPU design that Apple co-created with International Business Machines Corp. (IBM) and Motorola, Inc. (which would eventually split into Google's recently acquired Motorola Mobility and Motorola Solutions, Inc. (MSI)).  The Apple-IBM-Motorola (AIM) alliance sounded like it would be a superstar.  But like most things Apple in the mid-1990s, it was struggling mightily.

Motorola, tasked with delivering the PowerPC designs in the 1990s, managed to get the G3 out the door in late 1997.  And it was a solid design, if a bit late.  It went head to head with Intel's Pentium II, and by many accounts it was winning.  Writes Low End Mac in a 1998 report:

The Pentium II is designed for a 66 MHz system bus and comes with a fixed 512 KB level 2 cache running at half CPU speed. That means the hotshot 333 MHz Pentium II is using a smaller, slower cache than the top end PPC 750 daughter cards.

To make matters worse, it's also a less efficient design, still rooted in the CISC technology of the late 1970s. And the beast is still an energy hog, pulling 23.7W at 333 MHz, several times as much as the efficient RISC design of the PPC 750.

The latest Bytemark pegs the 333 MHz Pentium II at 4.7 for integer performance, 5.3 for floating point. In comparison, the discontinued PPC 604e at 350 MHz hits 10.3 on the integer test and 7.6 on floating point. When Byte tested a prototype 275 MHz G3 system, it produced a 9.4 integer benchmark, 6.1 floating point.
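Normalizing those integer scores by clock speed makes the per-clock gap the report describes explicit.  A quick back-of-the-envelope sketch in C, using only the figures quoted above:

    #include <stdio.h>

    /* Normalize the Bytemark integer scores quoted above to points per
     * 100 MHz, to make the per-clock comparison explicit.  All figures
     * come straight from the 1998 Low End Mac report cited here;
     * nothing else is assumed. */
    int main(void) {
        struct { const char *chip; double mhz; double score; } results[] = {
            { "Pentium II", 333.0,  4.7 },
            { "PPC 604e",   350.0, 10.3 },
            { "PPC 750/G3", 275.0,  9.4 },
        };
        for (int i = 0; i < 3; i++) {
            printf("%-10s  %4.2f integer points per 100 MHz\n",
                   results[i].chip, results[i].score / results[i].mhz * 100.0);
        }
        return 0;
    }

By that crude measure the G3 delivered roughly 2.4 times the integer work per clock of the Pentium II -- exactly the sort of claim that, as discussed below, rarely got tested head to head by independent sites.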

But according to a new Churchill Club talk by Larry Tesler, a longtime Apple veteran, Apple was trying to ditch the PowerPC even back then.  Apple had lured Mr. Tesler away from Xerox Corp.'s (XRX) PARC research center to help build its graphical user interface operating system.

When it comes to ditching the PowerPC, Mr. Tesler remarks:

It was actually one of the reasons that the company decided to acquire Next… We had actually tried a few years before to port the MacOS to Intel, but there was so much machine code still there, that to make it be able to run both, it was just really really hard. And so a number of the senior engineers and I got together and we recommended that first we modernize the operating system, and then we try to get it to run on Intel, initially by developing our own in-house operating system which turned out to be one of these projects that just grew and grew and never finished. And when we realized that wouldn’t work we realized we had to acquire an operating system, either BeOS or Next, and one of the plusses was once we had that we could have the option of making an intel machine.

The full roundtable, which discusses Steve Jobs' contributions at Apple, can be viewed below:


II. PowerPC -- Did the Better Architecture Lose?

According to Rama Menon, a senior Intel engineer, the reason PowerPC fell behind in the race was not the design itself, but rather the AIM alliance's weaker manufacturing capability and smaller installed base.  The collapse of Apple computer sales meant there weren't many PowerPC computers on the market, which meant there wasn't much PowerPC software development going on -- at least compared to x86.

In 1998, the game was pretty even in terms of feature size.  Intel had just released the 7.5 million-transistor Deschutes on a 0.250 µm process, just months after the PowerPC G3 7xx series launched on a 0.260 µm process (late 1997).  But Intel crushed the PowerPC backers in volume, with the ability to churn out close to 10 million CPUs a year.

PowerPC G3
Apple and its supporters long claimed PowerPC to be far faster than its Intel Pentium brethren.  But the lack of independent benchmarks made it unclear exactly how big PowerPC's lead really was. [Image Source: Bryon Realey/Flickr]

In February 1999, Intel launched the Pentium III, a 9.5 million-transistor design that stuck with the 0.250 µm node.  Whatever problems there were between Apple and its fellow AIMers were about to get worse.  Motorola's counterpunch to the P3 was the PowerPC G4 (74xx), a 10.5 million-transistor chip built at 0.200 µm.  Motorola had apparently gone a bridge too far and was forced to back off its claim of 500 MHz, shaving its top speed to 450 MHz.  This was an embarrassment to Apple, which had been boasting of its ultra-fast "500 MHz" chips.

By February 2000, Motorola finally got its 500 MHz chips working, but Intel had set loose 733 MHz chips.  Then in May 2000, Intel aired a 0.180 µm chip clocked up to 1 GHz.  Of course, a 500 MHz PowerPC G4 was reportedly 1.67 times as fast as an 800 MHz chip in some benchmarks.

But these kinds of claims were always a bit hard for consumers to digest, given the relative lack of equivalent software -- optimized for both the better-selling CISC world and the RISC world -- to pit the architectures against each other head to head.  Indeed, sites like AnandTech seldom compared PowerPC against P3s in benchmarks -- or benchmarked PowerPC at all, for that matter.

In 2001, with Motorola on the verge of bailing on PowerPC, IBM took over design duties, creating the 64-bit PowerPC G5 (970), which aired in June 2003.  By 2005 Intel's Pentium 4 had overtaken the lower-clocked PowerPC G5 in key metrics like server performance.

III. The Endgame: Apple Goes x86

In June 2005 Steve Jobs aired Apple's plans to kill PowerPC in its computers and transition to Intel chips.  By 2006, the golden era of x86 had kicked off at Apple.  The new chips brought with them improved compatibility and portability -- after all, if you can get past the OS difference, a Mac x86 is still an x86.

In August 2009 OS X 10.6 "Snow Leopard" shipped, becoming the first Apple operating system to drop support for PowerPC Macs.  The era of the PowerPC was officially over.  But if reports are to be believed, the beginning of the end came nearly a decade and a half prior, when a Steve Jobs-less Apple began plotting to revamp its MacOS and jump ship to x86.

Snow Leopard
With Snow Leopard (OS X 10.6), Apple officially gave PowerPC holdouts the boot from its latest shiny toys. [Image Source: Google Images] 

That could have been a huge business for IBM, given that Apple is today the third largest PC maker in the U.S.  But it's hard to say whether Apple could have revived its market share to such healthy levels had it stuck with PowerPC and lacked native x86 application compatibility.

In making the switch, Apple may have killed a great thing.  PowerPC appeared far faster and more power efficient than Intel's designs, but the lack of a healthy quantity of unbiased benchmarks makes it hard to say whether that advantage was real or merely a mirage.

But for what it's worth, even as Apple put one veteran RISC architecture's foot in the grave, it was boosting what was arguably a superior architecture -- the Advanced RISC Machine (ARM, for short).  Today, thanks to Apple's early adoption in the iPod (and Palm, Inc. later following suit), ARM has become the de facto standard for mobile devices, proudly carrying on the RISC tradition and enjoying a discreet chuckle at Intel's struggling market entry attempts.

Sources: YouTube, Forbes





Comments



ARM
By Da W on 11/14/2011 3:59:48 PM , Rating: 4
And here's why we won't see ARM succeed on PCs or x86 succeed in phones anytime soon. You can't move what is deeply entrenched.




RE: ARM
By qwerty1 on 11/14/2011 4:14:04 PM , Rating: 3
It really depends on 3rd party software support. If ARM is able to get devs to port staple PC software over, I can see a possibility of its chips in the PC space, as the base OS will already be there (Windows 8 will support ARM).

The reverse could also be said for Intel entering tablets and phones - though admittedly MS is on much weaker footing in those spaces to act as the software support for such an endeavor.


RE: ARM
By kleinma on 11/14/2011 4:44:25 PM , Rating: 3
The biggest problem is going to be when the ARM-based Windows 8 devices start selling, and people find there are all sorts of programs that will not run because they are x86/x64 based. At least much of the recent .NET-based stuff should work without too many headaches, since much of it is compiled on the fly from IL code, but there will be massive issues nonetheless.


RE: ARM
By sprockkets on 11/14/2011 5:51:57 PM , Rating: 1
From what I've read from Ars, the Win8 ARM port will be Metro only, aka no traditional interface or desktop, nor will it ever run anything from the traditional side, since it also will not support anything but HTML5 apps.

That's right: no .NET, no Silverlight, no Flash.

Put simply, Win8 for ARM is just the tablet OS for tablets.


RE: ARM
By Wiggy Mcshades on 11/14/11, Rating: 0
RE: ARM
By ajcarroll on 11/15/2011 8:57:00 AM , Rating: 2
> This is 100% wrong
That's not what I took away from WinBuild. There are indeed C# language bindings for HTML5 apps running in Metro, so you can absolutely write HTML5 apps that run in Metro and use C#; however, this does not equate to there being general ".net" / "CLR" support in Metro. (Thus the original poster's comment of "no .net under Metro" is not "100% wrong".)

Windows 8 deploys IE in 2 different modes - Metro and traditional Windows desktop. If you switch to desktop mode, then yes, you can absolutely run existing .net apps, but when running under Metro, while you have a number of language bindings, you don't have all the libraries & APIs.

I had not heard that there was no desktop mode in the ARM port of Windows 8, but that makes a lot of sense. It would take a decent chunk of work on MS's part to get it all working - seems like something they may indeed choose to leave out.


RE: ARM
By Wiggy Mcshades on 11/15/2011 1:06:06 PM , Rating: 2
If the applications aren't running on the CLR, then they would have to be running native code. That just isn't possible if you are using HTML5 for any part of the application. When using HTML or XAML, each time the program opens, the HTML/XAML files are read and a UI is generated based on what the file outlines (simplified explanation). An application running native code can't generate its UI at run time because everything is precompiled. It's not a stretch to assume they are actually using the HTML in these HTML5 applications, so we can get rid of the possibility of native code. Now, unless Microsoft decided to use some other virtual machine instead of their own, it's not possible that the CLR is absent from Windows 8. The only other place to run an application would be from within the browser, but the libraries the Windows 8 sample applications are using can't all be accessed with the limited security privileges given to a web app running on IE.


RE: ARM
By ajcarroll on 11/15/2011 4:30:32 PM , Rating: 2
Sure, literally the CLR itself (i.e. the actual virtual machine) does, by definition, still exist; however, the impression I got at WinBuild was that the APIs have changed. I.e. .net as we know it today does not run under Metro in Windows 8 - so you get the language binding to any CLR-supported language, but the APIs will be WinRT.

The main point I was trying to clarify was that .net as we know it (which encompasses a whole lot more than just the VM) is not supported under Metro. I.e. existing .net code that uses any .NET APIs won't run under Metro.

Thus the original poster's comment that .net was gone was not "100% incorrect".

But I agree with you that the actual CLR itself remains in order to support execution of C# and other CLR languages.

I think the original poster also suggested that the ARM port of Win8 is Metro only. If this is correct, then you won't be able to run existing .NET code on an ARM port of Win8, because despite the existence of the CLR, the other runtime components are not supported under Metro.


RE: ARM
By name99 on 11/14/2011 7:08:09 PM , Rating: 1
quote:

And here's why we won't see ARM succeed on PCs or x86 succeed in phones anytime soon. You can't move what is deeply entrenched.


Right. This is all about entrenched ARM. Poor Intel, with their little Atom that is just so superior at power efficiency compared to ARM just can't catch a break.

Remind me, because I must have missed that article on AnandTech --- when exactly did Intel ship an Atom CPU (let alone an Atom SOC, including GPU) that gives anything CLOSE to the low power demands of ARM, and required by the phone and tablet markets?


RE: ARM
By Reclaimer77 on 11/14/2011 9:20:31 PM , Rating: 1
Tri-Gate will change all of that. It will completely eliminate all the problems X86 has on mobile devices. And by eliminate I mean blow them away and make whatever mobile chip Intel releases an overnight market-winner.

I'm calling it right now, ARM is in big trouble.


RE: ARM
By retrospooty on 11/15/2011 6:26:21 AM , Rating: 2
"when exactly did Intel ship an Atom CPU (let alone an Atom SOC, including GPU) that gives anything CLOSE to the low power demands of ARM"

This is true, but don't count Intel out. Intel has been behind with Atom and its manufacturing process, concentrating on Core processors. Atoms are still on a 45nm process, while Core i3/5/7 has been on 32nm for almost 2 years and is about to go 22nm in a few months. Intel has announced that it is changing that: Atom will be on 22nm soon and moving forward will use the latest process to be more competitive.

2 process shrinks is a lot, and no one does that part better than Intel. Intel was putting out 32nm CPUs in volume while TSMC and the others were struggling with 40nm. If they direct focus on Atom, they can make it work.


RE: ARM
By Ammohunt on 11/14/11, Rating: 0
Good Read
By ICBM on 11/14/2011 5:27:53 PM , Rating: 2
Good read.

I only picked up my first PowerPC Power Mac in '05, and a used one at that. I have maxed the RAM and put an SSD in, and the machine still runs well and is my main machine at the house. As avid PC gamers since the early 90s, my peers and I always dogged the Macs and PowerPC in general. Honestly, I can't say we were wrong, and I can't say there was anything better or faster about it compared to Intel or AMD offerings.

However it was different, and it was at least comparable. This is what attracted me to the platform. I wanted something different that could at least be somewhat competitive, and it did that. I was always curious how things worked on the other side. I would be very curious to peek into an alternate reality where Apple stuck with PowerPC. I believe the x86 move helped grow their business tremendously, but I believe the popularity of the iPhone and iPad would have been enough to still prop Apple up in the top 5, if not the top 3, computer manufacturers with PowerPC in tow. Laptops seemed to be the biggest issue, with the G5 never making a mobile appearance, so maybe not.

With PowerPC, Apple could actually say "Think Different" and mean it. Now "Me Too" is more appropriate, and more practical. From a business perspective it makes perfect sense. Still, the part of me that wants something radically different wishes they had never changed.




RE: Good Read
By name99 on 11/14/2011 7:31:49 PM , Rating: 5
There's nothing really secret in what Tesler is saying. I worked at Apple during the relevant period, on close-to-the-metal assembly code, and I and my colleagues were well aware of both the strengths and weaknesses of x86. A particular problem with PPC until the 970 was memory performance --- PPC was just way behind Intel, and things like AltiVec were never really able to show their true strengths because they couldn't stream data in and out fast enough.
And it's no secret that Apple maintained the ability to compile its code to x86 from day one of OSX, with much of mac OS classic being able to compile on x86 through the QuickTime on Windows project.

But the real issue, which no-one has mentioned, is that PPC was not competitive in the mobile/low-power space. The fact that 970 was competitive on the desktop was not enough for Jobs, who was well aware that the future was with laptops and mobile devices. This was what forced the break --- and why the interpretation in this article is misleading. Jobs felt that mobile was essential; IBM was simply not that interested in the compromises mobile required. IBM could deliver performance (and still can --- POWER6 and POWER7 are amazing CPUs) but wasn't interested in the sort of engineering and tradeoffs required to create a 10W Good CPU rather than a 200W incredible CPU.
Jobs is the sort of person who was willing to look at this and say that low power mattered --- mattered enough to bet the company on. That's the sort of decision everyone is willing to say they could see, and were part of, in retrospect, but I don't think it was *completely* obvious at the time just how dominant mobile was going to become.

[As evidence, I'd say that Windows at the time did only an adequate job on laptops --- vendors had to play around with SMM to get things to work OK, and Windows laptops of the time never had the integrated hassle-free feel of Mac laptops of the time. And, of course, we saw the same thing with iPhone --- people talked about really portable computers, but only Jobs really bought into the vision, and was willing to pay for the engineering and make the necessary decisions --- including ditching Intel for ARM (again because Intel could deliver the performance but not the low power required).]


RE: Good Read
By TakinYourPoints on 11/14/2011 10:27:53 PM , Rating: 2
Great post, and you're exactly right. Powerbooks were stuck on the ancient Motorola G4 when the desktops moved onto IBM's G5. This is the same G5 that required heat pipe cooling because they ran so hot, and similar to the PPC chips that had massive failure rates on the XBox 360.

The fact that nobody was willing to produce PPC suitable for mobile devices was a huge reason to go Intel. Core 2 on the other hand was perfect for laptops, and Intel has continued to do an excellent job optimizing them for smaller and smaller enclosures while (slowly) improving their IGPs, eliminating the space requirement for dedicated graphics on smaller machines.


RE: Good Read
By orthorim on 11/15/2011 8:19:17 AM , Rating: 2
+1

I remember reading that Steve Jobs knew about Centrino when Apple axed PPCs. He might even have said so himself publicly - the Centrino was the final death knell for PPC. Jobs knew that everything was shifting to laptops and mobile.

Intel had blazed away with the Pentium in the MHz wars, and when that didn't work anymore - a P4 in a laptop was a joke, basically, and they found they couldn't clock them much past 3GHz - it turned out they had another architecture under development which was perfectly suited to the task: the P3-based Centrino. It was an impressive move. They blindsided both PPC/IBM and AMD, which up to that point were chasing the P4.

The water-cooled G5 didn't give anybody reason to believe that PPC would be able to go mobile. Apple was stuck with G4s that it eventually overclocked to get decent speed out of them. They had nothing to compete with Intel's Centrino, a high-performance, low-power chip, the predecessor to the Core series.


RE: Good Read
By ICBM on 11/15/2011 10:07:21 AM , Rating: 2
Correct me if I am wrong, but IBM did come out with a mobile G5 - but it announced this after Apple announced its intentions to move to Intel. I don't remember any specifics on the mobile G5s, but I assume they weren't terribly competitive in regards to power consumption.

I don't think it's completely fair to say PPC couldn't have worked out well in the mobile sector. We see lots of small devices powered by PowerPC still - routers and RAID controllers, to name a few. I would assume that if IBM or Moto had had a team focused on mobile performance, we would have had a competitive design.


RE: Good Read
By Chadder007 on 11/20/2011 11:44:40 AM , Rating: 2
Speaking of Moto and PowerPC.....did Google obtain any of the PowerPC patents in the acquisition deal??? I think that would be a very interesting find if so.


RE: Good Read
By erikstarcher on 11/15/2011 5:09:50 PM , Rating: 2
Centrino /= CPU. Centrino was a platform: CPU, chipset, and wireless card. The CPU was the Pentium M processor. I agree with everything else you said, but to speak of Centrino as if it were a CPU just makes it look like you don't know what you are talking about.


RE: Good Read
By Reclaimer77 on 11/14/2011 7:32:17 PM , Rating: 2
quote:
However it was different, and it was at least comparable.


Yeah for only 5 times the cost of a comparable PC... :)


If Motorola only had continued 68k development !!!
By Boushh on 11/15/2011 11:06:53 AM , Rating: 2
What amazes me is that Motorola started a new CPU design when it already had the 68k, which was widely used during the 80's and 90's (Sun, Apple, Amiga, Atari, etc.).

If Intel could go from an 8-bit CPU to a 64-bit CPU, why couldn't Motorola do the same (and they didn't even need to start at 8-bit, because the 68k was already a 32-bit design)?

That would have prevented the transitions the Mac needed to make. And it would also have helped the Amiga (even though Commodore went bankrupt in 1994). Now the Amiga is still stuck with the completely overpriced PowerPC chips !!

At least I could still have used my old Macs (I still love my Quadra 650) with some recent software :-p

Oh, and I love programming the 68k in assembler ;-)




By marsilies on 11/15/2011 12:52:59 PM , Rating: 2
You're forgetting that when Intel initially went 64-bit, they did so with an entirely new architecture, Itanium (IA-64). It was AMD that originally extended x86 to x64, and Intel had to follow suit after the failure of Itanium.

Starting with a whole new architecture has its advantages, and continuing to update an older architecture has its disadvantages, but in the case of x64, the advantage of backwards compatibility won out over IA-64.


By allknowingeye on 11/15/2011 6:49:30 PM , Rating: 2
But they did just that. They developed the 88k family of RISC chips. Wikipedia completely misses the fact that the major user of this chip (and the 68k family) was the telecommunications industry. The 88k is still in very wide use today; in fact, if you made a phone call today, whether landline or cell, your call was quite likely processed at some point by a multi-88k processor arrangement.


By Shadowself on 11/16/2011 8:09:34 AM , Rating: 2
The original implementation of the 88k was the weirdest beast ever. Among other things:

The CPU and Cache required five separate chips for a full implementation.

It was designed with a strange internal "backplane" so that it could be extensible from the beginning (the original plan was to add a graphics unit in the second or third generation, and possibly a dedicated DSP or other dedicated numerics units).

It had a truly horrible pipeline for double precision floating point divides: each double precision floating point divide took 16 clock cycles. True, back in those days people were used to avoiding double precision divides like the plague, but 16 cycles was a joke even in that day.


By allknowingeye on 11/30/2011 8:43:53 PM , Rating: 2
It was not designed for a traditional workstation or desktop type of configuration. It was designed specifically to meet a very specialized requirement for the telecommunications industry. The FP performance is totally irrelevant in this case; it was basically not used much. The use of multiple chips was a strength, not a weakness: bunches of these chips were used in parallel, which is why it had a strange backplane - but not strange for its primary intended application.


By Shadowself on 11/16/2011 7:58:26 AM , Rating: 2
If you've really programmed in 68k assembler, you know that the original 68k was not a true 32-bit chip.

It was a strange 16/24/32-bit hybrid. The 24-bit address space was truly the weirdest part and caused major issues when Motorola (and subsequently the OSes built on the chip) went to a true 32-bit system. All that dense packing and use of those "spare" 8 bits (between the 24-bit and 32-bit edges) caused a LOT of problems for legacy code.
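To make that concrete, here is a minimal C sketch of the pattern -- the flag and mask values are hypothetical illustrations, not Apple's actual Memory Manager bit layout:

    #include <stdint.h>
    #include <stdio.h>

    /* On a 24-bit address bus, the hardware ignores the top byte of a
     * 32-bit pointer, so software of the era stashed flags there for free. */
    #define ADDR_BITS_24  0x00FFFFFFu   /* only these bits were decoded      */
    #define FLAG_LOCKED   0x80000000u   /* "spare" high bit pressed into use */

    static uint32_t real_address(uint32_t tagged) {
        /* Mandatory under "32-bit clean" rules; on a 24-bit machine the
         * hardware effectively did this masking for you. */
        return tagged & ADDR_BITS_24;
    }

    int main(void) {
        uint32_t addr   = 0x00123456u;          /* a plausible 24-bit address   */
        uint32_t tagged = addr | FLAG_LOCKED;   /* harmless under 24-bit decode */

        printf("tagged  = 0x%08X\n", tagged);               /* 0x80123456 */
        printf("address = 0x%08X\n", real_address(tagged)); /* 0x00123456 */
        return 0;
    }

Legacy code that dereferenced the tagged value without masking worked fine on a 24-bit machine and chased garbage on a true 32-bit one -- hence the breakage described above.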

Additionally, there was some legacy "stuff" left over from the true 16-bit 6800 chip that I wish had never been carried over.

The problem with the 68k development basically boiled down to a few bad decisions:
1. The truly horrific merge of the 68030 and 68882 with too many things left out. The 68040 was faster at many things, but for the hardcore, compute-intensive community it was a bust.
2. The failure of the 68050 (which was dropped even before it was fully developed) in favor of focusing on the 68060 (which never made it out in any real quantities).

The 68000 was a great chip in its day.
The 68020/68881 combination was even better.
The 68030/68882 combination was marginally even better.

However, Motorola made many, many errors evolving it after that stage and eventually -- and rightfully -- killed all future development.


Energy Hog
By Jamor on 11/15/2011 6:55:45 AM , Rating: 2
quote:
And the beast is still an energy hog, pulling 23.7W at 333 MHz,


I had forgotten! Oh to be able to once more call energy draw like that an energy hog.
Current energy efficiency my @ss...




RE: Energy Hog
By Church of Dirac on 11/15/2011 9:35:10 AM , Rating: 3
Haha really? The top of the line Core i7 Extreme 3960X has a TDP of 130W at the turbo boost speed of 3900MHz, but you get 15MB of cache and 6 cores. So it's really only 21.6W/core. Or you could get an Atom Z500 with a TDP of 0.65W at 800MHz, which would still blow that old CPU out of the water.


RE: Energy Hog
By Jamor on 11/15/2011 4:42:28 PM , Rating: 2
Really. Of course the current chips are faster.

Top of the line chip used to draw 30W, now it draws 130W.
Doesn't matter to me if it's simply faster or if it's faster by having multiple cores.
What matters is the current chip has 4x the power draw.

Guess this is partly the reason why nettops and laptops are so popular these days.


Hmm...
By sprockkets on 11/14/2011 5:21:57 PM , Rating: 2
quote:
Apple's PC market share lay decimated when Apple acquired NeXT just before Christmas 1996.  With NeXT came the return of Apple co-founder Steve Jobs, who had quit Apple in 1985 to found NeXT and serve as its CEO.


From what I've read of John Sculley's account, John had to fire him for being, wait for it... a douche bag.

http://www.mac-history.net/the-history-of-the-appl...

quote:
Steve was adamant about blaming John Sculley for everything that had happened. He felt that John had betrayed him and he had little faith that Sculley or anyone else could manage Apple without him. He said that his role as chairman was completely ceremonial, and it left him with no actual responsibilities. In fact, Apple had already moved his office from Bandley 3 to Bandley 6, a small building across the street that was almost empty. The new office was so remote from day to day operations that it later was nicknamed "Siberia".


That guy is such a jerk. Every time he can't get his way, it's a "betrayal," just like with Eric Schmidt.




Altivec - Games Consoles and IBM
By jecs on 11/14/2011 6:01:28 PM , Rating: 2
One great strength of the PowerPC G4 and G5 was AltiVec, a floating-point and integer SIMD instruction set that produced a respectable speedup (up to 4X) on some Photoshop filters and other image manipulation tasks. But in the end AltiVec alone couldn't justify the PowerPC architecture.
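For a flavor of what AltiVec offered developers, below is a minimal sketch of a packed single-precision add -- four float adds issued as one vector instruction. It assumes a PowerPC compiler with AltiVec enabled (e.g. GCC's -maltivec flag) and illustrates the intrinsics only; it is not code from any shipping product:

    #include <altivec.h>
    #include <stdio.h>

    int main(void) {
        /* AltiVec loads and stores want 16-byte-aligned data. */
        float a[4] __attribute__((aligned(16))) = {  1.0f,  2.0f,  3.0f,  4.0f };
        float b[4] __attribute__((aligned(16))) = { 10.0f, 20.0f, 30.0f, 40.0f };
        float r[4] __attribute__((aligned(16)));

        vector float va = vec_ld(0, a);     /* 128-bit aligned load        */
        vector float vb = vec_ld(0, b);
        vector float vr = vec_add(va, vb);  /* four float adds in one shot */
        vec_st(vr, 0, r);                   /* 128-bit aligned store       */

        printf("%.1f %.1f %.1f %.1f\n", r[0], r[1], r[2], r[3]);
        return 0;
    }

The (up to) 4X speedups on Photoshop-style filters came from exactly this pattern applied across whole image rows -- provided, as name99 notes above, the memory system could keep the vector unit fed.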

A rumor at that time that I can't confirm was that IBM dedicated almost all its resources to developing the highly lucrative RISC CPUs inside the current game consoles (Xbox 360 and PlayStation 3) but fell behind on Apple's less lucrative G5s. It apparently was too much for Apple, or for Steve Jobs. Although, clearly, he explained at the Intel Mac announcement that OS X had included x86 code from the very beginning. That squares with this "new" evidence.

But what would have happened if the PowerPC G5 had performed well above Intel's x86? Would Apple still have "had" to switch to Intel? This evidence indicates Apple would have made the switch anyway, but a strong PowerPC G5 might have delayed the decision.




Arguably superior
By name99 on 11/14/2011 7:03:28 PM , Rating: 2
quote:
boosting what was arguably a superior architecture


OK, I'll bite. Explain three ways in which ARM is a superior ARCHITECTURE.
In your answer, please note that architecture is NOT the same thing as micro-architecture, and has nothing to do with sales volume or other popularity metrics.




By Shadowself on 11/14/2011 7:28:03 PM , Rating: 2
quote:
But if reports are to be believed, the beginning of the end came nearly a decade and a half prior, when a Steve Jobs-less Apple began plotting to revamp its MacOS and jump ship to x86.


This was an open secret: the Star Trek project dated back to before the Mac 8400 (with its 68040 and TI DSP chip). (Back in 1993, for those history buffs.)

The thing that pushed it was that Motorola did the first G5 chip -- and it had a fatal flaw: when the bug surfaced (admittedly rarely), it brought the entire CPU down for a hard power recycle. When it did not run into that bug, it was a truly screaming fast chip. The flaw required almost a complete redesign of certain core pieces of the chip. Motorola decided that it would never be cost effective to sink the money into that much of a redesign and killed it.

IBM's G5 was mediocre. It was OK by competitive standards, but the little guy (in market share) has to be better than just OK to survive. Additionally, the only guys doing interesting work moving the PowerPC chip forward were from P.A. Semi (a company Apple eventually bought).

While there were those who argued otherwise (see Jon "Hannibal" Stokes' article at http://arstechnica.com/old/content/2005/10/5486.ar... ), the PowerPC designs were, as a generic architecture, moving more slowly than the Intel ones.

While there had been a lot of people pushing for Intel based Macs for over 12 years before "The Switch", Steve Jobs pushed the button.




Actually it was Lion
By Shadowself on 11/14/2011 7:38:46 PM , Rating: 2
quote:
With Snow Leopard (OS X 10.6), Apple officially gave PowerPC holdouts the boot from its latest shiny toys.


If you did an "Upgrade" from Leopard (10.5) to Snow Leopard (10.6), it still retained -- and used -- the PowerPC translators. You could run PowerPC-based applications without problem (e.g., that horrible banking software called Quicken, which people still use for some inexplicable reason -- Intuit still hasn't migrated its core software to OS X on Intel in over six years).

However, no matter how you do it, any upgrade to Lion (10.7) kills PowerPC capabilities. You can always run Snow Leopard as a dual boot or in a virtual machine to get that legacy support, but if you're moving 100% to Lion you *must* say goodbye to any PowerPC code.




By The Raven on 11/15/2011 12:09:29 PM , Rating: 2
quote:
it's hard to argue with the pure numbers -- Mr. Jobs forged the most valuable tech company on Earth (in terms of market capitalization and profit).
I wouldn't call it the most valuable TECH company on earth. Tablets, PMPs, and phones and a digital distribution catalog? I'd say consumer electronics is a better categorization.

Your statement is a slap in the face to pharmaceutical companies. Let's see Apple cure genital warts and then you might be onto something ;-)




Would have been interesting
By aliasfox on 11/15/2011 1:28:12 PM , Rating: 2
No particular attachment to PPC, but my 2002 dual processor 1.25 GHz G4 still acts as my main media machine - fast enough for all SD playback, with three HDs spinning around inside it (the newest one is five, maybe six years old?).

The G3 and G4's weakness was that they couldn't be clocked anywhere near as high (or ramped as quickly) as the x86 chips of the day. A 20-30% gap in clock speed was manageable from a performance perspective (if not a marketing one), but it was a bitter pill to swallow when Apple was selling dual 500MHz machines (for $3500) when 1 GHz x86 machines were available for $2k.

The next issue is that the G4 was never designed with DDR memory in mind - sure, later systems (including mine) used DDR, but they could only read it at SDR speeds - what use is an efficient processor that can't get data? Clock for clock, the G4 was pretty much a match for first-generation Centrinos, assuming it was properly fed. I believe it wasn't until 2007 that Motorola (by then Freescale) finally had a version of the G4 that could properly use DDR.

IBM's G5 fixed all of these issues - Anand compared a 2005 G5 against a 2010 Mac mini, and in some ways found them not too far apart in performance. Too bad it drew 10x as much power doing it.




Small user base is incorrect.
By allknowingeye on 11/15/2011 4:01:56 PM , Rating: 2
One of the major misconceptions about the PowerPC platform concerns who it was created for and who the major customer was. Yes, it is true that Apple and the AIM alliance were major forces behind it and customers. However, the biggest customer was probably the telecommunications industry, which also had a lot of influence on the design. These chips were purchased by the SHIPLOAD for use in digital telecom network switching machines (still really the largest and most expensive computing machines on Earth, although you will never see them on any list). These chips were used in virtually every part of these machines and are to this day the workhorse of the PSTN around the world (PSTN = Public Switched Telephone Network).




ARM's history
By vailr on 11/15/2011 9:44:21 PM , Rating: 2
The article didn't mention one important point: Apple once owned a major stake in ARM, but decided to sell it off in order to have enough cash to run its primary business of selling Macintosh computers. Which weren't selling all that well around the 1996 (NeXT buyout & Steve Jobs' re-hiring at a salary of $1/year) time period.




Thanks!
By gescom on 11/16/2011 4:21:32 AM , Rating: 2
Wow, great story, thanks for very interesting reading.
Cheers,g.




"This week I got an iPhone. This weekend I got four chargers so I can keep it charged everywhere I go and a land line so I can actually make phone calls." -- Facebook CEO Mark Zuckerberg













botimage
Copyright 2015 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki