



Intel announces 45nm "Nehalem" at IDF during Paul Otellini's keynote  (Source: DailyTech, Brandon Hill)

Intel's Paul Otellini holding up a "Nehalem" wafer  (Source: DailyTech, Brandon Hill)
Intel's largest architecture overhaul in decades is less than a year away

It wasn't that long ago that predictions of doom and gloom pinned Intel between a rock and a hard place.  The company's NetBurst architecture didn't scale and its Itanium architecture didn't sell; it looked as if, for the first time in history, Moore's Law was in serious jeopardy. 

All that changed, to some extent on a whim, with the Israeli-developed mobile processors.  The mobile Core architecture would eventually replace Intel's entire NetBurst family, and the company vowed to adopt a development cycle that would ensure it never pigeonholed itself in the same manner again: Intel's "tick-tock" philosophy.  The company will move to a new process node every two years, introducing a new architecture design in the intervening years on the mature process node.

Nehalem chief architect Glenn Hinton tells DailyTech the philosophy behind the 731 million transistor, 45nm Nehalem is an extension of the approach taken with Penryn and the 65nm Core 2 Duo processors: a universal, robust core design that will scale from mobile to server applications.

"We wanted to build the highest performance per core that could be used in notebooks all the way to high end servers," stated Hinton.

The Gigahertz War has officially shifted to the Multi-core War.  However, instead of fighting a pitched battle, the company will focus on improvements that allow multi-core systems to scale without forking development trees.  Hinton emphasizes that the company spent extensive resources improving single-thread performance, for example. 

An integrated memory controller and the new QuickPath interconnect will probably steal the limelight from these single-thread improvements, but wait, there's more.

Hyper-Threading will make its long-awaited return with Nehalem, yet Hinton claims its simultaneous multi-threading is a far cry from the Hyper-Threading found on NetBurst.  Nehalem will allow the operating system to dynamically power down threads -- so while an eight-core Nehalem processor will appear as 16 logical cores to the operating system, these threads can be powered down on demand. 

Like AMD's Barcelona architecture, Nehalem will allow the operating system to dynamically power down and sleep other components of the processor, including individual cores and portions of the cache.

Nine months later, it looks like Nehalem is following in the same footsteps as Penryn.  Today Intel CEO Paul Otellini announced the company taped out the processor three weeks ago.  Otellini demonstrated a Windows XP machine running Nehalem, and claims the processor boots Mac OS X as well.

Neither Otellini nor Hinton would hint at when Nehalem will see its first ship date, though Penryn is slated to ship almost exactly 11 months to the day after its tape-out announcement. Nehalem could potentially launch in the late summer of 2008, 11 months from its initial tape-out date.


Comments



Good News
By Master Kenobi (blog) on 9/18/2007 1:03:34 PM , Rating: 2
Good to see Intel is on track to meet their projected launches. There was much skepticism around them sticking to that tick-tock launch pattern; so far, so good, it seems.




RE: Good News
By Anh Huynh on 9/18/2007 1:19:07 PM , Rating: 3
Intel can stick to tick-tock because it has multiple design teams in different parts of the world working on consecutive projects, just for processor design.

IIRC AMD only has one design team. Don't quote me on that though.


RE: Good News
By Oregonian2 on 9/18/2007 1:50:40 PM , Rating: 3
Don't know if it's still true, but Intel used to have multiple teams working separately on the SAME project in competition. Whoever was first/best/whatever got produced and released, and the other group had their work canceled.


RE: Good News
By imperator3733 on 9/18/2007 2:51:17 PM , Rating: 2
I wouldn't want to be on the team that lost...

Seems like it would be a waste of resources, although the teams might be a bit more motivated.


RE: Good News
By mxzrider2 on 9/22/2007 3:22:15 AM , Rating: 2
this way is the best way to increase productivity and helps create the best possible product


RE: Good News
By Justin Case on 9/18/2007 7:35:20 PM , Rating: 5
Makes you wonder who lost to Netburst...


RE: Good News
By Ringold on 9/18/2007 9:51:52 PM , Rating: 2
Homer Simpson making an attempt at a second job?


RE: Good News
By theapparition on 9/19/2007 12:07:45 PM , Rating: 2
Do you mean who lost to the most successful processor generation in Intel's history?

In the end it was certainly not the best, but it took AMD quite a while to surpass Netburst, all while raking in tons of cash for Intel. Sounds like they did OK to me.


RE: Good News
By Justin Case on 9/19/2007 12:38:31 PM , Rating: 3
Commercial success is hardly proof of inherent quality or technological superiority

In terms of performance it certainly didn't take AMD long to surpass Netburst (Netburst was even slower than the PIII, at first, and the K7 was already faster than the PIII). The only thing that gave Netburst an advantage over the K7 (in a very restricted range of applications) was SSE2.

Intel sold more chips because they have better marketing, established deals with large OEMs and higher production capacity. Oh, and because they blackmailed some companies out of selling chips from the competition, but nevermind that now. Intel is 10 times the size of AMD, of course they're going to sell more. GM also sells more than Lamborghini (they still do, right?).

But the point is that Intel would have sold just as many units (probably more) if they'd gone with an evolutionary, power-efficient design based on the PIII. They could have called it, I don't know... maybe "Pentium-M" or something like that. ;)

Netburst was the marketing department's dream: GHz, GHz, GHz. It's so much simpler to convince people that a chip is faster when it has a big number after the name. But look at what happens when you let the marketing department run the company: the K8. And all of a sudden your competition has deals with every OEM out there, and a foot in the server market. Oops. How long do you think it'll take Intel to forget that lesson?


RE: Good News
By TomZ on 9/19/2007 12:40:48 PM , Rating: 1
But remember in business, commercial success is always more important than technical success.


RE: Good News
By Justin Case on 9/19/2007 1:47:34 PM , Rating: 2
Did you miss the 4th paragraph?


RE: Good News
By TomZ on 9/19/2007 2:01:18 PM , Rating: 1
I guess I didn't get that out of the 4th paragraph.


RE: Good News
By theapparition on 9/19/2007 3:11:25 PM , Rating: 3
Justin, not one point of your reply contradicted any part of mine. I'm not quite sure why you had to justify something. P4 (Netburst) is a resounding success by ANY metric you use to define it.

I did find your original comment pretty funny, though.

quote:
Commercial success is hardly proof of inherent quality or technological superiority

In a business, commercial success is the only thing that matters. AMD would have killed for that level of success.

Why don't you ask 3DFx what it's like to have technological superiority?

quote:
In terms of performance it certainly didn't take AMD long to surpass Netburst (Netburst was even slower than the PIII, at first, and the K7 was already faster than the PIII). The only thing that gave Netburst an advantage over the K7 (in a very restricted range of applications) was SSE2.

I suggest you go back and look at benchmarks again. P4 was never slower than P3, maybe IPC, but who cares, the MHz advantage made up for it and then some. All P4's (from Willamette up) competed very favorably to AMD's current offerings. Some wins, some losses for both camps, but overall, until the Athlon64 went up against Prescott, the P4 was very competitive. Lest you not forget the early Athlons that were space heaters without thermal diodes. How many of those chips burned up. Such short memories we have. SSE2 did help intel in some benchmarks, but in others without SSE optimizations, the clockspeed advantage of the P4 gave it the win. Even prescott won a few benchmarks (SSE assisted) against the latest Athlon64's. If you ran that particular application (mostly media encoding) would you still think the design was "inferior"?

quote:
Intel sold more chips because they have better marketing

Your point? Sounds like going the MHz route was the right decision then.

quote:
But the point is that Intel would have sold just as many units (probably more) if they'd gone with an evolutionary, power-efficient design based on the PIII.

Speculation??? In the current climate of MHz wars, that could have been "chink-in-the-armour" that AMD was looking for. AMD chose a complete redesign because they HAD to, not because they are some great saviour helping us from evil, but because that was necessary to compete. And it was a very smart move on AMD's part.

quote:
It's so much simpler to convince people that a chip is faster when it has a big number after the name.

Yep, and once again, your point is???? Wasn't that the reason for the AMD rating.

I'm for competition, and don't have allegiance to either brand. But I don't like mis-information. You obviously no nothing about business, these processor roadmaps are developed years in advance. It takes a lot of time for developement. Sucess in business is also about hitting the market at the right time with the right product. Sometimes companies get it right, other times they don't.

Tell you what, you keep your head in the sand and blindly buy AMD. I'll look at the facts and make the best processor choice from the models that are available.


RE: Good News
By Justin Case on 9/19/2007 3:25:43 PM , Rating: 2
> not one point of your reply contradicted any part of mine.

Not one point of your message had anything to do with the subject of the discussion, and your conclusion was simply wrong. Oregonian2 said that "Intel used to have multiple teams competing on the same project; whoever was best got produced and released and the other group had their work canceled."

And I said "makes you wonder who lost to Netburst" (meaning the Pentium-M, obviously, which got pushed to the background).

You took that as an opportunity to praise the decision to push Netburst as a brilliant move on Intel's part, when it was precisely that move that let AMD catch up (and surpass) Intel in terms of performance and technology (indirectly leading to the death of the IPF), strike previously unthinkable deals with large OEMs, and get a significant chunk of the lucrative server and supercomputing markets. And when Intel finally had to wake up, guess what they used as a starting point for their new CPU generation. The good old Pentium-M.

Netburst the most successful CPU generation in Intel's history? If you think that, then (to quote your own post) "You obviously no nothing about business". [sic]


RE: Good News
By theapparition on 9/19/2007 4:07:20 PM , Rating: 2
quote:
And I said "makes you wonder who lost to Netburst" (meaning the Pentium-M, obviously, which got pushed to the background).

And I replied that I wouldn't be ashamed to lose to the team that came out with the most successful processor in Intel's history. There's something seriously wrong with you if you can't understand that.

quote:
Netburst the most successful CPU generation in Intel's history? If you think that, then (to quote your own post) "You obviously no nothing about business".

I define successful as the processor that sold the most in Intel's history and generated the most CPU profit in their history. In fact, the P4 has the largest user base of ANY CPU. How in the hell do you define success?

Name me one processor that has sold as well???
I won't hold my breath waiting.

quote:
You took that as an opportunity to praise the decision to push Netburst as a brilliant move on Intel's part, when it was precisely that move that let AMD catch up (and surpass) Intel in terms of performance and technology

AMD only surpassed Netburst in the end. Everyone seems to forget this. I personally think Intel held on to the design too long. You usually see this with some companies; they get lazy because the existing design is so successful. Get it?

I'm not trying to argue which is better, but to say the Netburst was a flop is just plain wrong.

quote:
get a significant chunk of the lucrative server and supercomputing markets.

AMD will still have that because of the superior scaling advantage due to the IMC, and that's the only reason they are enjoying the server market. Pentium-M wouldn't have helped them there, as you suggest.


RE: Good News
By Justin Case on 9/19/2007 5:03:34 PM , Rating: 2
You seem to think Intel's success is measured by how many processors they sell in abstract. It's not. It's measured by how many processors they sell relative to their competitors.

Intel sold a lot of "Pentium 4" units (which was the brand name given to several different CPUs, by the way) because they had that particular product in the market for a long time, because they spent a lot of money marketing it (sometimes with claims that made Apple sound grounded), and because there were a lot of people in the world buying computers.

They still lost market share (and, in many markets, exclusivity) to their rivals. And that is how one measures the commercial success of a product: was the company in a better position before or after betting on that product? If the market grows by 40% and a company grows by 20%, that's not a success.

The K7 and K8 were great products, but if Intel had kept doing what they do best (letting technology drive their business) instead of massively screwing up with Netburst, we'd still be wondering if Dell was ever going to buy AMD chips, and companies needing 64-bit systems would be buying Itaniums.

Your attempt to shoot off in multiple directions with this post doesn't invalidate the fact that your original claim was complete nonsense. The idea that Netburst was Intel's "most successful product ever" is just too ridiculous to waste any more of my time with.


RE: Good News
By TomZ on 9/19/2007 5:55:30 PM , Rating: 2
quote:
You seem to think Intel's success is measured by how many processors they sell in abstract. It's not. It's measured by how many processors they sell relative to their competitors.

That definition of success is just as valid as any number of other definitions of success for Intel. And if you look at what Intel shareholders really want - increasing share price and/or dividends - the number of processors they sell relative to competitors may or may not be important in reaching that goal.


RE: Good News
By Justin Case on 9/19/2007 9:52:54 PM , Rating: 2
Apparently you don't understand how the stock market works, or what Intel's business consists of, so let's see if I can explain it succinctly:

Ignoring speculation, global economic trends, etc. (which are external factors affecting the stock market), Intel's share price is still influenced by all its businesses, of which x86 CPUs are merely one part. Intel also makes chipsets, network and controller chips, embedded processors, flash memory, GPUs, and so on.

In other words, you cannot use share price as an indicator of the success of a particular x86 CPU model or family. To judge that, you look at how well that CPU (or any other product) did compared to competing products during its commercial lifespan. That's what the concept of "market share" means.

I also register the fact that you're "unsure" as to whether or not x86 CPU market share is a relevant factor in Intel's success. I can picture the scene right now: Intel executive slaps his forehead and says "I just had this brilliant idea! Maybe if we lowered our market share we'd be more successful. Bring back Prescott immediately!"

Sigh...


RE: Good News
By TomZ on 9/19/2007 10:06:24 PM , Rating: 2
LOL, you are a real piece of work. Nice chatting with you. Over and out.


RE: Good News
By theapparition on 9/20/2007 8:09:49 AM , Rating: 2
You could argue all day, but some people are still fanboy morons. :-)


RE: Good News
By TheGreek on 9/24/07, Rating: -1
RE: Good News
By StevoLincolnite on 9/20/2007 5:48:10 AM , Rating: 2
Yes, Intel did sell a lot of processors, and yes, they were in the exact same position as the Athlon is now.
They had their prices lowered in order to sell more in volume, just like AMD is doing now to try and steal Intel's thunder.


RE: Good News
By theapparition on 9/20/2007 9:19:52 AM , Rating: 2
And how's those low prices going for AMD?


RE: Good News
By StevoLincolnite on 9/20/2007 5:44:43 AM , Rating: 2
quote:
Justin, not one point of your reply contradicted any part of mine. I'm not quite sure why you had to justify something. P4 (Netburst) is a resounding success by ANY metric you use to define it.


Yes the Pentium 4/D/Netburst was a success in terms of sales, and market penetration, but it was hardly the market leader in terms of performance once the K8 burst onto the scene, and because you seem to like the quotes I thought I might as well give it a try also.

quote:
In a business, commercial success is the only thing that matters. AMD would have killed for that level of success. Why don't you ask 3DFx what it's like to have technological superiority?


The only feature worth mentioning that was compatible with most games was their anti-aliasing; most other features went mainly unused. Besides, it lacked TnL, unlike the Geforce 2 at the time, and the Geforce 2 as well as the Geforce 3 were faster in most games anyway.

quote:
I suggest you go back and look at benchmarks again. P4 was never slower than P3, maybe IPC, but who cares, the MHz advantage made up for it and then some. All P4's (from Willamette up) competed very favorably to AMD's current offerings. Some wins, some losses for both camps, but overall, until the Athlon64 went up against Prescott, the P4 was very competitive. Lest you not forget the early Athlons that were space heaters without thermal diodes. How many of those chips burned up. Such short memories we have. SSE2 did help intel in some benchmarks, but in others without SSE optimizations, the clockspeed advantage of the P4 gave it the win. Even prescott won a few benchmarks (SSE assisted) against the latest Athlon64's. If you ran that particular application (mostly media encoding) would you still think the design was "inferior"?


The Tualatin Pentium 3 was far more powerful than the Pentium 4 in most situations; that was with the Willamette. It was not till about the 1.7-1.8GHz Pentium 4 Willamette that the Pentium 4 could finally beat the Tualatin 512k 1.4GHz chip.
Not to mention that the Pentium 4 was beaten even by the Duron early on, and the fact that the early Pentium 4 boards only supported RDRAM, then moved to the much slower SDRAM, then eventually DDR, didn't help things either.
Besides, one of the reasons why the Pentium 4 couldn't compete with the Pentium 3 Tualatin was the lack of software that could take advantage of the SSE2 instruction set.

quote:
Your point? Sounds like going the MHz route was the right decision then.

Depends which way you look at it; if you are a hardcore gamer, then Intel's decision was not the best.

quote:
Speculation??? In the current climate of MHz wars, that could have been "chink-in-the-armour" that AMD was looking for. AMD chose a complete redesign because they HAD to, not because they are some great saviour helping us from evil, but because that was necessary to compete. And it was a very smart move on AMD's part.


That's very true, and the Athlon 64 is still a very good processor, and only getting better as the prices continue to fall.
I personally wouldn't buy a Presshot. A Northwood... maybe.

quote:
Yep, and once again, your point is???? Wasn't that the reason for the AMD rating.


Yes, but PC manufacturers still listed the actual "GHz" of the processor, just like with the Core 2 series. I know someone that thought his Pentium 4 3.2GHz was faster than a Core 2 Duo because it had a higher clockspeed.

quote:
I'm for competition, and don't have allegiance to either brand. But I don't like mis-information. You obviously no nothing about business, these processor roadmaps are developed years in advance. It takes a lot of time for developement. Sucess in business is also about hitting the market at the right time with the right product. Sometimes companies get it right, other times they don't.


Yes, they are made years in advance; the Pentium 4 went into the design stages during the Pentium 2's life span.
I also don't have a favorite brand. I liked the Pentium 3 over the Athlon when Intel moved the cache on-die.
Then I liked the Athlon XP and 64 over the Northwood because of power consumption, heat and performance, and now I enjoy both the Athlon 64 X2 because of its price, and the Core 2 series for its performance.
But come on, everyone that's a Computer Enthusiast knows that the Pentium 4 sucked big time.

quote:
Tell you what, you keep your head in the sand and blindly buy AMD. I'll look at the facts and make the best processor choice from the models that are available.

And you can keep buying the Intel processors (Even though they are great, as is AMD's Athlon 64 X2 because of its price).


RE: Good News
By theapparition on 9/20/2007 9:18:46 AM , Rating: 2
quote:
and because you seem to like the quotes I thought I might as well give it a try also.

I love them. Makes it easier to pick apart :P

quote:
Yes the Pentium 4/D/Netburst was a success in terms of sales, and market penetration, but it was hardly the market leader in terms of performance once the K8 burst onto the scene,

Technical leadership does not ensure success of a product. There are many instances where a superior design has lost to an inferior one, due to all kinds of market influences. We could go on for hours here talking about failed products that were really good. 3DFx had a line of Voodoo cards (5500 & 6500, IIRC) that were very fast but never released. NVidia was embroiled in a lawsuit with 3DFx at the time and it was alleged that NVidia stole the tech. In the end, NVidia purchased the assets of 3DFx and the lawsuit was gone. The 5500 showed real promise as the fastest card against the RivaTNT. LOL, you're comparing them to the Geforce 2 and Geforce 3, which hadn't been released until a year after 3DFx closed shop!

quote:
The Tualatin Pentium 3 was far more powerful than the Pentium 4 in most situations; that was with the Willamette. It was not till about the 1.7-1.8GHz Pentium 4 Willamette that the Pentium 4 could finally beat the Tualatin 512k 1.4GHz chip.

Tualatin was a great chip, but check your timeline. Tualatin never got to 1.4GHz during Willamette, so compare apples to apples (or in this case Tualatin to Northwood).

quote:
Depends which way you look at it; if you are a hardcore gamer, then Intel's decision was not the best.

Yes, and if you were a media encoder you'd probably want a Prescott over an Athlon 64, since they consistently outperformed all but the latest Athlons in certain benchmarks. I fail to see what you're getting at. I originally stated that the benchmarks were win some/lose some, based on what your most important application was. Athlons were better at floating point/games, Intels better at multimedia. Take off the blinders and realize what is best for you is not what is best for the market as a whole.

quote:
Yes, but PC manufacturers still listed the actual "GHz" of the processor, just like with the Core 2 series. I know someone that thought his Pentium 4 3.2GHz was faster than a Core 2 Duo because it had a higher clockspeed.

You've just agreed with my point. Was this statement trying to contradict me?

quote:
But come on, everyone that's a Computer Enthusiast knows that the Pentium 4 sucked big time.

This is what I've been rallying against the whole time. The Pentium 4 did not "suck". Northwood was a great core, Intel just milked it too long. You see, they learned from that mistake with their new "Tick-Tock" approach. Northwood was far above the Athlons; the Athlon XPs got on par with it. They flat out got beat by the Athlon 64, and it took an IMC to do it. Prescott was a disaster (IMO), but it was still successful in the market, I can't take that away from it. How can any rational person justify that a complete product line was garbage because it got beat by a better product? You can say Intel's management blew it by not planning better, but you can't say the Pentium 4 "sucked". It still remains the best-selling chip ever.

But the best part is when I made the statment "Tell you what, you keep your head in the sand and blindly buy AMD. I'll look at the facts and make the best processor choice from the models that are available."
And your reply
quote:
And you can keep buying the Intel processors (Even though they are great, as is AMD's Athlon 64 X2 because of its price).

LOL, was this a bad joke or a poor insult? Not sure, but it still makes me laugh.
I buy what works the best for my organization and personally. I've had both, and will continue to buy whatever suits my needs.

It's been fun.


RE: Good News
By StevoLincolnite on 9/20/2007 9:46:58 AM , Rating: 2
quote:
Technical leadership does not ensure success of a product. There are many instances where a superior design has lost to an inferior one, due to all kinds of market influences. We could go on for hours here talking about failed products that were really good. 3DFx had a line of Voodoo cards (5500 & 6500, IIRC) that were very fast but never released. NVidia was embroiled in a lawsuit with 3DFx at the time and it was alleged that NVidia stole the tech. In the end, NVidia purchased the assets of 3DFx and the lawsuit was gone. The 5500 showed real promise as the fastest card against the RivaTNT. LOL, you're comparing them to the Geforce 2 and Geforce 3, which hadn't been released until a year after 3DFx closed shop!


Errm, what have you been smoking? I want some!
The 3DFX cards go as follows:
Voodoo (positioned against the Riva 128)
Voodoo Rush
Voodoo 2 (positioned against the TNT)
Voodoo 2 SLI (it came after the Voodoo 2)
Voodoo Banshee
Voodoo 3 1000, 2000, 3000 and 3500 (which went against the TNT 2)
Voodoo 4 4500, which was positioned against the Geforce 256, was classed as a budget card, and used the same VSA-100 (Voodoo Scalable Architecture) that was used in the Voodoo 5 series.
Voodoo 5 5000 and 5500, mid-range/high-end products positioned against the Geforce 2 MX and Geforce 2 GTS (Giga Texel Shader).

3DFX were making 3D accelerators in 1994, and went bankrupt in 2002.

And it was not because nVidia "stole technology"; it was the other way around. 3dfx used "Single Pass Multi-Texturing" in the Voodoo 2, which was a patent that nVidia owned, and then later on down the line, when the Banshee was released, Single Pass Multi-Texturing was also removed.

And the card you were referring to that was never released was the Voodoo 5 6000, which used an updated VSA-100 core, TnL support and all the jazz, and which at the time would have left nVidia's Geforce series spluttering in the dust in terms of image quality. There were some working revisions of the card with lowered clock speeds, and benchmarks showed it was on par with the Geforce 3 Ti500; although that was with immature drivers, it does give a good idea of the kind of performance it did offer. And after nVidia bought 3dfx, the 3dfx team went on to design the Geforce FX series, which as you know sucked.

quote:
Tualatin was a great chip, but check your timeline. Tualatin never got to 1.4GHz during Willamette, so compare apples to apples (or in this case Tualatin to Northwood).


My timeline is fine; the Tualatin was continued till 2003, and the Willamette was released in 2000.
And you are wrong about there not being a 1.4GHz Tualatin processor; I have one sitting in a LAN box, and it sits overclocked, running at 1.8GHz with ease.

http://en.wikipedia.org/wiki/Pentium_III
http://en.wikipedia.org/wiki/Pentium_4

Willamette was also out before the Northwood; Northwood was the refresh that hit 3.2GHz, then the Presshot took over.

quote:
This is what I've been rallying against the whole time. The Pentium 4 did not "suck". Northwood was a great core, Intel just milked it too long.


And I said "Computer Enthusiest" most computer enthusiast chose the Athlon over the Pentium, why? because of performance.
And yes they did milk it to long, but the Pentium 4 technology did not goto waste, it lived on in the Pentium M, the Pentium M was based upon the Pentium 3 Tualatin, But it used the Pentium 4's Advanced Tree predictor, and Front side bus, and also improved larger cache, the result? A huge IPC boost. A Pentium M running at 1.6ghz can beat out a Pentium 4 2.4ghz, my Pentium M 1.6ghz @ 2.2ghz can out perform a 3.4ghz Pentium 4.

You do have some good knowledge, but when it comes to older hardware like the Pentium 3 and the Voodoo cards, I'm sorry... but you were just plain wrong and inaccurate.


RE: Good News
By maroon1 on 9/20/2007 8:57:42 AM , Rating: 2
quote:
(Netburst was even slower than the PIII, at first, and the K7 was already faster than the PIII)


At the same clock speed PIII is faster than K7

Here are the benchmarks for you
http://techreport.com/articles.x/2784/6

Pentium 3 1.2GHz (Tualatin) was faster than AMD Athlon 1.2GHz (Thunderbird)


RE: Good News
By sinful on 9/22/2007 3:05:14 PM , Rating: 2
You assume that the Pentium M would be as good as it was without the existence of what Netburst pioneered. That's a false assumption.

In other words, Intel could have come out with the Pentium M first, and it wouldn't have ramped to decent clock speeds.
It might have been efficient, but dog slow overall - resulting in a massive flop.

Here's a wild idea, maybe the Pentium M wasn't ready for prime time at the time, and Intel went with the best solution. Maybe they were having problems with the Pentium M ramping up in clock speed, and they needed to solve that problem first in order for the Pentium M to be successful.

You know, people act like ramping up the clock speed is like turning a knob. It isn't. With it, there are a LOT of major technical hurdles to overcome (keeping the CPU fed with data, etc).

Intel pioneered totally new ground with Netburst. Name one other architecture that even *approaches* the clock speeds that Netburst achieved. You can't. Gee, if it was just an issue of marketing, don't you think IBM or AMD would have done so as well? Oh, that's right, IBM and AMD have had SERIOUS issues ramping clock speed up.

This should be obvious when AMD releases a new chip that's 50-200MHz faster than their previous chip. Hint: If they can only make a 50MHz improvement in clock speed, they're having major issues with clock speed holding them back.

And again, take a look at where the G4 and G5 failed. It isn't because they weren't *efficient* enough, it is because they didn't scale well in **clock speed**.

In other words, Intel solved a massive hurdle with Netburst that was holding ALL their competition back - and still is.

The fact that AMD & IBM aren't able to crank up their clock speed pretty much says that it's a major technical hurdle they haven't been able to solve - that Intel did.

In other words, the Pentium M's success might be directly related to the fact that Intel went
"Gee, Netburst solved the problem with X really well, but feature Y needs improvement. Our next design needs to incorporate X, and improve upon Y...."

Meanwhile, AMD is (still) going "Our Y feature is really great, but improving X is a massive challenge. Man, if only we could solve X....".

Considering Intel chose what they considered to be the BEST of the two designs, this probably represents the most likely situation.

In short, Netburst achieved something no other architecture has done (and still hasn't), and was a runaway commercial success.
What a flop, eh?


RE: Good News
By Marcus Pollice on 9/18/2007 2:16:08 PM , Rating: 2
Since they started developing the Griffin core I wouldn't be too sure about AMD only having one design team.


RE: Good News
By Justin Case on 9/18/2007 7:44:44 PM , Rating: 3
I believe they have two (for CPUs). One working on the "Bulldozer" family and one working on "Griffin". I suspect part of the Griffin team came from ATI, BTW, and it's being developed along with a dedicated mobile chipset.

At this time Intel probably has 3 or 4 CPU teams working in parallel, and they have much easier access to the production lines. One of the reasons why Barcelona was late was that AMD's management gave priority to Athlon X2 production, and Barcelona samples were delayed.


RE: Good News
By erikejw on 9/18/2007 10:55:56 PM , Rating: 2
quote:
AMD only has one design team. Don't quote me on that though.


You sure about that :)))


RE: Good News
By Sunrise089 on 9/18/2007 1:28:26 PM , Rating: 2
I'd wager money Intel won't keep it up much longer unless AMD vastly exceeds present expectations of processor performance.

Nothing against Intel, but I'm sure this sort of development schedule costs billions. Since it appears keeping Conroe, much less Penryn and now this, would be sufficient to battle AMD if Intel would sell chips at the speeds the OC community can reach with lower-end components, there really isn't a long-term reason to replace a very successful architecture after only 2 years.


RE: Good News
By ChristopherO on 9/18/2007 1:48:07 PM , Rating: 2
quote:
Nothing against Intel, but I'm sure this sort of development schedule costs billions

I'm sure it does, but it isn't as bad as you would think. This is the same model ATI/Nvidia employ for doing their cores. They have a team focusing on incremental improvement, and then "something new".

However "something new" is really only a moderate step-up compared to the huge generational leaps in the past.

This really is the best way to develop CPUs. It keeps the market current with moderate performance boosts, while staying nimble so that they can address looming issues much quicker (power, heat, etc).

And besides, it has worked beautifully for them without the risk of going horribly off-course (Itanium).

The Pentium M is a revised version of the P3, the Core a revised Pentium M, the Core 2, a revised Core, and the Nehalem is a revised Core 2. Basically they completely scrapped the P4 generation and retrenched with something older and fundamentally more efficient.


RE: Good News
By Operandi on 9/18/2007 2:36:26 PM , Rating: 2
I agree, but what are people going to do with all this power?

A product can only be successful if there is a demand for it.


RE: Good News
By peldor on 9/18/2007 2:42:41 PM , Rating: 5
You know, there's only a worldwide market for maybe 4 computers.


RE: Good News
By lumbergeek on 9/18/2007 3:15:09 PM , Rating: 2
LOL!


RE: Good News
By retrospooty on 9/18/2007 5:24:45 PM , Rating: 2
"You know, there's only a worldwide market for maybe 4 computers."

I vote this is the post of the day... Congrats ! LOL


RE: Good News
By Justin Case on 9/18/2007 7:57:56 PM , Rating: 1
Simple: Microsoft will make sure that Windows Vista SP2 will need at least 8 cores at 3 GHz just to run Notepad.

Seriously, now, some people do need all the power they can get (think FEA, 3D rendering, etc.), but the huge increase of "consumer-level" CPU power has led to progressively sloppier and less optimized code.

Since manufacturing slower CPUs isn't particularly cheaper than manufacturing fast ones, the trend is likely to continue.

On the plus side, shifting the focus from GHz to multi-core will force programmers to start thinking a bit harder about their code, instead of just relying on next year's CPU speed-ups. On the other hand, multi-threading opens the door to a vast army of brand new bugs that single-threaded applications never had to worry about (and which many programmers don't have a clue about).
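To make that concrete, here's a minimal C sketch (purely illustrative, not taken from any real product) of the classic bug a single-threaded program never has to worry about: two threads increment a shared counter with no synchronization, so updates get lost and the final total usually comes out short, and differently on every run. Build with something like gcc -pthread.

/* Hypothetical example of a data race: two threads do an unsynchronized
 * read-modify-write on a shared counter. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                 /* shared, unprotected */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                       /* load, add, store: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}

Wrap the increment in a mutex (or use an atomic type) and the problem disappears -- which is exactly the kind of reasoning single-threaded code never forced anyone to learn.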


RE: Good News
By TomZ on 9/18/2007 8:07:01 PM , Rating: 2
quote:
but the huge increase of "consumer-level" CPU power has led to progressively sloppier and less optimized code

I'd like to see you prove that statement. Sounds like something you just pulled out of your @ss.


RE: Good News
By Justin Case on 9/18/2007 9:22:42 PM , Rating: 1
So now you feel the need to "attack" every message I post, Tom? Lots of free time, it seems.

I can't post any code from commercial products for obvious copyright reasons (and posting code written by me would obviously not be reliable). I could tell you to look at Mozilla's code (some parts of it will make even a Netscape programmer cry), but a) I doubt you'd understand it and b) I'd get 500 posts from FF-pseudo-geeks saying I'm just a Microsoft fanboy.

So I'll just point out a couple of things that you can investigate in whichever software package you want:

1. How much of the code was written in a high-level language and how much was written in a low-level language (i.e., assembly)? How much of it is actually script-driven and not compiled at all? Compare that with software written 10 or 20 years ago.

2. Compare the relative speed of current software to equivalent software written 10 or 20 years ago, and divide it by the raw performance difference of current CPUs, relative to 10- or 20-year old CPUs. Even without taking superscalar designs, out-of-order optimizations, and compiler improvements into account, things should be at least 200 times faster today.

3. Remember when John Carmack seemed to be the only person on Earth capable of writing a 3D engine with decent (i.e., over 30) fps? Then all of a sudden id's 3D engines were delivering over 200 fps, and all the other (previously unplayable) engines were delivering 50 or 60. Do you think Carmack (and everyone else) suddenly got 8x better at coding? Since 200 fps don't give any obvious advantage over 60 (most LCDs are limited to 60 fps anyway), suddenly any 3D engine capable of reaching that minimum level of quality on modern - much faster - hardware was indistinguishable from the good ones. Result? The proliferation of poorly optimized engines.

I could give you more examples, but my original statement is pretty obvious (faster hardware disguises poor coding and makes previously "unacceptable" software acceptable - duh!), and I don't think you actually wanted a real answer anyway...


RE: Good News
By TomZ on 9/18/2007 10:53:16 PM , Rating: 2
I tried to address your post with my unfortunately lengthy reply below, but I'm out of time, so I'll just give you a few thoughts.

First, the current percentage of assembly in the typical software package today is probably nearly zero, as it should be for reasons I describe in my other post.

Regarding your 200X claim, you are failing to take into account the principle of diminishing marginal returns, plus the additional functionality that's been added. For example, how much faster do you expect Word to be, today compared to 10 years ago?

Regarding FPS engines, I can't really address that, since I don't know much about them. I agree, however, that good optimization is probably needed there.

Finally, yes, I did want a real answer, since I wanted to discuss the old "bloatware" stereotype which non-software specialists perceive exist.


RE: Good News
By Justin Case on 9/19/2007 2:51:03 AM , Rating: 2
Assembly optimizations for a word processor would be nonsensical, of course. The speed at which MS Word runs is irrelevant; it is hardly a CPU-intensive application, is it?

Take, for example, operating systems, drivers and services. While the speed of modern CPUs is enough to make any individual component seem "fast enough", even when written entirely in a high-level language with tons of layers of unnecessary red tape, the same is not true when you put everything together and try to actually run an OS with 50 services and 20 drivers in a production environment. The impact from all those little "it's good enough and we can release it sooner" decisions adds up, and all of a sudden you run out of CPU cycles.

Solution? Buy a faster CPU, of course. Most people don't even understand that it's possible for two programs to do the same thing at radically different speeds on the same system; they think speed depends exclusively on the hardware.

It's not even a matter of assembly optimizations, it's a matter of programmers understanding how CPUs work. For example, in the olden days it was often necessary to multiply numbers by 320 to find the offset of a pixel on the screen (320x200 or 320x240 mode). You could do it this way:

offset = screen + (320*y) + x

Which is straightforward but (comparatively) slow. Or you could think about it for a second, notice that 320 is 256+64, that both 256 and 64 are powers of two (2^8 and 2^6), and that computers can do bit shifts and adds much faster than they can do multiplication. So, instead, you could do it like this:

offset = screen + (y<<8) + (y<<6) + x

Which looks cryptic but executes much faster. And since this is going to get executed about 64 thousand times per frame, even a small difference would have a major impact on the code's performance. It can be the difference between frames per second and seconds per frame.

Or, if you didn't mind using a bit more RAM, you could simply pre-generate a table with the offset of the first pixel of every line and then do:

offset = lineoffset[y] + x

Where lineoffset[y] is obtained simply by loading the value at address (lineoffset_table+y).
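Put together as a compilable C sketch (hypothetical code, just to make the comparison concrete; the names screen and lineoffset follow the examples above, and a 320x200 8-bit framebuffer is assumed):

/* Three ways to compute the address of pixel (x, y) in a 320x200 byte buffer. */
#include <stddef.h>

#define WIDTH  320
#define HEIGHT 200

static unsigned char screen[WIDTH * HEIGHT];
static size_t lineoffset[HEIGHT];              /* pre-computed row offsets */

/* 1. Straight multiply. */
static size_t offset_mul(unsigned x, unsigned y)
{
    return (size_t)WIDTH * y + x;
}

/* 2. Shift-and-add: 320*y == (y<<8) + (y<<6), since 320 = 256 + 64. */
static size_t offset_shift(unsigned x, unsigned y)
{
    return ((size_t)y << 8) + ((size_t)y << 6) + x;
}

/* 3. Table lookup: trade a little RAM for the arithmetic. */
static void init_lineoffset(void)
{
    for (unsigned y = 0; y < HEIGHT; y++)
        lineoffset[y] = (size_t)WIDTH * y;
}

static size_t offset_table(unsigned x, unsigned y)
{
    return lineoffset[y] + x;
}

A plot routine would then just do something like screen[offset_table(x, y)] = color.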

Nowadays, even if you use the multiplication, the code will run fast enough to let you fill a high definition screen hundreds of times per second. But when you add physics, AI, and so on (plus a ton of background services and drivers), suddenly you reach point where some CPUs won't be able to handle it. And they would have handled it just fine if the programmer had been a bit smarter and a bit more "in touch" with the machine's architecture.

Of course, all of the above would be done on the GPU, these days, but you get the point.

The benefits of knowing assembly (and knowing the architecture you're working with) aren't limited to actually coding in assembly.


RE: Good News
By boogle on 9/19/2007 5:13:51 AM , Rating: 3
Just to throw some more wood on the fire: shifting is no longer necessary, since simple mathematical operations execute in a single cycle now; the two shifts can potentially be slower than the original multiplication. The lookup table could potentially be slower than both methods if the array is in system memory; the latency of system memory, as we know, is astronomical.

I agree with you in principle on your optimisations, but premature optimisations just make for difficult-to-read code that isn't necessarily faster. IMO it's better to write clean, correct code, and then, when you've got most of your functionality in place, profile the app. Sadly, in current development it seems that as soon as the application 'works' it's released. Instead, a significant amount of time should be expended profiling and debugging.
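As a rough illustration of "profile, don't guess", here's a hypothetical timing sketch in C. It isn't from any real project, and the numbers it prints will vary wildly by compiler, flags and CPU -- which is exactly the point.

/* Crude micro-benchmark: multiply vs. shift-and-add for the 320*y offset.
 * The volatile qualifiers stop the compiler from folding the loops away. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    volatile unsigned y = 123;
    volatile unsigned sink = 0;
    const long iters = 100000000L;
    clock_t t0, t1;

    t0 = clock();
    for (long i = 0; i < iters; i++)
        sink = 320u * y;                 /* plain multiply */
    t1 = clock();
    printf("mul:       %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    t0 = clock();
    for (long i = 0; i < iters; i++)
        sink = (y << 8) + (y << 6);      /* shift-and-add version */
    t1 = clock();
    printf("shift+add: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    return 0;
}

On most modern machines the two come out within noise of each other, which is why a real profile of the actual application beats intuition about individual instructions.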

Even Carmack's engines aren't 100% assembly; assembly isn't necessarily a golden key. I've seen compiler-produced code outperform hand assembly simply because the compiler sees the 'bigger picture' and knows more about the underlying hardware. The important inner loops were assembly, but the game code itself was mostly C. The Quake engines are all gloriously open source, so feel free to have a look.

Should more time be spent optimising? Sure! But then again - does it really matter? Computers are so cheap now that I can buy 3 or 4 for the price of a single PC 15 years ago. I can also tell you that they'll all be faster with current hardware than that 15-yr-old PC was with its current hardware.

I would rather have a nice selection of useful moderately-performing software than a small selection of super-fast software that doesn't do everything I want it to.


RE: Good News
By boogle on 9/19/2007 5:16:35 AM , Rating: 2
Sorry I meant current software not current hardware.


RE: Good News
By Justin Case on 9/19/2007 1:22:05 PM , Rating: 2
Yes, I know muls are single-cycle now (in fact, you can even do fused ops); I was using that example because (back then) it was the difference between "acceptable" and "unacceptable" software. Nowadays it's very hard to write software that's "unacceptably" slow when it's running by itself.

But as you try to do more things at the same time, you will run out of CPU power, and that extra performance you might have gained from a bit of "manual" optimization can be the difference between being able to run things on one box or having to split the load across two or more, with all the extra complexity that implies.

A lot of optimizations will come as second nature to coders who are used to them. But if they never have any incentive to "think like the machine", they'll never gain that experience. Getting it right the first time can take a lot of effort, but when you apply that knowledge hundreds of times, that effort is diluted, and the benefits outweigh it.

I never said id's engines were 100% assembly; that would have been pointless. In fact, I wrote that the issue isn't writing things in assembly (though that is certainly useful, sometimes). The issue is knowing the dirty details of the underlying architecture. In the example above I was contrasting the execution time of a multiplication (in the 386 days) with that of two shifts and one add; not the relative speed of compiled C (or whatever) and straight assembly.

I haven't done any low-level coding in 8 years or so and am not familiar (for example) with the execution times of the latest SIMD instructions or the advantage of using the new x86-64 ops. But I have seen how manually tuned implementations of some common algorithms (MD5, etc.) can be more than twice as fast as "vanilla" ones produced by modern compilers (which are very smart, but not perfect - just ask any IA64 coder ;).

I find it a bit contradictory that you say "Should more time be spent optimising? Sure!" and then "But then again - does it really matter?". If it doesn't matter, why the "Sure!"?

To me, it's partly a question of "natural selection". Evolution through natural selection only works when the unfit are culled out before they can reproduce. Once you have a system in place that makes quality irrelevant or indistinguishable, it doesn't make any sense to invest in quality any more (beyond a bare minimum). And while that might be irrelevant to some market segments (again, the fast food metaphor), the lowering of standards and lowering of competition means that the cutting edge isn't pushed forward as strongly as it could be.

There are lots of good examples of this in web standards, where a "good enough" solution is rushed out of the door by some eager vendor and ends up killing any chance of having a "perfect" solution adopted. You save a couple of weeks in the initial spec, and then lose an extra hour a week for the rest of your working life.

Programming is like sex: when you make a mistake, you spend the rest of your life supporting it. ;) In fact, it's worse than that; if some big company makes a (deliberate) compromise, everyone has to spend the rest of their lives supporting it.

P.S. - Code isn't meant to be easily readable; that's what the comments are there for. ;)


RE: Good News
By TomZ on 9/19/2007 1:35:31 PM , Rating: 3
Reading your post, I think you are expressing the view that all software should be coded to very high standards. I disagree to an extent. I believe what is more important is that the software's desired characteristics be stated up front, measured, and enforced. For example, if run-time performance (speed) is important, then that is stated as a requirement; otherwise the dev is free to save some time and not spend it measuring and optimizing performance. Similarly, if the objective is to very quickly put together a prototype that will be discarded, then there's no point in designing the code to make it maintainable.

I think you probably realize all this, but I get the impression you're focusing on just a single aspect of programming competence and ignoring the fact that different programs have different objectives in this sense. A good software engineer recognizes when different development styles should be applied.


RE: Good News
By Justin Case on 9/19/2007 3:11:39 PM , Rating: 2
These days a good software engineer is quite likely to be fired and replaced by a mediocre one that will work for less money, so when the time comes to make that decision, he's not around.

There is a place for fast food. But when (uninformed and easily manipulated) consumers don't even understand that it's possible to do better, you're on a slippery slope towards a nation of fat bastards.

So what's the solution when people become so obese they can't even get into their cars? Bigger cars, of course!


RE: Good News
By TomZ on 9/19/2007 3:33:05 PM , Rating: 2
You have a pretty simple, flat view of the world. That's all I can conclude.


RE: Good News
By Justin Case on 9/19/2007 3:36:00 PM , Rating: 2
It wouldn't be the first time you got a conclusion dead wrong.

My view of the world definitely isn't simple. It is, however, quite cynical. And that comes from having lived in it for so long.


RE: Good News
By Justin Case on 9/19/2007 2:56:23 PM , Rating: 2
Just to add something that I forgot to mention above: the concept of "optimization" obviously goes beyond coding itself and into design. Options like "do we maintain an index or do a search when necessary?", and so on, can have a huge impact on the software's scalability.

As always, faster hardware can disguise it, but as the amount of data grows you might find you're locked into an inefficient design because you decided to save a couple of weeks during development.

Back when hardware was slower, the limitations of poor design became apparent earlier, and there was a greater incentive to get things right from the start.

Again, this is related to the lack of long-term vision that most humans have (often hurting themselves in the long run in exchange for less work or some small short-term reward).

As they say, it's very hard to talk a 20 year old man into giving up beer to save the liver of a 50 year old man he's never met. The "smart" decision now might turn out to be different from the "intelligent" decision when you add it all up.


RE: Good News
By TomZ on 9/19/2007 8:53:52 AM , Rating: 3
Justin, the kinds of optimizations you describe made sense in 1990, but not today - you're living in the past. The compiler already performs these optimizations and tons more that you never heard of. It is generally impractical for application programmers to develop a deep enough knowledge of the instruction set architecture for modern CPUs in order to exceed the capabilities of today's highly optimized compilers.

Even embedded systems compilers, which have always trailed PC compilers, perform these types of optimizations already.

Actually if you write code like you describe, you might thwart the built-in optimization and actually slow your code down.
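For what it's worth, a tiny hypothetical example of what I mean (not lifted from any particular compiler's output):

/* With a modern optimizing compiler at -O2, both of these typically compile
 * down to a couple of cheap shift/add (or lea, on x86) instructions, so the
 * hand-tuned source form buys nothing and is harder to read.  If the source
 * gets convoluted enough, it can even hide information the optimizer needs. */
unsigned offset_plain(unsigned y, unsigned x) { return 320u * y + x; }
unsigned offset_tuned(unsigned y, unsigned x) { return (y << 8) + (y << 6) + x; }

Compiling the pair with -S (or running the binary through a disassembler) is the quickest way to check what your own compiler actually emits.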


RE: Good News
By Justin Case on 9/19/2007 3:46:02 PM , Rating: 2
That's because the optimization I mentioned was from 1990. Duh! That was the idea: give a real-world example of something that made a difference between "acceptable" and "unacceptable" performance.

And my whole point was that, with faster hardware, it is now harder for people to make such a direct connection between software efficiency and "acceptability", and that leads to poorer software being considered acceptable (which was my original point, that you disputed).

Now, a separate issue is whether that (a general lowering of coding and software design standards) is a bad thing or not. You seem to think it isn't (in fact, you seem to think it's a good thing), I happen to think it is.


RE: Good News
By TomZ on 9/19/2007 3:57:56 PM , Rating: 2
Your conclusion is wrong (see, I can play that game too). I'm not advocating the lowering of standards. I'm saying that you need to put in just the right level of quality in the software. If you put in too much, you are wasting resources. If you put in too little, well, you know the consequences of that.

I understand your views - I had the same views 10 years ago when I was a software engineer. But since then I've become an engineering manager, then director, and now run an engineering company. The idea of "maximize quality at all costs" is long gone in my mind, I can assure you.


RE: Good News
By Justin Case on 9/19/2007 4:44:46 PM , Rating: 2
You can try to spin it any way you want, but putting in "just the right level" of effort to meet short-term expectations, versus maximizing quality to push the envelope is "lowering the standards". That's kind of the definition of the words. One standard is lower than the other, see?

Voyager 1 was terribly over-engineered. But considering they got a lot more from it than they could possibly have hoped for at first, I bet the guys at NASA don't regret that "managerial mistake" from their predecessors.

I can understand the point of view of managers, trying to maximize their short-term profits, of course (this can be even more true with companies owned by shareholders). But that approach is nearly always bad for consumers and sometimes bad for the managers themselves. Maybe you'll figure that out given another 10 years. Or maybe not. Most people don't.


RE: Good News
By boogle on 9/20/2007 5:29:19 AM , Rating: 2
Let's put this another way: Microsoft could have spent say, 10 years ensuring Windows 3.1 was 100% stable, reliable, and 100% optimised. Let's also assume no other companies came along and released something better, cheaper (10yrs R&D isn't cheap), and faster.

We would be sitting around enjoying our lovely 16bit apps, wouldn't have nice multitasking, movie effects would still be Star Wars-esque, and so on.

Sometimes it's better to get the functionality in place. It's easier & cheaper to make a faster CPU than heavily optimise code.

Is this the right way? I'm not qualified to make that call, and neither are you. We live in a capitalist world, and in that world the consumer talks. The consumer has said they want lots of features and cool multimedia - and so that's what we have. I for one like it, since I like HD movies, lots of 3D games, and so on. I don't want to have a 100% reliable, super-slick Windows 3.1.

Even in a communist world, technology advances very quickly, beyond even what's safe. Just look at the soviet space program and their nuclear subs. But in the communist world the Lada reigned supreme due to no competition. It was poorly engineered, used components that can't be recycled, was heavily polluting, and a nasty piece of work.

If you want supremely realtime systems - work in the military. The rest of us can remain in the consumer market where functionality and price is more important than engineering.

Voyager 1's over-engineering was useful because it couldn't be repaired or replaced. In the consumer market, things can be replaced cheap, and/or patched. As I said before I can buy 3 or 4 PCs now for the price of 1 a little over a decade ago. Each of those PCs runs current software faster than the older PC could run it's current software - regardless of optimisations things are getting faster.

I can understand you're in your engineering bubble, but you need to see the bigger picture. I'm a software developer, but I always put the client's needs as my top priority, not some insistence that it's so optimised that it takes me 5 years to do a 1-month project.

I've seen optimisations come back and slow the app down, simply because modern CPUs are radically different from the old-style CPUs. Without profiling I have no idea how fast a piece of code will actually run. My strategy has changed to something quite simple: optimise loops (i.e. not nested), optimise branches (if/switch), use postfix and prefix wisely, and profile. You don't even have to be careful with floating point values any more, go nuts! The only thing slow with floats is the conversion to and from integers. A nice side effect of using floats with complex calculations is that the compiler helpfully uses SIMD instructions, dramatically increasing performance. With the upcoming Barcelona and Intel cores, unaligned SIMD instructions will run at almost full speed, meaning optimisations like this can give massive boosts to performance.
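A small sketch of the kind of loop I mean (hypothetical, and whether it actually vectorises depends entirely on the compiler, flags and target -- again, profile):

/* Element-wise multiply-add over float arrays.  The restrict qualifiers tell
 * the compiler the arrays don't overlap, which is what lets an auto-vectoriser
 * turn the loop into SIMD instructions. */
void scale_and_add(float * restrict dst, const float * restrict a,
                   const float * restrict b, float k, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = a[i] * k + b[i];        /* independent iterations */
}

That's where those 'massive' boosts would come from.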

But I say 'massive' - what is the actual impact? Going by previous advances, you're only going to get ~3% more frames. I've said it once, and I'll say it again: premature optimisation kills performance and time. PROFILING is the only thing that will tell you where things slow down. Half the time the slow areas are I/O operations that can only be 'masked' if you use multithreading and can carry on executing while the I/O operation takes its time.

Next time an app runs slow for you, check your available memory and HD light. You'll find it's not a poorly optimised application; it's a lack of RAM, or some HD thrashing. This is especially true if, as you repeatedly say, multiple apps are what slow your PC down. Windows may be multitasking, but most of the time only 1 application is actually using a lot of CPU cycles, and when that app slows down, look for a lack of RAM and/or HD activity.

I'm sure you think I'm talking rubbish. But then again - you've only said that optimisations need to be done (without having seen the code) and mentioned a few 1990s optimisations which are slower now than no optimisations. Which implies, to me at least, that optimising for long-term gains works against you.


RE: Good News
By Phynaz on 9/18/2007 9:29:25 PM , Rating: 2
He's right, Tom.

When was the last time you heard of someone hand tuning assembly code?

It was a common practice not very long ago.


RE: Good News
By Justin Case on 9/18/2007 10:23:34 PM , Rating: 2
Hell, I coded some programs (image viewers, graphics filters, even some basic AI modules) entirely in x86 (and / or 68k) assembly.

Nowadays there's just no need... until you run out of CPU power. At that point you think of how much more you could get out of the hardware if the 30+ services running on that box were as optimized as they were 10 years ago... :P


RE: Good News
By TomZ on 9/18/2007 10:46:35 PM , Rating: 5
First of all, let me state that I've been developing software for about the past 20 years, professionally for over 15 years. I've done tons of programming in assembly, lots of intricate optimization work, like you and Justin are talking about. When processors were slow (e.g., 1MHz), that kind of optimization was necessary, because it was the difference between usable and unusable software.

In another domain where I've worked a lot, embedded systems programming, all work was done in assembly. This was because these are hard real-time systems (e.g., powertrain control), so again, performance was key.

But the drawback of all that is what? That everything you wanted to do takes absolutely forever. Each little bit of functionality you wanted to add to your code took so long that you would practically do anything possible to avoid adding features to the code. And once your code is optimized, forget about making any changes again. We used to say that writing assembly code like that is like pouring concrete - you can work with it initially, but once it dries it is brittle and you can't change it in any way except to tear it all out and start again. Why is that? Because the logic is so intricate and subtle that there is no hope of any programmer ever being able to figure out how the code works without having to spend a crazy amount of time - it's faster to just start again.

So enter high-level languages. At first compilers sucked, so you saw them gain more traction on PCs, and embedded systems followed. But compilers got better, and what happened with high-level languages was that all of a sudden programmers were able to get functionality done much faster. In addition, it became easier to write code that was better structured and could therefore be more easily changed (maintained). But all these gains...at what cost? Well "efficiency" of course. Because the hand-optimized assembly code was still faster than the compilers.

Next, a few things happened over some period of time. First, compilers got better - a LOT better - they got to the point where they can produce code that is better optimized than what all but the most experienced assembly language developers could write, better than 99% of programmers could ever do. This happened because these sorts of experts got put to work writing the compilers, of course!

Another thing that happened was that the hardware machine - the processor - became much more complex. For example, look at the instruction set and registers of the 6502 and compare that to the architecture of today's x86-compatibles - the complexity is at least 100X now. So now good luck finding software engineers that have a good handle on it - again, the experts on the instruction set architecture are writing the optimizing compilers. These are the guys that know the processor the best and know how to wring every cycle of performance out of it.

Finally, the third thing that happened is that processors got fast - damn fast. That part speaks for itself, in terms of being able to further facilitate the use of high-level languages, highly-productive programmers, and writing code that focuses not just on performance but on maintainability. These are important attributes for nearly all software.

Because of all this, assembly language has gone the way of the dinosaur - thank God for that! The days of dismal productivity are gone, and now we are in a period of time when it is quick and easy to create software functionality. Embedded systems, even the most high-performance ones, are mostly programmed in C, with many even programmed in C++. That has led to a revolution in embedded systems, and has even begun to facilitate a new phase - auto-code generation based on behavioral models. On the PC, we've "spent the wealth" in terms of creating highly productive, highly secure development environments (e.g., .NET and Java) that let you write great software quickly. Software like Vista has deep support for security, again spending the wealth provided by the fast hardware. PC apps are written at a high level of abstraction, without the programmer getting burdened with rote programming chores like optimizing data structures.

So how does all this relate to the idea of bloatware? Would you say bloatware is when you have a highly-productive programmer that is able to quickly churn out high-quality functionality since he/she can work at the higher level of abstraction? You may call that bloatware, but I call that smart, profitable, and liberating.

While I agree that certain code still needs a high degree of performance (e.g., FPS game engines), you have to realize that these comprise just a tiny fraction of all the software being written today. For the bulk of software, being able to write high-quality software quickly and get that out to users is 100X more compelling than having each CPU clock cycle efficiently used.

Finally, I would say that what I am describing is not writing sloppy code. Sloppy, inefficient code does happen, but it is the exception, not the norm. No programmer sets out to write sloppy code. This kind of code happens not because of "fast hardware abuse," but because of a number of factors including management incompetence, failure to execute a performant design, changing requirements, or a lack of time to test and improve the performance of the code. But again, these problems are the exception, not the norm. We all have personal horror stories, but the majority of code is adequately well-written.

Sorry this got so long, but I hope you guys can make some sense of it all, and that it answers your questions about why we no longer write hand-tuned assembly code.


RE: Good News
By Phynaz on 9/19/2007 12:17:04 AM , Rating: 2
Very well written Tom.

But I will still disagree. 90% of the so called programmers out there are utter and complete hacks. Their code is absolute complete slop. It's covered up by the amount of machine power we have today.

We have so much cheap CPU power available that even slop runs pretty well. Hell, it's usually cheaper to throw more hardware at a performance problem than to teach somebody how to create clean code in the first place.


RE: Good News
By Justin Case on 9/19/2007 3:12:55 AM , Rating: 3
The programmers aren't necessarily bad, but software is driven more and more by marketing and release cycles. Since software companies just want to churn out "upgrades" and most consumers don't even understand that the software quality influences the speed of the system, you end up in a situation where good coding isn't rewarded (let alone mandatory), and even good coders end up creating something that is just "good enough", because they're under pressure to have it ready for Christmas (or whatever).

There are exceptions to this, in highly competitive areas like DBs, 3D rendering, and so on (then again, vendor lock-in and cost of switching does wonders), but "consumer-level" software is becoming a bit like fast food.

Ask Joseph Average what influences the speed of his computer and he'll probably say "The GHz and the RAM" (or, if he's really cultured, "The GHz, the RAM and the GPU"). Most people don't understand that different software can achieve exactly the same thing at different speeds, running on the same system. Software makers are fine with that (the less the client knows, the less demanding he'll be), and so are hardware manufacturers (your system is slow? buy a new one!).

I wonder what would happen if there was a worldwide shortage of GHz. :P


RE: Good News
By TomZ on 9/19/2007 8:55:19 AM , Rating: 2
quote:
But I will still disagree. 90% of the so called programmers out there are utter and complete hacks. Their code is absolute complete slop. It's covered up by the amount of machine power we have today.

Obviously you're entitled to your opinion, but I know a lot of professional software devs, and I don't see this type of code in my experience. But YMMV.


RE: Good News
By Justin Case on 9/19/2007 12:24:19 PM , Rating: 2
Just to add that the situation is in some ways similar to web coding.

On one end you have the geeks coding their own sites, and some of them (who don't even have a formal "education" in HTML / PHP / CSS / etc.) take a lot of pride in that and will create very elegant, efficient and 100% standards-compliant code.

On the other end you have a handful of mega-sites that absolutely must work everywhere, must be fast enough to serve millions of users, must be easy to update, etc.. Those are usually done by professionals and are also well coded (especially the backend / security / reliability aspects) and mostly standards-compliant.

And then you have everything between those extremes (including a lot of smaller corporate sites), which may or may not be done by "professionals" (meaning they make websites for a living) who may or may not have formal training, but who really don't give a crap about standards-compliance, code management or even functionality. These are the guys doing entire sites in Flash, using HTML frames and non-standard attributes instead of CSS, and so on. Since they're not working on "their" site, and since the client is typically clueless, all they care about is delivering it ASAP and moving on.

Some of the coders in this last group might actually be pretty good (maybe they even have great personal websites), but the "fast food" market segment is just not structured to reward quality, so if they invest the time and effort to make something better than barely "good enough", they'll be replaced by some guy in India (or in the US, for that matter) that can do "good enough" in less time or for less money.


RE: Good News
By TomZ on 9/19/2007 12:39:14 PM , Rating: 2
I would agree with you on that. And I would add that a lot of sites are built off of HTML and/or Flash templates as well, which doesn't exactly add to "code quality."

But on the other hand, why invest in "extra quality" if it is not needed? For example, if a small company re-writes their entire web site every 2-3 years anyway (as many do), then maybe putting something together quickly is a good strategy?


RE: Good News
By Justin Case on 9/19/2007 1:46:30 PM , Rating: 2
Why bother to get out of the oceans when life as a tadpole was perfectly alright? Some people are driven to push the envelope, I guess. Some people care more about long-term progress than about making a quick buck.

And the notion that a quick buck is always preferable (even in capitalistic terms) is simply wrong. Look at Valve software. They work on each game for 5 years, while giants like Electronic Arts churn them out sausage-factory style. And yet Valve's employees probably make a lot more money than EA's, on average, and the company now has valuable capital both in terms of staff and public perception. The quality of what you do has an effect on who you are and how you're perceived.

Standards-compliant sites are more likely to be correctly indexed by search engines. Sites that were designed with updates in mind from the start are less likely to lead to broken links and dead bookmarks, which means more people will access them in the long run. Just because people are greedy and lack long-term insight, that doesn't mean the answer to "why bother?" is "no reason". There are frequently very good reasons to bother, people just choose to ignore them because they're lazy and can't see any short-term rewards.

P.S. - Using HTML templates is actually a good thing in most cases; at least there's a chance that the original template was made by someone with a clue. If someone is using a template, chances are the alternative would detract from code quality.


RE: Good News
By TomZ on 9/19/2007 1:53:55 PM , Rating: 2
I think your argument assumes that companies have unlimited resources. Sometimes there's not enough time available to do a bang-up job on everything, and so you focus on the things that are most important.

Businesses, like people, these days are bombarded by far more demands and opportunities than they can possibly handle. Therefore it is necessary to prioritize and spend your resources where they can bring the most benefit.

If a guy putting up a web site has lots of time available, then I sure would expect him to do a great job. If instead, putting up the web site is a "side job" from his normal duties and his boss could only allocate a limited amount of time for the project, then he's going to have to cut some corners. That's practical reality.


RE: Good News
By Justin Case on 9/19/2007 2:36:15 PM , Rating: 2
I never said mediocrity wasn't dominant. Just that I'm not a fan of it, and that there are frequently long-term benefits from rising above it.


RE: Good News
By FITCamaro on 9/19/2007 10:55:39 AM , Rating: 2
Well said. I completely agree on .NET. Is it the most optimized and efficient code in the world? No. But when you can easily almost drag and drop your program together in minutes, you can't beat productivity like that.

Sure you might need some more RAM and CPU power to run it than say a C++ program that was completely hand written. But if the .NET guy can get it done in a week and the C++ guy takes 6 months because he has to hand write everything, which is better? Especially if this is just a tool written for internal use by a company.


RE: Good News
By FITCamaro on 9/19/2007 11:22:52 AM , Rating: 1
Also, the above being said, I do not think anyone should ever be taught to program with .NET. You learn nothing about actual programming with it. You need to be taught the basics with the slightly less abstracted high-level languages (C++ or Java) before you should be using a language like .NET, which abstracts nearly everything.

In college one of my funnest classes was x86 Assembly because we got to get down and dirty with the hardware. Sure it was a pain in the ass to program with it. But it was fun. The only thing that sucked (as Tom previously said) was that it was nearly impossible to change the program once it was working.


RE: Good News
By TomZ on 9/19/2007 12:08:23 PM , Rating: 2
I have to disagree with you a little there. I think there's no difference in learning C# or VB compared to C++ or Java. These languages are all about the same, more or less.

For example, suppose the task at hand is to learn how to write a stack. The code for a stack will be pretty much the same in all these four languages in terms of structure, only the syntax will differ slightly between them.
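Just to illustrate the point, here's a quick sketch of my own in C++ (the class and method names are made up for the example); the equivalent C#, VB or Java version would have the same push/pop structure, just with different syntax:

// A minimal integer stack: push/pop over a growable array with an implicit "top".
#include <stdexcept>
#include <vector>

class IntStack
{
public:
    void push(int value) { items_.push_back(value); }

    int pop()
    {
        if (items_.empty())
            throw std::runtime_error("pop from empty stack");
        const int value = items_.back();
        items_.pop_back();
        return value;
    }

    bool empty() const { return items_.empty(); }

private:
    std::vector<int> items_;   // backing storage; a raw array plus a top index works just as well
};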


RE: Good News
By Frallan on 9/20/2007 5:28:11 AM , Rating: 1
Well Tom

I agree with you from the producer's standpoint. Thank God that the optimization is no longer necessary. It took forever and it was expensive. However, from a productivity point of view with a larger scope I believe (IM-very-HO) you are wrong. What has happened is that we use more and more resources to accomplish the same thing (evolved). The issue is that when the code is not optimized from the start and it spreads over a couple of billion users, all the inefficiencies are multiplied by the number of users (Windows, for example).

Now, from the producer's point of view this is OK.
From the single-user (or few-user) view this is OK.
With a holistic view this is absolutely sick.

We create a world where we have to manufacture more to satisfy the ever-increasing demand that we ourselves have created, paying for it in time and scarce natural resources.

That doesn't mean you are wrong, but there are different perspectives to consider.

My 0.02€
/Fredrik


RE: Good News
By TomZ on 9/20/2007 8:23:58 AM , Rating: 2
Hi Fredrik,

I see your point; however, one thing that is also changing is that computing power is becoming cheaper and cheaper "per unit". In other words, computers now cost the same (or less) to own and operate as they have over this time period, and they continue to become much more powerful. Therefore the net cost of this productivity gain is pretty negligible, right?


RE: Good News
By Captain Orgazmo on 9/18/2007 9:27:50 PM , Rating: 2
I thought Pentium M was an evolution of the old Pentium Pro architecture, and that PII, PIII, and PIV developed in a separate evolutionary line.


RE: Good News
By wrong on 9/19/2007 5:54:49 AM , Rating: 3
This turns out not to be the case. The Pentium Pro is granddaddy to them all and the PII and PIII are direct descendants of it. The Pentium M is widely regarded to be a PIII descendant with a small handful of P4 features. So, yes, it is a PPro descendant, but PII and PIII are in the same line.

According to the references I can find, the P4 was a new design, and therefore only loosely related to the PPro, PII and PIII.


RE: Good News
By Justin Case on 9/19/2007 5:26:55 PM , Rating: 2
That's right, wrong. ;)

The Pentium-M is indeed a direct descendant of the Pentium-III; its design is completely different from the Pentium-4's.

I can't really think of any "features" shared by the two, though, considering how different they are. I can understand some confusion because Intel decided to call the (barely) mobile version of the P4 the "P4-M". But a 1.6 GHz Pentium-M performed better than a 2.4 GHz P4-M.

Core, in turn, is a descendant of the Pentium-M (Yonah was basically two tweaked P-M dies on one package).

Finally, Core2 is basically Core with AMD64 (er, I mean, x86-64, I mean EM64T, I mean, Intel64) support and some further tweaking (adding virtualization, etc.).

I'm sure Intel used some of what it learned with the P4 when designing Core (namely what not to do :), but Core is still considered a member of the P6 family (Pentium Pro -> PII -> PIII -> PM -> Core). Core 2 is generally not considered part of that family due to the new (64-bit) ISA and consequent design changes, but it's still a descendant of Core.


RE: Good News
By Assimilator87 on 9/18/2007 3:02:29 PM , Rating: 2
I don't see why a company would stop innovating just because there's a lack of competition. These people should be passionate about what they do. In my opinion, Intel hasn't slowed down at all since they released Core 2, even though there hasn't been a competing solution in all that time. They've done massive price cuts and are on a really speedy schedule with their future chips. I don't think Intel wants to leave any room for a possible overtake by AMD.


RE: Good News
By TomZ on 9/18/2007 5:11:58 PM , Rating: 2
quote:
I don't see why a company would stop innovating just because there's a lack of competition.

It's about cost - a high amount of innovation typically incurs a high cost. If you are going to get the market anyway, why invest the money when you don't have to?

Oh yes, I agree there should be passion, etc., but businesses don't run on passion; they run on money. Or a passion for money, or something like that.


RE: Good News
By Ratwar on 9/18/2007 7:01:00 PM , Rating: 2
While that is definitely true for most industries, I don't think it is all that important for the technology industry. Both Intel and AMD need consumers to keep buying processors in order to be profitable. If they didn't improve their processors, we'd all still be running Pentium Pros and most of us wouldn't have bought a new processor in the last 5 years. The micro-processor industry is like the car industry, they depend on consumers buying a new product before the old one is truly worn out.

Now, I will give you that we wouldn't advance as quickly, but we would still advance.


RE: Good News
By TomZ on 9/18/2007 7:52:33 PM , Rating: 2
quote:
Now, I will give you that we wouldn't advance as quickly, but we would still advance.

I agree, and I didn't mean to say that progress would stop, just that it is faster when fierce competition forces aggressive R&D schedules and investments.


RE: Good News
By Justin Case on 9/19/2007 5:44:03 PM , Rating: 2
> While that is definitely true for most industries,
> I don't think it is all that important for the
> technology industry.


What you are saying applies to all industries (as your example with cars shows; plus the fact that CPUs generally won't "wear out" at all).

Without competition, Intel's new CPUs would essentially be competing against its own (older) chips. Now let's say they made a huge breakthrough and managed to go from 2 GHz to 10 GHz. They would simply slow down research and spend the next decade releasing models that were 500 MHz faster than the previous one. Why let your clients upgrade once when you can make them upgrade 10 or 20 times?

But their mistake was to think they could keep AMD down with the power of marketing alone. They probably didn't count on K8 being as good as it turned out, and they (the marketing and management guys, at least) were hoping they could push the GHz madness on forever. The engineers knew that wasn't possible, but apparently Intel forgot it was a technology company.

In the long run, it was a victory for consumers, because it allowed AMD the time to grow and get a foothold in some new markets (in a perfect world the market share of Intel and AMD would be 50-50, or 33-33-33 with Transmeta, for example), which increases competition, lowers prices and speeds up innovation.

Also, the hardware industry relies a lot on "heavier" software being released (e.g., by Microsoft) to force people to upgrade. A 100 MHz 486 would still make a perfect "office" computer today, but not when you load it with Vista.


RE: Good News
By Justin Case on 9/18/2007 9:27:16 PM , Rating: 3
> I don't see why a company would stop innovating
> just because there's a lack of competition.


In one word: shareholders. If a company is spending more on R&D than what it needs to maximize its profits, then it's not running its business in the shareholders' best interests.

These people are indeed passionate about what they make, but the job of Intel's directors is to make money. The engineers just (try to) do what they're told (frequently against their better judgement).


RE: Good News
By Treckin on 9/18/2007 9:59:12 PM , Rating: 2
That 'possible overtake by AMD' would be called competition...
Jesus fucking christ that post was contradictory... I had to read it twice to make sure...


XP Demoed
By AmberClad on 9/18/2007 1:42:53 PM , Rating: 2
Any particular reason they chose to demo XP instead of Vista?




RE: XP Demoed
By Master Kenobi (blog) on 9/18/2007 1:49:19 PM , Rating: 2
Probably because XP is still the standard (The Present) whereas Vista is the up-and-coming standard (The Future).


RE: XP Demoed
By MonkeyPaw on 9/18/2007 2:39:47 PM , Rating: 3
That doesn't really make sense since Core2 is present and Nehalem is the future. Why run your future CPU on yesterday's OS?

It could be that Nehalem just couldn't run Vista well enough for Intel to confidently demo it. XP and OSX are much lighter-duty and more established than Vista. It's just like with OCing--getting a system to post doesn't mean it will boot to Windows, and booting to Windows doesn't mean it will handle 100% load. I'm not taking a shot at the CPU or Intel either, as it's nice to see an early sample running an operating system already.


RE: XP Demoed
By melgross on 9/18/07, Rating: 0
RE: XP Demoed
By MonkeyPaw on 9/18/2007 8:16:00 PM , Rating: 2
No, it's not a put-down. OSX has been around since the sub-1.0GHz G4 days. Sure, Apple has improved the GUI over time, but any recently produced x86 CPU should be absolute overkill. Also, since OSX has been around a long time, it has a very stable and tested codebase. Vista is simply more demanding of hardware than XP or OSX--if you see that as an insult or as a good thing, well, that's up to you.


RE: XP Demoed
By jeromekwok on 9/18/2007 10:17:38 PM , Rating: 2
Apparently Intel has not ported the CPU validation tools to Vista.


RE: XP Demoed
By TomZ on 9/18/2007 2:46:27 PM , Rating: 2
Here, I corrected that for you: XP is the past and Vista is the present and future.

(At least for many of us early-adopter types.)

Yeah, I know corporate America is just upgrading to XP now... I'm just having a little fun.


RE: XP Demoed
By colonelclaw on 9/19/2007 3:58:45 AM , Rating: 1
Maybe you can answer me a question about Vista then?
Why should an OS have such huge system demands? I work as a 3D animator and we require every last drop of CPU and memory power to be used solely by our modelling app. Vista appears to me to hog most of these for itself - therefore I would never consider using it (we use XP64 exclusively on 30 machines).

It seems a bit strange to me that any OS should have such high hardware demands; personally I just want an OS to run my application and do as little else as possible.


RE: XP Demoed
By TomZ on 9/19/2007 8:48:57 AM , Rating: 2
The only area where I find that Vista requires more resources is in memory. I would recommend 1.5-2X the memory compared to XP. Vista is tuned to deliver better performance by using more memory, e.g., for caching. I haven't noticed Vista requiring any more CPU than XP, nor have I ever noticed Vista running apps any slower than XP.

One thing to consider about the OS - a lot of the functionality that your app performs is handled by the OS. For example, requesting memory, drawing to the display, opening a file - these all require a lot of OS usage.
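As a rough sketch (my own example, not tied to any particular app), even a trivial program leans on the OS for every one of those steps - the allocation, the file open and the write below all end up as system calls serviced by the operating system:

// Each line relies on the OS: allocating memory, opening a file, and writing to it
// all end up as operating-system calls underneath the standard library.
#include <fstream>
#include <vector>

int main()
{
    std::vector<char> buffer(64 * 1024, 'x');   // memory request -> OS allocator underneath
    std::ofstream out("demo.txt");              // file open      -> an OS file-system call
    out.write(buffer.data(),                    // write          -> an OS I/O call
              static_cast<std::streamsize>(buffer.size()));
    return 0;                                   // close/cleanup  -> more OS work on exit
}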


RE: XP Demoed
By FITCamaro on 9/19/2007 11:14:49 AM , Rating: 2
Exactly. Vista needs memory, not CPU. OSX though also likes to cache things. I mean, you look at these handheld PCs with 1GHz dual cores and they seem to be running Vista fine. Granted, they only have about 1GB of RAM, so they're still on the edge of acceptable hardware. But it's a handheld computer system running the latest OS. It's not going to be the fastest thing in the world.


RE: XP Demoed
By gescom on 9/19/2007 2:49:51 PM , Rating: 2
30x Windows XP for 3D software? Why not Red Hat/Fedora? Weird.


RE: XP Demoed
By SavagePotato on 9/19/2007 5:11:19 PM , Rating: 1
Sounds like the standard misinformed opinion on Vista. The OS requires more RAM for all the bells and whistles such as the transparent Aero interface.

What people don't seem to realize is that Aero does not have to be turned on. Nor does UAC, nor is Vista using up any more of your CPU than XP is. In fact, Vista by itself runs faster and the interface is more efficient with less overhead than XP's.

The fact that all of the software on the market is written and optimized for Windows XP means that it runs faster on XP. This doesn't mean that Vista is slow; it means Vista is new. XP was in the same boat and has nearly 6 years of maturing and optimization under its belt to become the stable OS it is today.

People whined and sniveled just as hard, to no end, about how "bloated" XP was compared to 98. Now those same people love XP and are doing the same to Vista.

Open your eyes to progress I say.


RE: XP Demoed
By TomZ on 9/19/2007 5:20:44 PM , Rating: 2
I agree with your theme, but I don't agree that most software is optimized for Windows XP. I can't think of how an app developer would do this, because the APIs that you program against are the same on XP and Vista, with the exception of new stuff in Vista.


RE: XP Demoed
By SavagePotato on 9/20/2007 9:57:17 AM , Rating: 1
Well, I would wager that 3D apps like Maya, or other high-end programs that run at like 10% of what they run at in XP, aren't doing it because Vista is just so darn slow.

I am not a programmer, but I do have a good memory. I remember the birth of XP, when apps performed slower in XP than they did in 98, which became the rallying cry of the XP detractors. It wasn't long before apps in XP outperformed 98.


RE: XP Demoed
By TomZ on 9/20/2007 11:45:20 AM , Rating: 2
Between 98 and XP there was a huge difference - it was basically a different operating system, since XP was based on the NT kernel and had much more process and hardware protection than 98.

I would guess if you are seeing apps with a 10X performance difference between XP and Vista, that would be an indication of something wrong, e.g., a compatibility issue.


RE: XP Demoed
By SavagePotato on 9/20/2007 12:03:46 PM , Rating: 1
Which is exactly what I was referring to. Current apps are written for, and tested on, XP. Therefore the performance disparity is not that Vista is slower but that the software is not performing properly on Vista.


RE: XP Demoed
By TomZ on 9/20/2007 12:18:24 PM , Rating: 2
I see, we're just understanding the terminology differently. When you said "optimized" I figured you meant that they spent extra time making it run fast on XP, but you actually meant "compatibility tested." In other words, the app in question was compatibility tested against XP but not Vista, and apparently there are some issues running on Vista. Sounds like we're on the same page!


RE: XP Demoed
By Chocobollz on 9/23/2007 9:12:12 AM , Rating: 2
Well, that depends on where the progress is headed. If it's headed toward bad things, then I won't go along with it; if it's for good things, then I'll surely go for it. Criticism is surely needed to make things better, so please don't judge people as misinformed or whining, thank you. xD~


RE: XP Demoed
By TomZ on 9/18/2007 2:14:54 PM , Rating: 2
Probably because they didn't have a Vista driver available.

(Kidding!)


RE: XP Demoed
By KristopherKubicki (blog) on 9/18/2007 3:15:36 PM , Rating: 2
Or, more likely, because it doesn't work on Vista. :)


RE: XP Demoed
By melgross on 9/18/07, Rating: 0
RE: XP Demoed
By TomZ on 9/18/2007 8:07:46 PM , Rating: 2
Most things work with Vista.


RE: XP Demoed
By GeorgeOrwell on 9/18/2007 8:35:06 PM , Rating: 1
XP uploads less information to Microsoft than Vista does.


RE: XP Demoed
By TomZ on 9/18/2007 8:57:36 PM , Rating: 2
Yea, that is one bad thing about XP. Vista is cool in how it checks for problems proactively and can query Microsoft servers for relevant software updates that might solve problems. But I suppose XP lovers wouldn't have any need for new features like that, since XP is the most perfect operating system ever released.


RE: XP Demoed
By Treckin on 9/18/2007 10:04:07 PM , Rating: 2
ha. That was pretty good.

I find more often than not that people bashing vista have never used it...

It's far better than XP, and I didn't and wouldn't have believed it until my new laptop came with Home Premium. I was planning on wiping it and installing XP, but after fucking with it for only 2-3 hours, I fell in love.

I fucking HATE using XP now... it seems so archaic and worthless, almost like going back to Win98 (ok, not THAT worthless).


RE: XP Demoed
By jay2o01 on 9/18/2007 10:30:02 PM , Rating: 1
Perhaps the CPU experience index score in Vista was lower than 5.9? Maybe Intel was just embarrassed...


RE: XP Demoed
By Bluestealth on 9/19/2007 1:50:50 AM , Rating: 1
Ummm... I use Linux, XP, and Vista 64 bit daily... and I am growing to hate Vista.

It's SLOW! Especially on my laptop... time to sleep is too long, time to wake up is too long, time to log in after a restart is too long... the login one really gets to me... XP and Linux snap me to the login screen within several seconds of booting.

There are also some small things, like Vista sleeping after I tell my system to shut down and close the lid, which causes a rather time-wasting issue.

Having to restart for OS updates... annoying... having ATI's drivers ask me to restart... infuriating... configuring updates, need I say more?

I don't find XP to be archaic in comparison to Vista, mostly because Vista is a stone's throw ahead of XP. Or maybe it is just that I don't use XP's stock cartoon-like theme?

You know when I restart Linux on the same laptop?...
When AMD's shitty driver causes X to have an unrecoverable crash... although that's not to say that X itself isn't crappy either :P

Since AMD is releasing documentation and making their Linux driver support AIGLX, things are looking up. It also seems work is underway to drag X into the modern age.

For the record, I am using an HP 6910p (RM231UT) laptop with 64-bit Linux and Vista, and a C2D desktop with 32-bit XP and 64-bit Linux.


RE: XP Demoed
By Master Kenobi (blog) on 9/19/2007 7:27:56 AM , Rating: 2
Have to say I don't have that problem with my 64-bit Vista desktop and 32-bit Vista laptop. Just what did you do to your poor Vista system?

OS Abuse, call social services!!!


RE: XP Demoed
By Bluestealth on 9/19/2007 4:19:23 PM , Rating: 2
Actually, I took out everything I didn't use with vLite, and only reinstalled drivers + HP's security software.

It has Switcher (a Flip3D replacement), 7-Zip, Mozilla Firefox, Thunderbird, PowerDVD7, Microsoft Office 2003, Adobe Photoshop CS2, Adobe Reader 8, the Java SDK and Nero 6.

There is almost nothing on my Vista partition of my laptop.
I agree with the other poster that it may be driver issues. The hard drive is a Seagate Momentus 7200.1, which comes with the laptop.
Maybe most people have 2 gigs of RAM in their laptop?


RE: XP Demoed
By TomZ on 9/19/2007 11:45:07 AM , Rating: 2
I would guess you are having some driver problems with your laptop. I have a pretty new laptop - Inspiron 1721 - and Vista sleeps and resumes very quickly - like maybe 3-5 seconds.

Yes, restarts are annoying, but that's not exactly a Vista thing. The issue is that you are updating code that is running live in the OS. I don't know how that can be avoided, and that is probably the downside of using Windows Update.


RE: XP Demoed
By darkpaw on 9/19/2007 3:26:00 PM , Rating: 2
I think the start-up/shut-down times are directly related to the HDD speed. I've been running Vista since beta 1 or so on my custom systems, but just recently got a new laptop that had a Vista preinstall. The laptop only had a 5400 RPM drive though (it was free, so I couldn't complain) and it's a damn dog on boot and resume.

I know the first change I'll be making on that laptop is an upgrade to a 7200 RPM drive. My previous laptop even managed to boot Vista faster with half the memory and a much slower proc.


RE: XP Demoed
By TomZ on 9/19/2007 3:31:17 PM , Rating: 2
My Inspiron only has a 5400RPM drive, so I would think that is not the only factor.


RE: XP Demoed
By SavagePotato on 9/20/07, Rating: 0
RE: XP Demoed
By Chocobollz on 9/23/2007 9:50:09 AM , Rating: 2
I'm pretty sure you'd take back what you're saying if you had a pretty old and often-judged-as-crap Pentium 4 PC like me. Remember that not all people in this world can afford a laptop with a C4D processor + BRD-Wraither + ATI(God)Daamnit Radeon Mobility X1900 XTXXX Extreme Edition Volleyball Uncensored like yours.

If you want to know why those people are bashing Vista, please buy a cheaper computer (or borrow one from your friends) and install it there. Then you'll start to think like those people you're referring to as bashing it.

Cheers. ^ ^;


RE: XP Demoed
By SavagePotato on 9/24/2007 2:50:10 PM , Rating: 2
Why are you installing Vista if you have a system that can't handle it? You aren't going to be running DirectX 10 games, and you don't have more than 3 gigs of RAM. Where's the mandate that you have to run Vista on an aging computer?

The same thing would happen if you tried to upgrade a win98 dinosaur to XP.

The computer I'm sitting at right now cost just over $500; it's one of the computers at work.

Onboard video, 1 gig of RAM, an Athlon64 3500+ single core. It's running Vista Basic and it's running it fabulously. That is a low, low, low-end computer by the standards of anything you will buy today.

If you want to install a brand new OS on a 5-year-old paperweight, honestly, why waste your money? If you consider a $500 computer to be too "big bucks" for you, then I'm at a loss for what to tell you.


RE: XP Demoed
By Frallan on 9/20/2007 7:31:46 AM , Rating: 2
Bad driver support? ;0)


Taped-out?
By troublesome08 on 9/18/2007 1:39:59 PM , Rating: 2
Sorry for being a n00b, but what the hell does taped out mean? Can someone explain? Thanks.




RE: Taped-out?
By jmn2519 on 9/18/2007 1:46:43 PM , Rating: 5
I think the saying goes back to the good old days when processor designs were saved off to tape and then sent to the factory floor. Basically what taped out means is that the design is done and they can start spinning samples.


RE: Taped-out?
By troublesome08 on 9/19/2007 1:52:00 AM , Rating: 2
Thanks, I kind of guessed as much, but I was curious about the specific 'taped-out' reference.


RE: Taped-out?
By theapparition on 9/19/2007 12:46:08 PM , Rating: 2
Completely wrong. See my other response below.

For further proof, "tape out" has been in use far longer in the industry than magnetic tape has even existed.


RE: Taped-out?
By Master Kenobi (blog) on 9/18/2007 1:47:21 PM , Rating: 4
In summary, it's when the design has been finalized and sent to the fab to manufacture a dry run (AKA a working prototype). The working prototype may or may not work as intended, but in this case it appears Intel has nailed it and it is working as it should - enough that they can start gearing up the fabs to begin production of these processors in the very near future.

Detailed Description stolen from Wikipedia.
quote:
In electronics, tape-out is the name of the final stage of the design of an integrated circuit such as a microprocessor, the point at which the description of a circuit is sent for manufacture. A modern IC has to go through a long and complex design process before it is ready for tape-out. Many of the steps along the way utilize software tools collectively known as electronic design automation. Tape-out is usually a cause for celebration by everyone who worked on the project, followed by eager anticipation of an actual product returning from the manufacturing facility.


RE: Taped-out?
By Roy2001 on 9/18/2007 2:39:00 PM , Rating: 2
Tapeout is from the good old days. When you finished the design, you needed to put the data on a data tape and send it to the foundry to let them make the chip for you, since the internet wasn't available or fast enough. Nowadays you can FTP layout data to the foundry, but it is still called tapeout.


RE: Taped-out?
By theapparition on 9/19/2007 12:44:10 PM , Rating: 2
Wrong, Wrong, Wrong........the only thing you got right is that it's from the "good old days". It comes from the fact that designs were initially laid out with adhesive-backed tape. Yes, tape. Not Scotch brand, of course; it was specialized, and it was more like black pin-striping.

Back in the day, there were no CAD tools. Everything was done by hand. PC boards, ICs, etc. all had to be laid out by hand. Obviously, when something gets very small, it becomes impossible to do by hand. When something was "taped out" it meant someone (or a team) actually took black tape and laid out the circuit traces on (usually) mylar film. For complex designs like ICs, this mylar base could have been as large as 50'x50' (yes, that is feet). A high-resolution camera would then photograph the pattern, and from there, the image would be shrunk down and phototemplates made for IC manufacture. PC boards were done this way as well, although most boards could be done on a table, still using the black tape. From there, photoplots were made to be used in manufacturing. Gone are the days when you could correct a photoplot by cutting out a section, or by using a permanent marker to connect traces!

So, the term really came from laying down adhesive tape. As an aside, you can usually tell old boards that have been taped: they usually don't have angles; rather, the traces curve around. Take a look at some TVs from the '60s to see what I'm talking about.


RE: Taped-out?
By subhajit on 9/18/2007 1:51:46 PM , Rating: 3
RE: Taped-out?
By Oregonian2 on 9/18/2007 1:52:20 PM , Rating: 2
Kinda the IC version of "Making Gerbers" for circuit board design.


RE: Taped-out?
By FITCamaro on 9/19/2007 11:18:04 AM , Rating: 2
They tape the biggest geek to the flag pole in celebration of getting it working. Considering the field, they have a very big and wide flag pole....


Nice
By munky on 9/18/2007 1:06:52 PM , Rating: 2
This is the processor I'm really waiting for, without the FSB duct-tape interface. I don't know how AMD can compete with Nehalem once it launches.




RE: Nice
By GhandiInstinct on 9/18/07, Rating: 0
RE: Nice
By Master Kenobi (blog) on 9/18/2007 1:12:46 PM , Rating: 2
They were the guys that bought ATI and ATI's infamous "paper launch". I'm going to rant a little here and say that AMD seems to have picked up that annoying paper launch habit when it bought ATI. Oh well, maybe Phenom will impress if we see them in the channel before New Year's.


RE: Nice
By System48 on 9/18/2007 1:15:39 PM , Rating: 2
I'd have to agree. If the FSB is really what's holding back the 4S+ servers for Intel then Nehalem should be of great concern to AMD. I love what Intel has done now with Nehalem and Penryn, first A1 silicon up and running right out of the gate. AMD's production abilities are almost a joke in comparison.

/sarcasm
Why no 3Dmark06 numbers?


RE: Nice
By cheburashka on 9/18/2007 3:35:38 PM , Rating: 2
First Si is actually called A0.


RE: Nice
By Master Kenobi (blog) on 9/19/2007 7:31:05 AM , Rating: 2
Indeed, in the mathematical world it starts at 0 not 1.


RE: Nice
By rninneman on 9/18/2007 2:41:25 PM , Rating: 2
I know, the "FSB duct-tape" on my OCed C2D is really holding me back from destroying any AMD CPU on any benchmark.

/Sarcasm

I thought FSB issue had been settled long ago. Unless we are talking about 4S+ servers or a few bandwidth intensive server apps, the remaining 99% of the world cannot saturate the FSB.


RE: Nice
By weskurtz0081 on 9/18/2007 5:07:40 PM , Rating: 2
As the core count goes up it might be able to saturate the bus. Who knows, we would have to have an octo-core CPU on a bus to see for sure. Or, we could run a Kentsfield on half of its current bus. At any rate.... yeah, they do need it for the server space.


RE: Nice
By JumpingJack on 9/19/2007 1:14:47 AM , Rating: 2
Done that, no effect.

DT applications use quite a small portion of the available BW --- look, for example, at the Kentsfield 1067 vs 1333 MHz FSB reviews -- no real impact. Or take a look at the AM2 introduction: DDR2 for AMD brought a 30% memory BW improvement over 939, yet hardly noticeable performance gains.


Gee, that sounds familiar!
By Goty on 9/18/2007 11:51:58 PM , Rating: 2
quote:
"We wanted to build the highest performance per core that could be used in notebooks all the way to high end servers," stated Hinton.


Gosh, that sounds a lot like what that other CPU company has been doing for the past four years!




RE: Gee, that sounds familiar!
By Phynaz on 9/19/2007 12:19:10 AM , Rating: 2
Which other cpu company would that be?


RE: Gee, that sounds familiar!
By JumpingJack on 9/19/2007 1:01:55 AM , Rating: 2
quote:
Gosh, that sounds a lot like what that other CPU company has been doing for the past four years!


Actually, no .... the K8, also known by the codename Hammer, was actually designed specifically for servers, then moved into desktop.

It was not the optimal design for notebooks; as such, AMD had a difficult time, until recently, pushing into the notebook space. First they tried a down-clocked Athlon and Sempron, then came along with a modified K8 core called the Turion (a worthy product).

Ironically, it was Intel who first attempted to segment their designs into the 3 primary markets, each getting its own unique flavor via a different architectural approach... one for mobile, another for desktop, and yet a different one for server. When it was clear that Itanium was going to fail in the marketplace, Intel pushed forward with x86 servers, ultimately adopting AMD's 64-bit implementation methods. This is in contrast to AMD, who tried to unify around one architecture, the K8 -- great for server, and adequate (frankly, performance-leading over NetBurst) for desktop, but not so good for mobile.

Fast forward to today -- the irony -- it is now Intel unifying the architecture, this time from mobile upward into server, and AMD appearing to split their architecture (Griffin will be K8-derived and not based on Barcelona, which will cover desktop and server).

So generally, yeah, design the best you can... but the approaches are not similar, and in fact each appears to have swapped places in the 'unified architecture' philosophy and in the design methodology.


I D F-Sept 18 2007
By crystal clear on 9/18/2007 1:28:30 PM , Rating: 2

INTEL DEVELOPER FORUM, San Francisco, Sept. 18, 2007 – Intel Corporation President and CEO Paul Otellini today outlined new products, chip designs and manufacturing technologies that will enable the company to continue its quickened pace of product and technology leadership.

Speaking to industry leaders, developers and industry watchers at the Intel Developer Forum (IDF), Otellini showed the industry's first working chips built using 32 nanometer (nm) technology, with transistors so small that more than 4 million of them could fit on the period at the end of this sentence. Intel's 32nm process technology is on track to begin production in 2009.



also-

Looking to 2008, Otellini made the first public demonstration of Intel's Nehalem processor and said the company is on track to deliver the new processor design in the second half of the year. The Nehalem architecture will extend Intel's leadership in performance and performance-per-watt benchmarks, and will be the first Intel processor to use the QuickPath Interconnect system architecture. Quickpath will include integrated memory controller technology and improved communication links between system components to significantly improve overall system performance.


http://www.intel.com/pressroom/archive/releases/20...

http://www.intel.com/pressroom/archive/releases/20...




RE: I D F-Sept 18 2007
By crystal clear on 9/18/2007 1:42:59 PM , Rating: 2
Moore's law
By lompocus on 9/20/2007 1:28:53 AM , Rating: 1
Wow, we were just having an article about how Moore's Law just died, and then we have an article about a severalfold increase in 2 years.

Back then, 2 years ago, a X6800 was all the rage, 2 cores of 3 GHz goodness. 2 threads.

2 years later, we have 8 cores, 16 threads, 4 GHz goodness.




RE: Moore's law
By Dactyl on 9/21/2007 3:54:54 PM , Rating: 2
That's not true. The article said Moore's law was doomed, and would stop in about 15 years. It did not say that Moore's law was "dead."

In any event, the extra cores on Nehalem are not due to a process shrink. Nehalem, at 45nm, is on the same process as Penryn. Twice as many cores can be fit onto a similar-size die because Nehalem has less cache. It doesn't need the cache because it has an integrated memory controller. That has nothing to do with Moore's law.


I don't get it
By ElFenix on 9/19/2007 1:56:22 PM , Rating: 2
quote:
Intel's largest architecture overhaul in decades is less than a year away

How is this the largest overhaul? In physical terms I suppose it is, but then every new processor seems to have gobs more transistors than its predecessor (and the 'in decades' part wouldn't be necessary if that were the reference). AFAIK this is yet another processor in the Pentium Pro line (which really seems to be the ideal architecture for general-purpose computing; the K7 isn't really that much different). Am I wrong?




Cool name, same game
By AlphaVirus on 9/19/2007 3:43:18 PM , Rating: 2
Nehalem is such a cool name, but honestly I would like to see Intel and AMD stop adding more cores and target updates on the cores we currently have.
We keep getting word that the extra cores will sleep/disable when not needed, but I have a feeling that most of the cores will be doing this the majority of the time.
I think the Bloomfield should only be produced with 2-4 cores, considering 8 cores will never be used by the desktop market. The Gainestown chip should be produced with 4-8 cores since it can make use of all available cores.

Overall it's a good design, but I just get tired of the same thing coming out of such a powerhouse.




Nehale? Nehalem!
By Anonymous Freak on 9/20/2007 12:17:41 AM , Rating: 2
quote:
Intel's Paul Otellini holding up a "Nehale" wafer


"Nehale"?? I hope that was just a typo. Nehalem, like most Intel CPUs, is codenamed after a river on the West Coast of the US, Southwest US, or Israel. (They're three development centers.) Nehalem is the singular and plural name for many things; with the Intel codename coming from a river on the Northern coast of Oregon. (I go camping at the Nehalem Bay State Park that the river runs through once or twice a year.) The river is, in turn, named after a tribe of American Indians, which were also called the Tillamook (which was another Intel code name, and is another city farther South, with another namesake river.)

http://en.wikipedia.org/wiki/Nehalem




Cache Thrashing
By scrapsma54 on 9/20/2007 2:02:39 PM , Rating: 2
If 2 execution threads are enough to take a 32MB chunk out of memory, then what will 16 cores do if Intel doesn't come up with a solution to cache thrashing? Nehalem will be awesome, but if you resurrect a technology that was nowhere near as efficient as actual dual-processor systems and improve on it, don't call it Hyper-Threading. Call it Hyper-Threading 2 or something that distinguishes it from its predecessor.




lol!
By thartist on 9/21/2007 3:13:33 PM , Rating: 2
"I AM NEHALEM, YOUR WORLD WILL NOW BE MINE! SURRENDER!"




"So if you want to save the planet, feel free to drive your Hummer. Just avoid the drive thru line at McDonalds." -- Michael Asher

Related Articles
Intel Sets Official "Penryn" Launch Date
September 18, 2007, 1:17 PM
Intel 45nm "Penryn" Tape-Out Runs Windows
January 10, 2007, 2:13 AM













botimage
Copyright 2015 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki