

Gordon Moore's prediction of doubling transistor counts every 2 years revolutionized the computer industry and his company, Intel.  (Source: New York Times)

An NVIDIA VP is declaring Moore's Law dead and GPUs the only hope for the industry.  (Source: TechCrunch)
In NVIDIA's eyes, the parallelism of the GPU is the only future for computing

NVIDIA has struggled this time around in the GPU war.  Its first DirectX 11 products were delivered a full seven months after AMD's.  While its new units are at last trickling onto the market and are very powerful, they are also hot, loud, and power-hungry.  However, NVIDIA is staking much on the prediction that the computer industry will ditch traditional architectures and move toward parallel designs; a shift for which it sees its CUDA GPU-computing platform as an ideal solution.

Intel and NVIDIA have long traded jabs, and Intel's recent failed GPU bid,
Larrabee, has done little to thaw the ice.  In a recent op-ed entitled "Life After Moore's Law", published in Forbes, NVIDIA VP Bill Dally attacks the very foundation of Intel's business -- Moore's Law -- declaring it dead.

Moore's Law stemmed from a paper [PDF] published by Gordon Moore 45 years ago this month.  Moore, co-founder of Intel, predicted in the paper that the number of transistors per area on a circuit would double every 2 years (later revised to 18 months).  This prediction was later extended to say that computing power would roughly double every 18 months, a prediction that became known as Moore's Law.

Now with die shrinks becoming more problematic, NVIDIA is convinced the end is nigh for Moore's Law (and Intel).  Writes Dally:

Moore's paper also contained another prediction that has received far less attention over the years. He projected that the amount of energy consumed by each unit of computing would decrease as the number of transistors increased. This enabled computing performance to scale up while the electrical power consumed remained constant. This power scaling, in addition to transistor scaling, is needed to scale CPU performance.
But in a development that's been largely overlooked, this power scaling has ended. And as a result, the CPU scaling predicted by Moore's Law is now dead. CPU performance no longer doubles every 18 months. And that poses a grave threat to the many industries that rely on the historic growth in computing performance.

Dally says that the only near-term hope for the computer industry now that Moore's Law is "over" is parallel computing -- splitting workloads up among a variety of processors.  However, he derides multi-core efforts by AMD and Intel, stating, "Building a parallel computer by connecting two to 12 conventional CPUs optimized for serial performance, an approach often called multi-core, will not work. This approach is analogous to trying to build an airplane by putting wings on a train. Conventional serial CPUs are simply too heavy (consume too much energy per instruction) to fly on parallel programs and to continue historic scaling of performance."

He concludes, "Let's enable the future of computing to fly--not rumble along on trains with wings."

In other words, he hopes you will buy NVIDIA GPUs and join the "Moore's Law is dead" party.



By Talon75 on 5/3/2010 10:46:16 AM , Rating: 4
That is a rather bold statement. While it is quite possible that he has a good point, going after Intel and AMD on this takes some brass ones...

RE: Interesting....
By spread on 5/3/2010 10:50:00 AM , Rating: 5
Don't worry. Nvidia's going to open up another can of whoop ass.

RE: Interesting....
By kattanna on 5/3/2010 11:07:31 AM , Rating: 5
but doesn't opening said can of whoop ass on yourself defeat the point?

RE: Interesting....
By gamerk2 on 5/3/2010 11:27:02 AM , Rating: 5
To be fair, the conclusion is probably correct. CPUs are serial-process oriented, and are not designed to handle parallel workloads.

What NVIDIA failed to mention is that for heavily serial tasks, GPUs also fall far short of the mark, as most of the computing resources available end up going to waste. [Hence why rasterization is done on GPUs: each pixel's computations are fully independent from the rest, so a massively parallel structure makes far more sense.]

I see a general trend toward more specialized chips in the near future; we're already seeing the trend toward a separate physics processor (which will be massively parallel once multiple-object interactions become the norm, but remains VERY process heavy, making GPUs not the best option for computation...).

RE: Interesting....
By Aenslead on 5/3/10, Rating: -1
RE: Interesting....
By adiposity on 5/3/2010 12:28:34 PM , Rating: 2
Didn't it used to be nVidia? I guess that's just their logo.

RE: Interesting....
By jonmcc33 on 5/3/2010 12:33:03 PM , Rating: 2
It was always nVIDIA. Just look at the dies on their GPU. Lowercase "n" and the rest is capitalized.

RE: Interesting....
By adiposity on 5/3/2010 3:41:27 PM , Rating: 3
I wouldn't say "always" since it's now apparently NVIDIA (just check their website, copyrights, etc.)

RE: Interesting....
By oab on 5/3/2010 12:28:49 PM , Rating: 3
It's nVidia you insensitive clod!

RE: Interesting....
By oab on 5/3/2010 12:29:41 PM , Rating: 2
Yes, I know it is no longer nVidia, because they changed it. Just like NEXT/NeXt/etc.

RE: Interesting....
By deeznuts on 5/3/2010 7:14:35 PM , Rating: 2
You can always tell an nvidian posting by the way they write NVIDIA.

Are nvidians related to the ballchinians?

RE: Interesting....
By zmatt on 5/3/2010 8:04:40 PM , Rating: 3
I prefer the term Nvidiot.

RE: Interesting....
By MrPickins on 5/3/2010 1:37:30 PM , Rating: 2
I see a general trend toward more specialized chips in the near future; we're already seeing the trend toward a separate physics processor (which will be massively parallel once multiple-object interactions become the norm, but remains VERY process heavy, making GPUs not the best option for computation...).

It makes me wonder if we'll see more chips like the Cell in the future.

RE: Interesting....
By bbomb on 5/3/2010 4:36:06 PM , Rating: 2
LOL That one deserves a six lmao.

RE: Interesting....
By talonvor on 5/5/2010 9:39:20 PM , Rating: 2
You know, 10 years from now, when the first quantum CPUs hit the market, it won't matter anymore. A quantum CPU the size of a dime would outperform any mainframe on the planet. Problem solved!

Awfully strong words...
By Creig on 5/3/2010 11:00:24 AM , Rating: 5
from a company that's six months late to the party. You can't fly at all if your plane is still in the hangar.

RE: Awfully strong words...
By mmatis on 5/3/2010 11:09:28 AM , Rating: 1
Maybe not. But you can make a lot of money claiming that it hasn't crashed and killed anyone. After all, as David Hannum once said: "There's a sucker born every minute."

RE: Awfully strong words...
By Inkjammer on 5/3/2010 11:15:30 AM , Rating: 5
According to Nvidia, you just mount the wings on the hangar and mount bigger, better heatsinks on the engine. In short: fly the entire damn warehouse to do what the plane COULD do if it was built right the first time.

RE: Awfully strong words...
By ipay on 5/3/2010 1:37:42 PM , Rating: 2

RE: Awfully strong words...
By zmatt on 5/3/2010 2:28:16 PM , Rating: 1
+2 billion

Nvidia has no room to be criticizing anyone on architectures. Their GPUs are big and hot, and their programming environment is thrown together. Not to mention the serious differences on how GPUs are designed and work, they could never replace CPUs. CPUs may be less efficient in floating point calculations, but they are far more versatile, in other words ideal for the job of central processing unit, they can do everything. GPUs have a strict memory hierarchy and a very 1 dimensional (see what I did there) skill set. Some things run very very fast on a GPU, but most don't.

Not to mention that graphics companies are marketing driven rather than engineering driven; their claimed performance is never indicative of real-world numbers, and they can run you in circles to get real answers. Something the HPC world is not very receptive to.

RE: Awfully strong words...
By Inkjammer on 5/3/2010 3:15:47 PM , Rating: 2
Honestly, I think that Nvidia's 200 series was fantastic, and it held up against ATI well. Nvidia needed to refine the 200 series and eke out a smaller design and more performance. They should have done that and focused on getting Fermi RIGHT, not just getting it out the door. Most graphics card sales are at the $200 level. If Nvidia could have refined the low end with cooler, faster 200 series cards, things would have been better all around.

DX11 is not important (yet), and Nvidia should have pushed the performance level of the GTX 260 as mainstream (screw the 250). The price was right, the performance was right, and the GTX 260 could have/should have been the new 8800 GT while ATI did the same. Get it down to the $150 level, cooler, faster. Promote it as the baseline for gaming performance.

I see that as the failure of both ATI and Nvidia, personally. No standard performance expectations, and both companies keep pushing out crappy derivative cards that don't meet a set minimum performance standard, which in turn leads to the "omfg, gaming computers cost $2,500!" stereotype that hurts PC gaming so damn much through the perceptions of people trying to keep up. It's always a race for the performance crown... while the baseline suffers. And in this instance, they suffered losses at both the top and the bottom.

RE: Awfully strong words...
By zmatt on 5/3/2010 4:49:05 PM , Rating: 2
It would be impossible for them to shrink GT200 or Fermi. TSMC is having enough trouble as it is with current-gen fab processes, and I doubt they can start cutting weight off the die to slim it down. They went with a large monolithic GPU and now they have to live with it. I think ATI saw the writing on the wall a long time ago and began to move toward a better way to make GPUs; compared to Nvidia's, theirs are smaller and cooler for similar performance and lower prices. Not to say that ATI's chips are perfect, just better.

Last gen, Nvidia was hurt but not beaten; this time around they have been schooled.

RE: Awfully strong words...
By Inkjammer on 5/3/2010 3:17:22 PM , Rating: 3
And for the record, I'm not pro ATI. I love Nvidia, but they eff'd up so damn badly this round.

RE: Awfully strong words...
By CrimsonFrost on 5/6/2010 9:45:47 AM , Rating: 2
I logged in just to upvote you. I hold no loyalty to any company with regards to CPUs and GPUs, whichever one performs better is what I buy. (Currently Intel and ATi)

RE: Awfully strong words...
By invidious on 5/3/2010 1:10:30 PM , Rating: 2
Riding the DirectX wave is not much of a party. Software adoption usually lags behind by about a year anyway. NVIDIA isn't missing out on anything specifically due to DX11.

AMD and NVIDIA never release products at exactly the same time. So whenever one of them releases a new flagship they enjoy bragging rights and the super enthusiast market sales for a few months. The only difference this time around is that this is the first time AMD has been on top in several years so they are trying to make a big deal out of it.

RE: Awfully strong words...
By Gungel on 5/3/10, Rating: 0
RE: Awfully strong words...
By tviceman on 5/3/2010 2:47:05 PM , Rating: 2
Cough.. Larrabee... cough...

RE: Awfully strong words...
By Phoque on 5/3/2010 4:24:25 PM , Rating: 2
"six months late to the party" with the most powerful unavailable graphics card ever built.

RE: Awfully strong words...
By Phoque on 5/3/2010 4:27:00 PM , Rating: 2
oops: replace "graphics card" with "gpu"

RE: Awfully strong words...
By superdoz77 on 5/3/2010 4:28:33 PM , Rating: 2
I wonder how far behind ATI (AMD) nVidia will be come the next round.

Correct, but...
By Sunday Ironfoot on 5/3/2010 11:02:17 AM , Rating: 2
Yes, what he is saying is technically correct: we need massively parallel architectures such as nVidia's GPUs to get around the end of traditional CPU scaling. Even Intel recognises this, hence their R&D into Larrabee (which hasn't been abandoned, BTW).

The problem with massively parallel architectures is that the software has to be explicitly written to take advantage of them. While this is 'relatively' easy with games (3D graphics and visual output operations lend themselves to being divided up across separate threads), writing traditional software such as a word processor, a web browser, or a web application can be quite tricky, as software engineers have gotten used to thinking in terms of entire programs running on a single thread. This is what traditional CPUs tend to do extremely well: run a single thread as quickly and efficiently as possible.

What we'd probably need is a way to abstract parallelism away so that software devs code to a single thread, but behind the scenes it's working across multiple threads.

RE: Correct, but...
By Jaybus on 5/3/2010 2:39:39 PM , Rating: 2
Computer science has been working on extracting parallelism from sequential code for years and years without an answer. It isn't clear if it is even possible. However, that could be a good thing. Translated code is rarely better than native. Now that parallel hardware is common, engineers are becoming more and more comfortable "thinking in parallel". It only seems quite tricky when viewed with a sequential (and imperative) mindset. As multi-threading becomes the norm, it becomes less tricky, and functional programming languages like Haskell even begin to make sense.

RE: Correct, but...
By The0ne on 5/3/2010 3:02:07 PM , Rating: 2
Parallel programming from sequential code has been going on for over 15 years now. We were taught a semester of parallel programming, but it just never took off, so the class ended. The industry was not ready and did not want it at the time.

Having said that, parallel programming is not easy. It's definitely easier said than done. It's still new in the sense that many are not familiar with it, much less able to imagine how it is done. I really hope the interest in parallel programming and REAL multitasking will bring about new ideas and tools. We need them in this day and age; we seriously can't stay stuck with today's tools, IMO.

RE: Correct, but...
By The0ne on 5/3/2010 3:08:05 PM , Rating: 2
Ah, I remember my professor's name: Professor Kavianpor. I'm pretty sure it's spelled differently, as I'm going off the pronunciation :)

RE: Correct, but...
By Jaybus on 5/4/2010 2:24:37 PM , Rating: 3
An example in C:
for (i = 2; i < n; i++) {
    x[i] = x[i-1] * 2;
}
This cannot be auto-parallelized by any compiler or pre-processor that I'm aware of. It appears at first to be sequential by nature because x[i] is dependent on the previous iteration. However, the same problem can be solved with:
for (i = 2; i < n; i++) {
    x[i] = x[1] * (2 ** (i-1));
}
which can be auto-parallelized. The auto-parallelizer is not smart enough to find a different solution to the same problem so that it may be parallelized. Thus, it is either up to the programmer to write sequential code that can be auto-parallelized, or up to the programmer to write parallel code to begin with. Is one more difficult than the other? I believe it depends on training and mindset.

We can't wait around on some clever computer scientist to solve this sort of AI problem, just as physicists can't wait around for the next Einstein to reconcile quantum mechanics and general relativity. In the meanwhile, it makes more sense to struggle through the learning curve and make the best use of what we have, rather than continuing to train students solely in sequential methods in the hope that someone will soon devise a really smart auto-parallelization tool.

RE: Correct, but...
By eachus on 5/6/2010 8:33:31 PM , Rating: 2
for (i = 2; i < n; i++) {
    x[i] = x[1] * (2 ** (i-1));
}

Actually, that is worse than the original, even on a parallel machine, since you still have an implicit loop in the power of two. Using shifts, what you really want to evaluate is:

for (i = 2; i < n; i++) {
    x[i] = x[1] << (i-1);
}

Of course, you really should put an overflow check in there, since it will overflow pretty quickly if x[1] != 0.

A better (and eminently practical) example is to try to implement the Simplex algorithm for solving linear programming problems efficiently. Even if you have a very good BLAS, such as the GotoBLAS or a version of the ATLAS BLAS tuned to your system, a good deal of optimization in Simplex code concerns itself with representing the basis as a (square) matrix times a series of vectors.* Every so often you need to "regenerate" the basis by inverting a matrix, and how often you need to do that depends on how stiff the current basis is, so you want to keep the determinant of the basis around from every pivot since the last regeneration. (Or some growing function thereof.)

You cannot, unfortunately, combine multiple changes to the basis or do them in parallel. You can, and must, extract some of the parallelism from the problem for any serious-sized LP problem, but the best way to do that may be a property of the actual LP problem being solved. For example, transportation problems are a special case of LP problems that allow for faster solutions, and oil companies typically have huge models including both transportation and refinery operations. In that case the current solution is used as a starting point for the next iteration.

In other words, the major job of the programmer is not to produce some correct implementation of the Simplex algorithm, but to produce a valid implementation which is well suited to the actual hardware and problem domain. Writing the final code is probably the easiest part of the job. Or you can get "canned" programs, and play with the optimization settings. It is often easier to write the code yourself, since you need to know it almost by heart to do a decent job on the tuning.

* If you are not familiar with linear programming, it is the problem of finding a maximum value of the objective function over a -- usually large -- set of variables, subject to a set of linear constraints. It was discovered in the 1940s that the optimal solution to a linear programming problem will always have at most one non-zero variable for each constraint. Eliminating all the other (zero) variables, you now have a simple linear algebra problem to find the correct values for the "free" variables. The problem, of course, is first determining what that set is.

RE: Correct, but...
By gamerk2 on 5/3/2010 4:08:59 PM , Rating: 3
There are three main issues:

1: Threading
2: Multi-processing
3: OS limitations

Threading itself is simple; it's only when combined with the next two factors that you get code that doesn't run well in a parallel environment.

Multi-processing is much tougher, partly because of the underlying OS. In Windows, only one copy of a DLL exists for all the processes that run. [Every process inherits the same Windows DLLs, etc.] As such, since every process inherits very low-level system DLLs, you have code that at some point will no longer be able to run perfectly parallel, regardless of how it is coded. [Hence why I am a proponent of static linking.]

You also need to factor in the Windows scheduler, which tends to keep most processes on one core, simply to share some of those repeated low-level resources (which, in theory, would cut down execution time if the individual tasks were independent of each other).

Never mind that the worst thing you can do is start putting lots of threads on lots of cores; taking the GPU as an example, a GPU would be slower at general computing tasks, as each singular core is far slower than a standard CPU. It's only through parallelization that GPUs are efficient. [Hence why they do rasterization, which is independent for each individual pixel.]

Trying to parallelize serial code on an OS that was not designed for a multi-processor environment is a job doomed to failure. At some point, M$ is either going to have to re-work Windows with multi-processors in mind, or some other company will have to release a competing OS.

RE: Correct, but...
By Bcnguy on 5/4/2010 1:01:34 PM , Rating: 2
On the third point: as far as I know, MS has been working on it at least since 2008, when it started a collaborative project for parallel computing with the Barcelona Supercomputing Center (BSC-CNS).

RE: Correct, but...
By Targon on 5/3/2010 6:39:57 PM , Rating: 3
There will always be a need for some serial code as well as parallel code. The key is that NVIDIA doesn't have a CPU, so they keep trying to make people think a GPU can do all the work. Each has its strengths and weaknesses, but this idea that NVIDIA will somehow "save the industry" when they can't even get their own flagship out the door speaks volumes about their ability to understand their own limitations.

By NanoTube1 on 5/3/2010 11:16:50 AM , Rating: 1
The issue is not Moore's Law; the issue is the coming shift in personal computing, of which the iPhone, Android and iPad are the first examples. I am not saying the PC is dead, I'm just saying that the PC is morphing into mobile, and Intel's / AMD's processor architectures are not a viable solution for such devices.
nVidia is going to the mobile market (ARM-based Tegra 2 at this stage) because this is where they can become CPU/SoC providers and grow out of the video card industry.

By aftlizard on 5/3/2010 12:16:29 PM , Rating: 4
I don't see a shift where mobile computing will exceed PCs in importance for enterprise and production, and definitely not for gaming. The shift will be more about workflows where you can work on documents and slides on your mobile device and switch on the fly to your PC for completion. There will be more mobile devices for certain, and that sector will continue to grow, but I doubt it will surpass the importance of PCs.

I have many mobile devices and nothing, not even my laptop, can really replace the comfort of using my home PC with its large monitor and processing capability.

By Micronite on 5/3/2010 1:44:28 PM , Rating: 2
I definitely agree...

When you boil it all down, the major draw for mobile devices is the ability to be on the Web.
Maybe I'm wrong, but I don't see many people working on documents with their iPhone or Blackberry, but I see people all the time using their iPhones to browse the web.

On a side note, this makes Apple's stubbornness with Flash even more puzzling. Honestly, I might consider an iPhone if I were able to do anything I can do on the web at my desk.

By NanoTube1 on 5/3/2010 5:20:29 PM , Rating: 2
I think you are missing the point.

Tablets have the potential to replace PCs for the vast majority of people. The iPad and its relative simplicity, for example, are the advent of the mobile-PC-as-an-appliance, which is something no one could achieve before (for many reasons).

Most people use their PCs for web, Skype, office work and some simple gaming -- all practical on a tablet device. If the majority of these ordinary users switch to iPad-like devices, the PC will return to being what it was designed for in the first place: a workstation for relatively complex tasks.

I never believed in the death of the PC and I still don't, but what we are witnessing here is the beginning of a major shift - and nVidia sees it as their opportunity to become a leader in the CPU/GPU market.

Hey Mick
By Phynaz on 5/3/2010 10:50:31 AM , Rating: 2
Who's Goord Moore?

RE: Hey Mick
By AssBall on 5/3/2010 11:14:30 AM , Rating: 2
It's a Mick story, just expect typos.

RE: Hey Mick
By The0ne on 5/3/2010 11:24:17 AM , Rating: 2
hahaha and expect him to ignore them too.

RE: Hey Mick
By Sazar on 5/3/2010 2:38:07 PM , Rating: 2
Hey, he's a self-proclaimed journalist, not a blogger. Journalists don't do spell-checking, editors do :)

What a surprise...
By Motoman on 5/3/2010 10:58:35 AM , Rating: 2
...Nvidia comes out every year or so with some stupefying comment about how the CPU is irrelevant etc. etc.

Why are we still paying attention to them?

RE: What a surprise...
By gralex on 5/3/2010 11:30:45 AM , Rating: 2
RE: What a surprise...
By Motoman on 5/3/2010 12:23:49 PM , Rating: 2
Yeah, I was in the top 1% of all SETI users at one time. I had many computers that I just let run 24/7 pounding out work units. Until I realized I was spending about $100 a month on electricity for the stupid things... At any rate, I hardly think enough people care about distributed computing apps such as SETI to warrant putting up with Nvidia's constant BS. Their mouthpieces are just retarded.

RE: What a surprise...
By The0ne on 5/3/2010 2:56:38 PM , Rating: 2
They didn't use to be, but for at least 3 years now they have been. I've visited their office in LA and it was not an enjoyable sight to see. I think it's their executive management team that needs a make-over. They are not making the right decisions and are acting more like money-chasing financial types.

Rename it!
By akse on 5/3/2010 1:01:06 PM , Rating: 3
How does nVidia plan to double computing power in 18 months by just renaming their old chip? :)

RE: Rename it!
By Goty on 5/3/2010 1:26:03 PM , Rating: 4
That's why they think the law is dead: they never thought they'd have to put actual work into making marketable products!

By Anoxanmore on 5/3/2010 10:41:31 AM , Rating: 1
looks at her 9800GTX+

I want a DX11 card.. and you have FAILED ME NVIDIA!

RE: Nvidia
By Regected on 5/3/10, Rating: -1
RE: Nvidia
By mcnabney on 5/3/2010 11:56:01 AM , Rating: 2
I am sure the game publishing industry is in big trouble now that nobody is going to buy any new games.

RE: Nvidia
By Kurz on 5/3/2010 11:44:15 AM , Rating: 4
ATI's 5870 ain't bad ;)

Not Dead
By tech329 on 5/3/2010 1:00:45 PM , Rating: 2
Moore's Law is just in transformation. Parallel computing is a likely advance. But molecular computing is on the horizon along with other advancements in materials. NVIDIA acts like Intel, IBM, MS and others are oblivious to the ways computing might evolve. This is a ridiculous assertion on NVIDIA's part, and they look ridiculous in making it.

RE: Not Dead
By Jaybus on 5/3/2010 2:10:18 PM , Rating: 2
There is a problem with parallel cores as well. Already, 40% of the total power is used for transmission of data between cores and off-chip I/O. Each additional core increases that ratio, so there is a point of diminishing returns with increasing core counts. Shrinking the die somewhat alleviates the problem, but then we are back to the original problem of minimal die size. We are not there yet, but at some point traces will be too close together to behave deterministically, due to quantum mechanical effects.

Molecular computing, quantum computing, electron spin states, etc. are fascinating, but an intermediate approach is more likely. The pieces are nearly in place for the use of silicon photonic circuits for intra-chip and inter-chip data transmission. I expect to see hybrid optronic/electronic chips with on-chip optical data transmission long before the more exotic stuff. That should keep Moore's law on track for a bit longer.

Bigger problem
By nafhan on 5/3/2010 11:11:57 AM , Rating: 3
I think a bigger problem, and it applies to Nvidia's GPUs as well, is that you can get more mileage out of an older computer these days. A 5-year-old computer today is more capable of running up-to-date software than a 5-year-old computer was 5 years ago. I think Nvidia needs to stake their future on products like Tegra, not the desktop GPU market. Cheap, small, low-power devices that do everything most people need are the future.

By Visual on 5/3/2010 11:38:19 AM , Rating: 2
Moore's Law is not even about doubling transistor count. It is about halving the cost -- or doubling transistor count per cost, if you want to put it that way -- but not necessarily per chip or per area or anything else to do with ever-shrinking transistors.
And it will continue to hold true. Only, monopolies may make it so that it applies to manufacturing costs but is not reflected in end-consumer prices for a while, until competition appears and breaks them.

idiot vs idiot
By seraphim1982 on 5/3/2010 11:54:56 AM , Rating: 2
Honestly, Nvidia does have a point, although the solution isn't adding a GPU to everything.

The solution is designing a better chip from the ground up that doesn't have the deficiencies of the others, e.g. a quantum chip... although it'll be years before we see a working consumer model.

This coming from NVIDIA?
By carniver on 5/3/2010 1:19:34 PM , Rating: 2
Conventional serial CPUs are simply too heavy (consume too much energy per instruction) to fly on parallel programs and to continue historic scaling of performance.

CPU and GPU designs are rapidly converging, with CPUs increasing their core counts and adding more SIMD instructions while GPUs gain more complex pipelines to handle more advanced shader programs. Fermi is now so complex that it can partially do out-of-order scheduling like a CPU, and is infamous for its power consumption.

But neither can replace the other; it's obvious. Both have their own sets of advantages and shortcomings. CPUs don't have as much texture bandwidth or floating-point power as a GPU for rasterization, and GPUs don't have the flexibility a CPU has for running code with lots of simple branches, i.e. the most general programs. NVIDIA is making a moot point here. What is happening is that Intel is trying to kill NVIDIA, and vice versa!

By Jaazu on 5/3/2010 3:21:07 PM , Rating: 2
can have a processor for sound, and one for graphics and one for I/O and another for general processing... and we could call it...hmm... Amiga sounds friendly.


Moores Theory
By knowom on 5/3/2010 6:05:35 PM , Rating: 2
It's more theory than law. I think Nvidia is correct that it will slowly grind to a halt unless something new and very innovative happens.

Also, they make a good point: GPUs seem to be taking over more of the CPU's traditional roles, and CPUs are doing the reverse somewhat, but GPUs are doing it at a steadier pace with GPGPU and will only continue to do so.

By driver01z on 5/5/2010 1:46:52 PM , Rating: 2
My basic thought is: development will continue in the areas that are profitable. And where does profit lie in the future? Is it in having a faster CPU/GPU for a desktop at reasonable cost (as in the past)? Is it in having processors for mobile devices? Is it in having swappable processors on a future generic PC, where you can add more processors as simply as adding an external hard drive? Maybe it's only in specific industries and gaming enthusiasts -- who else wants or needs a powerful computer except for games or actual computing-intensive work?

One of these companies could make a 100-core processor or begin to break the 4GHz barrier -- but how many sales would this generate?

For me, my personal interests lie primarily in gaming. If a gaming experience is available that is critically acclaimed and offers new levels of immersion and entertainment -- if it's something I can afford and want to support -- I'll support it, regardless of what format it takes, be it some new mobile tech or a new console. I won't care if it uses Windows, or how many cores it has, or if it's backwards compatible with my previous software, etc. I think this is what these companies should focus on: what experiences do consumers want? Most consumers don't care about the specific tech behind it as long as it's entertaining. Therefore, if there are roadblocks in technology, like the ability to increase processor speed or write threaded programs, I don't think it should matter too much -- just find a different way to be entertaining. Of course it may very well be that by doing R&D on bio/nano-processors you could provide a chip that can do photo-realistic virtual reality; if that route is possible, then certainly go for it -- go for the best experience that's actually possible, whatever that may be. I'm thinking now these companies are in a state of determining what is possible vs. what is not worth pursuing.

Chip Guy & Compiler Guys
By SkierInAvon on 5/6/2010 10:12:56 AM , Rating: 2
I agree with some of the earlier comments. This (future performance) is not just about the HW architecture.

If you intend to see real future performance gains in CPUs, you'd better have the software compiler guys in the room at the time they start designing.

Fact is, the code still has to be compiled to run, and it had best be optimized for the chip architecture that the HW/SW compiler guys say is best.

Performance Prophesy
By lotharamious on 5/3/10, Rating: 0
"Can anyone tell me what MobileMe is supposed to do?... So why the f*** doesn't it do that?" -- Steve Jobs


Copyright 2014 DailyTech LLC.