

Professor Palem works with his graduate researchers to test his new chip design. His new probabilistic chip offers seven times the computing speed, at 1/30th of the power, with only minor losses in accuracy.  (Source: ScienceDaily.com)
New chips consume less power, and are many times faster than current offerings, while retaining acceptable accuracy

Suppose you want to use your CPU to calculate your savings, which total $13,000.81 spread across several accounts.  You could precisely add every digit of every account and get your exact balance, but at a significant computing cost.  Alternatively, you could use weighted probabilistic calculations that are faster, but less accurate.  While obtaining $50,000.81 would be a very undesirable result, obtaining $13,000.57 would be "close enough" in most cases.  Applying greater weight to the tens digit, the hundreds digit, and so on yields a good approximation -- such is the nature of probabilistic hardware computing.
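
A rough sketch of that tradeoff in C# (purely illustrative -- the account values, the function name ApproximateAdd, and the error model are invented here, not taken from the chip's actual arithmetic):

using System;

// Toy model only: emulate an adder whose low-value digits are unreliable
// while the high-value digits stay exact.
decimal ApproximateAdd(decimal a, decimal b, int unreliableCents, Random random)
{
    decimal exact = a + b;
    // Perturb only the low-value part (here, at most a few tens of cents).
    decimal noise = (decimal)(random.NextDouble() * 2 - 1) * unreliableCents / 100m;
    return Math.Round(exact + noise, 2);
}

var rng = new Random(1);
decimal[] accounts = { 4200.13m, 7800.27m, 1000.41m };   // hypothetical balances
decimal approx = 0m;
foreach (var acct in accounts)
    approx = ApproximateAdd(approx, acct, 30, rng);       // up to ~30 cents of slop per add

Console.WriteLine($"Exact:  ${accounts[0] + accounts[1] + accounts[2]:F2}");   // $13000.81
Console.WriteLine($"Approx: ${approx:F2}");                                    // off by well under a dollar

The point is not the particular noise model but the shape of the bargain: the dollars always come out right, and only the cents wander.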

With the limits of traditional computing being pushed to the brink, Moore's Law may soon expire.  CPU manufacturers are preparing to launch 32 nm circuits late this year or early next year, and 22 nm is also in the works.  Past about 10 nm, however, traditional light-etching techniques begin to fail.  Some say this calls for ditching the traditional CPU entirely and adopting a wholly new design, such as optical or quantum computing.  However, those options would be expensive and risky.  The alternative is to repurpose the wheel -- make silicon computers that do the job better with the same number of transistors.

The idea of probabilistic computing has floated around for a while and was born of just such a mindset.  It has largely been pioneered and developed by Rice University Professor Krishna Palem.  On Sunday, Professor Palem announced the results from his first chip, and they're nothing short of groundbreaking.

His probabilistic CPU chip ran seven times as fast as traditional circuits and consumed only 1/30th of the energy.  The results match or even exceed those predicted by his mathematical models and computer simulations.  He states, "The results were far greater than we expected.  At first, I almost couldn’t believe them.  I spent several sleepless nights verifying the results."

Professor Palem's chips could revolutionize fields such as computer-generated graphics, content streaming, and other applications that would accept a tradeoff of precision for increases in computing speed and decreases in power.  Many experts in these fields who had previously been reticent about the idea of probabilistic computing have been convinced by Professor Palem's results.

Al Barr, a computer scientist at the California Institute of Technology, is among the new believers.  He acknowledges former doubts, stating, "Initially there was definitely a lot of skepticism."

However, Barr and his colleagues are now planning to test new graphics software using Professor Palem's design or other upcoming probabilistic chips.  Such designs might allow next generation cell phones and laptops to run for days more on a charge, while running significantly faster.  While some artifacting might occur, it would only be a few missed pixels -- the overall image would remain.  The human brain's imaging abilities can fill in most of this missing information, says Professor Palem.  As he puts it, "In effect, we are putting a little more burden on the CPU in our heads and a little less burden on the CPU in our pockets."

Intel and other CPU makers have expressed interest in probabilistic designs, for lack of a clear die shrink solution in the long term.  Professor Palem's results have them very excited.  Shekhar Borkar, director of Intel’s Microprocessor Technology Lab lauds the work, stating, "This logic will prove extremely important, because basic physics dictates that future transistor-based logic will need probabilistic methods."



Comments



Intel beat them by 15 years.
By slashbinslashbash on 2/9/2009 8:51:30 AM , Rating: 5
Pentium FDIV bug FTW!




RE: Intel beat them by 15 years.
By Master Kenobi (blog) on 2/9/2009 9:04:58 AM , Rating: 2
Yeah, funny that. Regardless, while this seems OK on paper, the result of "close enough" calculations would be disastrous for most purposes. Just ask NASA, the banks, or the military what happens if they're off by .00001 on a number.


RE: Intel beat them by 15 years.
By freeagle on 2/9/2009 9:08:42 AM , Rating: 3
That's why they don't state it's a general purpose CPU


RE: Intel beat them by 15 years.
By Cogman on 2/9/2009 10:44:37 AM , Rating: 1
So... What good is it then? "Here you go, this CPU is generally close enough to the right answer in its calculations. Use it for something!"

So now, what can you use a device for that you KNOW will give wrong answers? Games? Nope. Scientific calculations? HA HA, no. Business computers? Yeah right. Running an OS? Good one. "Store bit 1 in address... ummm, is 0x10101 close enough?"

Computers work so well because they are accurate. Take that away from them and all of a sudden you have a worthless machine that can't do squat.


RE: Intel beat them by 15 years.
By SignoR on 2/9/2009 11:06:18 AM , Rating: 5
quote:
So... What good is it then? "Here you go, this CPU is generally close enough to the right answer in its calculations. Use it for something!"


Think cell phones. Giving the processor a lower voltage than is needed for some of the less significant bits can have great power-saving properties, while only marginally affecting sound quality (which is highly compressed to begin with).
Can you hear the difference between 0x11011001 and 0x11011010 on a cheap cell phone speaker? Probably not.
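
A back-of-the-envelope sketch of why a least-significant-bit error in an audio sample is inaudible (the sample value is made up, and real phone audio paths involve far more than a single byte):

using System;

// Flip the least significant bit of an 8-bit PCM sample and see how large
// the error is relative to full scale.
byte sample = 0b1101_1001;                   // 217
byte flipped = (byte)(sample ^ 0b0000_0001); // 216: only the LSB differs

double relativeError = Math.Abs(sample - flipped) / 255.0;
Console.WriteLine($"{sample} vs {flipped}: {relativeError:P2} of full scale");
// About 0.39% of full scale, roughly -48 dB -- lost in a cheap phone speaker.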


By Alexstarfire on 2/9/2009 11:43:50 AM , Rating: 2
I wouldn't know since I have no idea what is actually being computed/played at any given moment.


RE: Intel beat them by 15 years.
By Cogman on 2/9/09, Rating: 0
By masher2 (blog) on 2/9/2009 2:06:48 PM , Rating: 2
A cell phone already has vast blocks of transistors -- if not entire chips -- devoted to signal processing. Revamping these to use probabilistic computing techniques would not require any sort of "supervisor" chip to route calculations appropriately. It would simply reduce energy usage and, by providing more computing power, increase overall sound quality.


RE: Intel beat them by 15 years.
By Lerianis on 2/9/2009 3:20:59 PM , Rating: 2
Now THAT makes more sense: two chips - one probabilistic, one EXACT, for calculations that need exact answers.

That sounds like a better way to do things than just having one or the other in a machine.


RE: Intel beat them by 15 years.
By Myg on 2/9/2009 8:14:15 PM , Rating: 5
Great, the feminist agenda has done it again...

First Hyperthreading/multicores (multi-tasking), now can't-make-up-my-mind probabilistic chips, when will it end?


By kontorotsui on 2/10/2009 3:48:35 AM , Rating: 1
Give this a 6!


By segerstein on 2/9/2009 2:07:34 PM , Rating: 2
Yes, but usually cell phones and other multimedia devices use lossy codecs. Corrupt a few bits in a ZIP file or an MPEG2/4 file and you get the snowball rolling.

Even if Moore's Law stops somewhere, there are still two very good solutions:
- thin clients - mobile devices and laptops stay the same size, the really hard work is done in clouds and sent back
- multilayer chip designs with thousands of layers - to have a CPU cube, not a chip (slice). Of course, heat dissipation would be a problem, but with low power designs that could be mitigated.


RE: Intel beat them by 15 years.
By taber on 2/10/2009 12:01:05 AM , Rating: 2
quote:
Can you hear the difference between 0x11011001 and 0x11011010 on a cheap cell phone speaker?


Depends if that's little endian or big endian.


RE: Intel beat them by 15 years.
By segerstein on 2/11/2009 8:18:05 AM , Rating: 2
Good joke, but only multibyte "words" can be big or little endian ;)


By Shining Arcanine on 2/10/2009 1:46:25 AM , Rating: 2
Have this mutate the right number in the right way and you could end up with an infinite loop, an OS crash, etcetera.

The cellphone example doesn't prevent this probability. :P


RE: Intel beat them by 15 years.
By hameed on 2/11/2009 7:57:14 AM , Rating: 2
quote:
Can you hear the difference between 0x11011001 and 0x11011010 on a cheap cell phone speaker? probably not.

Of course!

The first sounds like Zero EX One One Zero One One Zero Zero One
While the second is like Zero EX One One Zero One One Zero One Zero


RE: Intel beat them by 15 years.
By MrTeal on 2/9/2009 11:16:48 AM , Rating: 5
quote:
So now, what can you use a device for that you KNOW will give wrong answers? games? nope,


Actually, games (and graphics in general) would be a great application of this. If in every frame even one or two pixels had slightly the wrong colour, or one was black, it wouldn't be a huge deal, especially if it allowed the GPU to be faster, cost less, and use 10W instead of 300W.

Even many scientific calculations would benefit from a huge increase in speed for the tradeoff of some precision. Stop being so alarmist and narrow-minded.


RE: Intel beat them by 15 years.
By afkrotch on 2/9/2009 12:10:04 PM , Rating: 1
Too bad game developers and gpu manufacturers are looking at offloading AI/Physics onto the GPU. Just good enough isn't going to cut it.

I have played plenty of games that required shooting at a single pixel or two. Now I might have these random dots on my screen. Fck that shiz.

In a game, anything different sticks out. A few wrong pixels would stick out and annoy the hell out of me when they're not an actual target.

quote:
Even many scientific calculations would benefit from a huge increase in speed for the tradeoff of some precision. Stop being so alarmist and narrow-minded.


Yes, having scientific calculations come out flawed, but faster, is really great for the scientific community.

That new drug we just made to cure AIDS now actually causes cancer and makes your eyes fall out. Just a minor miscalculation from these new probabilistic chips. No worries, it's "just good enough."


RE: Intel beat them by 15 years.
By callmeroy on 2/9/2009 12:35:33 PM , Rating: 2
Man, you are horrible at bullshitting.

I've been playing games since I was 8, I'm in my 30's now and still play -- FPS, MMO's, RTS...name it.

NEVER once have I had either the incredible vision or the hand-eye coordination to aim at a SINGLE PIXEL on the screen. Unless you play your games at an insanely low resolution, you have some damn good peepers to notice a single PIXEL out of thousands.

;)


RE: Intel beat them by 15 years.
By masher2 (blog) on 2/9/2009 2:13:05 PM , Rating: 3
quote:
NEVER once had I ever had either the incredible vision or the hand eye coordination to aim at a SINGLE PIXEL on the screen
Actually, on some old C64 or AppleII games it wasn't that hard...of course the resolution was a paltry 160x200 or even less.

Now that games tend to have 100X the pixels, aiming at a single one is essentially impossible, I agree.


RE: Intel beat them by 15 years.
By SlyNine on 2/9/2009 2:55:44 PM , Rating: 2
Well, he's still wrong. Play America's Army, IC, and try to spot a sniper on the SE hills; if you don't, you're dead. They are the size of 1 to 4 pixels. Even at 1920x1200, aliasing is still an issue.

But I don't think this one odd pixel would stay around for any amount of time; the next frame would probably correct it anyway.

Besides, isn't one of the reasons GPUs have not been used in the past that they lack precision and have computing errors? Isn't this just the same thing?


RE: Intel beat them by 15 years.
By murphyslabrat on 2/9/2009 2:57:08 PM , Rating: 5
While shooting at a single pixel (or a cluster of four or five of them) is not outlandish, they will be replaced at a rate of (greater than) 60 times per second. Meaning, that unless you have incredible reflexes, that one misplaced pixel would never be enough to kill you.

Furthermore, if having an occasional sporadic pixel would enable far more pixels on the screen, with a faster rate of replacement (frames per second), that sounds like a win to me!


RE: Intel beat them by 15 years.
By mindless1 on 2/9/2009 6:08:55 PM , Rating: 1
Keep in mind the typical FPS limit with good hardware is the LCD.

What if the one bit that's off isn't a pixel? What if it's the location coordinate of your target, so the game thinks it's on the left side of the screen when it's actually on the right because the GPU didn't position it correctly?


RE: Intel beat them by 15 years.
By freeagle on 2/9/2009 8:08:18 PM , Rating: 2
The calculations produce slightly inaccurate results; they don't make wild guesses leading to absolutely unreasonable numbers, which your left screen coordinate ending up on the right would be. And you could always pair the probabilistic chip with a normal one doing the exact calculations you need. Again, they did not state the chip is general purpose, but it has huge advantages if you know how and when to use it.


RE: Intel beat them by 15 years.
By mindless1 on 2/9/2009 6:06:25 PM , Rating: 3
I've played games where your crosshair was actually a single pixel. What if you suddenly have no aiming system?


RE: Intel beat them by 15 years.
By masher2 (blog) on 2/9/2009 6:19:57 PM , Rating: 2
I think you mean a single pixel wide, not a single pixel. In any case, you're misinterpreting exactly what error would be introduced. For a pixel to "disappear", its value would not only have to be incorrect, but precisely match the background value...and continue to match it, frame after frame. In most cases, a pixel would just be slightly the wrong color, not off entirely.

Furthermore, probabilistic algorithms would be used for lighting, terrain, shadows etc, but not fixed sprite overlays like aiming crosshairs. I can't see those being affected at all.


RE: Intel beat them by 15 years.
By Meadows on 2/10/2009 3:41:12 AM , Rating: 1
No, a single pixel. I've only had one game that did this, but there are probably more - the most popular example is Max Payne. I'm not certain, but I believe they made it 1 magnitude bigger for Max Payne 2, still small though.

Newer games tend to suffocate you with extraordinary crosshairs that barely let you see through. No, opacity settings are not always the solution.


RE: Intel beat them by 15 years.
By mindless1 on 2/12/2009 12:09:59 AM , Rating: 2
Not necessarily. You and others keep thinking only about the pixel's value, not its location.


By Chernobyl68 on 2/9/2009 7:18:49 PM , Rating: 2
depends on how far away you're sniping someone...and how bad you want that "head shot!"


By Lugaidster on 2/9/2009 4:09:25 PM , Rating: 2
Games already use approximations in many of their calculations. Besides, as the OP already noted, the framerate would be so high that a bad pixel in one frame wouldn't be there the next, so you wouldn't notice. The human brain has one of the best noise filters known to man, so it will most likely filter out most of the noise in the frames. You do remember how movies used to look in the theaters, full of garbage, yet nobody complained?

Anyway, I hope to see this in GPUs; I just hope CPUs don't follow suit. We will still need exact calculations for many things.


By otispunkmeyer on 2/10/2009 4:08:27 AM , Rating: 2
Yes, final calcs need to be as accurate as possible... but initial investigations don't need to be. So long as the error doesn't compound or make CFD/FEA solutions diverge, then "roughly right" will be good enough for preliminary, interim work.

Of course the accuracy needs to step up at the end, but think of all the time you can save in the earlier stages by getting numbers that are in a very close ballpark.

Besides, things like CFD are only "roughly correct" anyway because of all the assumptions you have to make in order to keep resource usage and calculation time down. The solutions are never spot-on accurate; they are only a prediction. What does it matter if that prediction is out a little? It's still in the right area.


RE: Intel beat them by 15 years.
By geddarkstorm on 2/9/2009 2:28:27 PM , Rating: 2
I don't see how this could fly in the science community, as we try our best to lower and control every possible source of error - this CPU would inherently bring an unknown quantity of error into all computed calculations, which is absolutely unacceptable for science applications. It is always better to do something slow and accurate than fast and imprecise in the science world.

Unless they could develop software that could easily fix errors in the output stream. No idea how that could work, or if it's even possible, but I wouldn't be so surprised if someone developed something.

Still, I have to fundamentally disagree with you - this is not for science as it is now, except science experimenting on probabilistic CPUs.


RE: Intel beat them by 15 years.
By masher2 (blog) on 2/9/2009 2:36:51 PM , Rating: 5
quote:
this CPU would inherently bring an unknown quantity of error into all computed calculations
Actually, it would bring in a known source of error.

quote:
It is always better to do something slow and accurate, then fast and imprecise in the science world
Nothing could be further from the truth. If you're controlling the flight of a missile -- or even a personal automobile -- getting a "close enough" course correction in real time is far more important than a fully precise value an hour later.

Or in biological modelling, finding a drug that eliminates 99.9% of a virus today is better than, 50 years later, finding one that eliminates 100%. Or economic modelling, where finding almost the appropriate interest rate today is far better than the perfect answer, five years later.

In any case, you're missing the most important point. Any type of complex modelling will have vast amounts of error involved already. A chip that is much more powerful can allow you to remove many simplifying assumptions, which means more accurate results, despite small amounts of imprecision within the calculation process.


RE: Intel beat them by 15 years.
By Jeff7181 on 2/9/2009 2:49:52 PM , Rating: 3
This brings up another question. Are the "errors" predictable? For example, if I run the same calculation on 100 of the same model processors 100 times, can I expect the same results every time from every processor? If so, this will create far less havoc for scientific use since the error would be a constant.


RE: Intel beat them by 15 years.
By freeagle on 2/9/2009 2:59:29 PM , Rating: 2
Such results would actually be useless. You can create a predictable error with current CPUs. If their claim is true and the processor is truly probabilistic, then each time you run the calculation you get a slightly, unpredictably different output. Which is sometimes not unwanted, maybe even desirable.


RE: Intel beat them by 15 years.
By geddarkstorm on 2/9/2009 3:14:15 PM , Rating: 3
It could be very useful for developing encryption keys, for instance ;). It could be a great way to get a random number seed for a standard CPU to use in functions, which are usually bad with probabilities and randomness. Therefore, pairing this together with normal CPUs opens a wide door to all sorts of useful and much faster calculations.

Solely by itself, though, I would argue that since the error will always increase the moment a result is reused without compensation (such as from a normal CPU), and a CPU usually does many calculations on the same input stream before output, by the end of a complex function the total error could be so wild that it would be impossible to use the result with confidence.

You can do this yourself. Do a complex series of calculations and randomly change a few minor digits along the way, say the 0.001th digit, as if in a system of three significant figures. You'll see once you throw in multiplication and division, after only a few iterations you'll get quite different results from simply rounding (and at what significant digit rounding is done can give different results than carrying out to a higher degree of precision before rounding, again showing the point of error building on error). Some systems will handle this better than others; and if this isn't repeatable, because you are randomly changing that digit as you go, then multiple calculations will have to be done and statistically analyzed every time. Now that would slow things down! But, again, this will depend completely on the application, and sometimes this random variance is a very good thing in certain systems, like encryption.
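
A rough sketch of the experiment described above in C# (the chain of operations, the size of the disturbance, and the loop count are all made up; the only point is that the drift differs from run to run):

using System;

var rng = new Random();   // deliberately unseeded so every run drifts differently
double exact = 1.234, noisy = 1.234;

for (int i = 0; i < 20; i++)
{
    double factor = 1.0 + (i % 5) * 0.37;
    exact = exact * factor / (factor + 0.5);
    noisy = noisy * factor / (factor + 0.5);
    noisy += (rng.NextDouble() - 0.5) * 0.001;   // jiggle roughly the 0.001 digit at every step
}

Console.WriteLine($"exact = {exact:F6}");
Console.WriteLine($"noisy = {noisy:F6}");
Console.WriteLine($"drift = {Math.Abs(exact - noisy):F6}");

Run it a few times and the drift comes out different every time, which is exactly the "multiple calculations will have to be done and statistically analyzed" problem raised above.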


RE: Intel beat them by 15 years.
By clovell on 2/9/2009 4:04:48 PM , Rating: 2
Good point there - error would be compounded and propagate quickly through complex calculations.


RE: Intel beat them by 15 years.
By ekv on 2/10/2009 2:02:11 AM , Rating: 2
Not so good for developing encryption keys. You want an algorithm that diffuses (encrypts) information and then can recover (decrypt) that exact information.

Kind of like the situation when you're compressing a text file. It must be exact. If you're ripping a sound file AND can tolerate some fuzziness in the playback, then you select something like MP3. [Which is not to say MP3 is probabilistic, just that it doesn't give an exact reproduction, like WAV does. I think I said that correctly].


RE: Intel beat them by 15 years.
By geddarkstorm on 2/9/2009 2:54:10 PM , Rating: 2
Well, how can you know the amount of error the CPU will bring? Will it not be different with every calculation, some having more error than others, since it's probabilistic and sometimes it'll strike on the mark and other times be quite afield? That means you cannot know exactly which calculations are affected in what manner, correct? You could do a voluminous amount of data sets paired with a more precise processor to get the average variance range, and then factor that in to all calculations. As long as that variance range was not too wide, it might be acceptable.

I'm not quite sure your examples are all that correct. It will depend on the degree of variance and how well it can be corrected for. You forget that the error of the CPU will compound on itself. If it is off by a fraction of a degree on one calculation, then takes that result and calculates from it again, it'll have an increased error on the next calculation based on the error from the first result being reused. This could easily spin wildly out of control without some necessary way to compensate. Perhaps there are ways; this is beyond my field.

Finally, as for biological modeling, there is a lot of error already in there, which is why it usually doesn't work; even the /slightest/ error will totally destroy the system, and all assumptions must be precisely and accurately accounted for - so no, this would not increase accuracy without some software or other method to correct for random error. Actual empirical experiments always win out; our best in silico methods are still very error prone, too much so to be used with any reliability. Add in a processor that is increasing variance with every calculation, and it doesn't matter how much faster it is. We usually generate multiple structure sets when using empirical data to solve a biological structure - if we cannot depend on the results of each data set, which already has inherent variance (hence why so many have to be made and compared), then we cannot publish our data with any reliability - the statistics will be very skewed.

On the other hand, with the proper corrections, if they can be made, then this processor would be very useful, especially if it can naturally weight some data and probabilities over others. We will have to see as this thing is developed, and whether they can keep the variance low enough while keeping the speed and energy savings around the same.

But as it is right now, this faster CPU would never allow us to quickly come up with a drug that cured 99% of viruses - we'd develop a drug that did nothing because the binding and structure dynamic calculations would be all wrong. Nor does drug development work like that in the first place. Maybe if we were lucky a calculated drug would do something - but as it is, drug screening goes through up to 20,000 already calculated and synthesized drugs per day to find a set that works, which then get analyzed further biochemically. No CPU can replace that, especially if we cannot be completely confident on its data.

Error expounds upon error, it's hard enough to control for that already in the literature; it's one reason science is so slow and things must be repeated in new ways.


RE: Intel beat them by 15 years.
By masher2 (blog) on 2/9/2009 3:19:39 PM , Rating: 3
quote:
Well, how can you know the amount of error the CPU will bring? Will it not be different with every calculation,
Most likely. But the maximum error will still be quantified, which is the important thing. You still know the height of the error bars, even if your actual error is less on any given calculation.

quote:
You forget that the error of the CPU will compound on itself. If it is off by a fraction of a degree on one calculation, takes that result and calculates from it again, it'll have an increased error on the next calculation
This is no different than the floating point calculations we perform today. They never yield a 100% accurate result, and those errors ultimately do compound and grow as you concatenate calculations.

quote:
Add in a processor that is increasing variance with every calculation, and it doesn't matter how much faster it is
No, you're still missing the point. Biological simulations (among most other complex ones) contain huge numbers of simplifying assumptions. Each one introduces error. A chip that is far faster can allow you to remove many of those simplifications, which means more precise results, even if the chip introduces a small degree of error itself.
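
On the compounding point a couple of paragraphs up, here is a tiny illustration of how today's ordinary (non-probabilistic) floating point already drifts when you concatenate operations (the choice of repeated square roots is arbitrary, just an easy way to make the rounding visible):

using System;

// Take the square root 30 times, then square 30 times. Algebraically the
// value should come back to exactly 2; in 32-bit floats it does not.
float x = 2.0f;
for (int i = 0; i < 30; i++) x = (float)Math.Sqrt(x);
for (int i = 0; i < 30; i++) x = x * x;

Console.WriteLine(x);   // prints 1, not 2: the per-step rounding errors compounded

The question in this thread is only whether a chip that adds a little more, quantified, error changes that picture in a way that matters.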


RE: Intel beat them by 15 years.
By geddarkstorm on 2/9/2009 3:54:52 PM , Rating: 1
All of what you are saying are assumptions as well. The CPU here will introduce random error, error that is different every time and cannot be predicted, since it's probabilistic by its very nature. In short, we cannot know the outcome for sure unless we put this thing to the test. And you cannot, absolutely cannot, say you'll know the maximum error for certain, only a range it could be in (our error bars would have error bars), because the error is random and some runs may have very low error at the end while a repeat of the same run will have high error. But how do you know which result was the low-error one and which was the high-error one? And because it's random, it's not like it'll be weighted towards low-error results over high-error results - though perhaps they can design it to be that way, in which case statistics can be used to fix this issue (which will require more resources, however).

If you think getting rid of simplifying assumptions will make things more precise, I completely disagree. Those complex calculations will, if each successive calculation is prone to more and more error as they are compounded by a probabilistic processor, probably yield far less precise results in the end, because of the increased number of calculations where error will be introduced because of the very nature of the CPU. Error will enter unpredictably; that is completely unacceptable. We have a general idea how simplifying assumptions affect errors, and we can mitigate that with statistics and multiple data sets. How can we even begin to do that with this thing, where the more calculations it does the larger its error will become?

I'm afraid I will just have to disagree with you completely on this matter, unless we put this processor to the test and see which of our assumptions about its nature are correct. Furthermore, perhaps there are ways, hardware and/or software, to mitigate the compounding error that will occur in every single calculation this processor does (if it's truly probabilistic). If such ways exist and are employed to perfect the processor, then maybe it'll be great. Otherwise, one could perhaps use very simple hardware to error-check it. There are probably ways; but by itself, this thing could never be suitable for science in my opinion, which is completely just an opinion, until we empirically test this thing.


By masher2 (blog) on 2/9/2009 5:51:41 PM , Rating: 3
quote:
I'm afraid I will just have to disagree with you completely on this matter
You're welcome to disagree, but what you don't understand is that probabilistic algorithms are already being widely used in scientific calculation. They're simply being done in software, not directly in hardware. So claiming that probabilistic calculations can't be useful is rather silly.


RE: Intel beat them by 15 years.
By ranutso on 2/10/2009 11:31:05 AM , Rating: 2
quote:
You forget that the error of the CPU will compound on itself. If it is off by a fraction of a degree on one calculation, takes that result and calculates from it again, it'll have an increased error on the next calculation based on the error from the first result being reused.

Well, sorry to disagree with you but, in fact, it is almost the exact opposite. When you do fixed (deterministic) mathematical calculations, you do propagate the error generated in the last operation, since you are going to use that number to evaluate the next.

If this CPU is truly probabilistic, then it won't always generate the same error, and that is important. It might be off by .01% less on this evaluation and .01% more on the next, cancelling out the error. This is how probability works. If you know the expected error your calculation will produce, then you know how to work your numbers in order to get it right. That's how GPS gives you a good enough location on your phone, for example (GPS is all about statistics, by the way).
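
A small sketch of the distinction being drawn here (the numbers are invented, and this is not a model of the actual chip -- just the general statistical point that a zero-mean random error shrinks under averaging while a fixed bias does not):

using System;

var rng = new Random(42);
double trueValue = 100.0;
int n = 10_000;

double sumRandom = 0, sumBiased = 0;
for (int i = 0; i < n; i++)
{
    sumRandom += trueValue + (rng.NextDouble() - 0.5) * 2.0;  // random error in [-1, +1]
    sumBiased += trueValue + 1.0;                             // deterministic error: always +1
}

Console.WriteLine($"average with random error: {sumRandom / n:F4}");  // lands very close to 100
Console.WriteLine($"average with fixed bias:   {sumBiased / n:F4}");  // stays at 101, no matter how many samples

Whether the chip's errors really behave like zero-mean noise is exactly the open question in this thread.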


RE: Intel beat them by 15 years.
By clovell on 2/9/2009 4:00:52 PM , Rating: 2
Ah, I wouldn't go that far. Most of our models don't suck because we oversimplify them, but more so because we don't entirely understand what we're doing with them and because we don't have the quantity of data we need.

Furthermore, the source of error may be known, but until we understand its nature better than simply knowing where it comes from, it's difficult to speak to its robustness in different applications.

Last - for biological modeling, you've got to understand that isn't exactly what we're dealing with. Even a conventional processor would begin converging pretty quickly. The efficiency would not be on par, but after running each for a week, you'd probably see something more like a 90 - 95% kill rate from a conventional processor versus 99.9% for a probabilistic one.

In the end, there are many applications in which these would be useful, and I think as we employ them in applications and understand how sensitive and/or robust they are, we'll be better able to utilize them.


RE: Intel beat them by 15 years.
By masher2 (blog) on 2/9/2009 6:01:25 PM , Rating: 2
quote:
Ah, I wouldn't go that far. Most of our models don't suck because we oversimplify them
Take a look at a climate model sometime. They run with grid sizes of one million sq km or more (treating the entire unit as a uniform block), remove dozens of pertinent factors such as the effects of clouds and the hydrologic cycle, and usually include such simplifying assumptions as a flat earth, no day/night cycle, vastly simplified radiative and convective models, etc. They just don't have anywhere near enough computational power to avoid these simplifications.

Even in something like protein folding, there are a great deal of simplifying assumptions made, ones that, had we orders of magnitude more processing power, would not be required and would therefore allow more precise results.


RE: Intel beat them by 15 years.
By rhuarch on 2/9/2009 7:21:31 PM , Rating: 2
One could argue that the inherent element of error would actually improve scientific models, since "nature" itself seems to be based on a probabilistic model (quantum mechanics, DNA, chaos theory?). Introducing that slight element of variability might actually produce better scientific models that more closely mimic nature, without having to waste extra processing power by writing that variability into the model with actual code.


RE: Intel beat them by 15 years.
By mindless1 on 2/9/2009 6:14:58 PM , Rating: 2
You're implying that we wouldn't have the real-time processing capability for the missile (as if we can't already control them?), or that it would arbitrarily take 50 years minus a day longer to compute something else. It is unlikely we could even feed in enough data to keep 50 years' worth of future processor improvements busy for long enough that the day spent processing it would be significant.

Today we are more of a bottleneck than our computers. I agree there could be great gains in some applications, but we are quickly getting to the point where slower processing isn't the limiting factor. On the other hand, reducing cost, size and power consumption for the required processing performance per task may be far more useful.


RE: Intel beat them by 15 years.
By masher2 (blog) on 2/9/2009 6:26:59 PM , Rating: 2
quote:
You're implying that we wouldn't have the realtime processing capability for the missile, (as if we can't already control them?)
We can control a missile in the air, one that has a preset target and doesn't have to maneuver around dynamic obstacles.

We can't reliably control a car on a highway, though -- even when we load up the vehicle with ten times the processing power that would be feasible in any commercial system, and include radar, microwave rangefinders, GPS, and multiple video cameras. A probabilistic CPU very well might change that, and allow precise autonavigation from nothing but video images... something that would ultimately be a requirement if autopiloted vehicles are ever to become a reality.


RE: Intel beat them by 15 years.
By mindless1 on 2/12/2009 12:08:05 AM , Rating: 2
I am not convinced that the inability to control a car comes from a lack of processing power; I very much doubt that it does. Rather, driving on the road involves more variables: road conditions, other drivers, limited sensor ability, integration of the driving logic, and especially that pesky detail of the lawsuits when it doesn't work right.

Video images alone won't do it; we still need the core logic behind it all - the humans are the weakest link. If it were only a matter of processing power, they'd have a 100% success rate in all situations if only they drove slowly enough. And even that is incomplete, since it leaves out vehicle control at elevated speed and in worsening road conditions, which could be calculated with fairly minimal overhead IF the sensors could detect it.

Using only pictures simply won't work. Blinding light is one issue. Another is the angle of approach of the road ahead within the distance in which the automobile could safely stop.


By mindless1 on 2/9/2009 6:04:57 PM , Rating: 2
You are suggesting that the only data involved with image processing is the value of one or two pixels. This is not correct. An off value early on could completely change larger portions of what you see on-screen. Those who overclock video cards know this: the visual errors are obvious, while a couple of off pixels wouldn't be except in comparative screen captures.

Granted, within the same core different logic might be used for different sorts of processing; I'm not sure where some people got the idea that it would require a second discrete processing chip to have both probabilistic and accurate results.


RE: Intel beat them by 15 years.
By freeagle on 2/9/2009 11:34:57 AM , Rating: 3
quote:
games? nope, Scientific calculations? HA HA no


Games? Of course. As the above poster mentioned, 1 or 2 pixels going wrong is not noticeable. Plus, every calculation that needs to make something less precise or blurry (anti-aliasing, depth of field, soft shadowing) would be implemented naturally, several times faster, not slower. Also, AI could behave much more realistically. The bot won't always aim for your head exactly; he will always be a bit inaccurate, and that's how you really want it to behave. But unlike with current CPUs, you'd get that for free.

Scientific calculations? Again, of course. When simulating, you need to inject a sort of random noise, because the process won't go the same way in the real world every time -- something you can achieve with current CPUs only with overhead.


By tastyratz on 2/9/2009 12:16:07 PM , Rating: 2
Precisely.
There are certain applications already stated where this would be phenomenal in practice making a very large difference.
At the end of the day we will always have our general-purpose processor for precision tasks and day-to-day work; this would be better at specific non-critical tasks.
I can see this as a way to miniaturize VoIP hardware considerably, along with other less fussy equipment. Cheaper media devices come to mind, such as HD media players, etc.

Less electricity means less heat so packaging restraints are lessened as well.

Double the battery life on a laptop playing Crysis? Sign me up


RE: Intel beat them by 15 years.
By Cogman on 2/9/2009 12:53:45 PM , Rating: 3
The amount this is off is NOT random. In fact, there is probably a well defined pattern of how far it will be off given a certain input.

1 or 2 pixels not noticeable? You ever have a monitor with 1 or 2 dead pixels on it? A nice fluorescent green color? Yes, you do notice it. And no, anti-aliasing would not be a natural consequence of this. Rather, every pixel will be off from the correct color by some amount. This will result in a noisy, aliased screen. Yeah, that's exactly how I want to game. Let's just add grain to all of our games to make them more realistic!


RE: Intel beat them by 15 years.
By freeagle on 2/9/2009 1:09:29 PM , Rating: 2
If the amount it is off by is not random, then they achieved nothing, because a predictable offset can be produced with current processors.

1 or 2 pixels constantly being wrong in the same way is noticeable. 1 or 2 pixels whose positions change every frame and which always look roughly like they should is not.

AA would be natural if they are really probabilistic chips.


RE: Intel beat them by 15 years.
By Cogman on 2/9/2009 1:15:11 PM , Rating: 2
Go look up random number generators.

Basically, one of the only sources of truly random numbers is available through quantum decay.


RE: Intel beat them by 15 years.
By freeagle on 2/9/2009 1:29:53 PM , Rating: 2
If the result of adding 1.001 + 1.003 will always be 2.005 on these chips, then again, they've done nothing. But I doubt this is the case.
If the difference is dependent on a pseudo-random number taken from a pseudo-random number generator, then again nothing new, because again, we can do that now.

My guess is they somehow managed to make the current for the least significant bits unstable, leading sometimes to a 0 and sometimes to a 1. That effect is hardly predictable, doesn't have a period, and would lead to quite random behavior, but with results near the ones you wanted.


RE: Intel beat them by 15 years.
By Cogman on 2/9/2009 1:47:03 PM , Rating: 2
Hardly. More likely it is a situation where they pass the signal through something to the effect of a ripple adder for the different operations and then just pull out the answer every clock cycle or so, rather than making sure the answer has had time to fully propagate down all the gates.

Yes, it would be faster, and yes, the most significant digits would be accurate while the least significant digits would be less than accurate. Trust me, this isn't a new random number generator; if it were, their chips would be MUCH more interesting to the scientific community. Random numbers are hard to come by.


RE: Intel beat them by 15 years.
By freeagle on 2/9/2009 1:57:57 PM , Rating: 2
If you need something to pass the signal through first to alter it, then you need something extra, which needs energy. So how come they could cut the energy needs to 1/30th?


RE: Intel beat them by 15 years.
By Cogman on 2/9/2009 2:07:05 PM , Rating: 2
Computers work on clock cycles. The basic idea is to give the data to the transistors, wait the needed clock cycles for the signals to fully propagate through them, then move the data at the end to the next set of transistors for processing.

What I am guessing he is doing is creating a ripple circuit: present the two operands to it (so 2 + 4), wait however many cycles he deems good enough to be accurate enough, and then pull the data out from the end. Rather than ensuring the data is valid at each step, he just applies it all at once and hopes it is right in the end.

This would save on power, because you could potentially push a sqrt through in 1 clock cycle versus the 2-40 (depending on which instruction you use) it takes on a modern CPU. That means the CPU runs faster with little additional power requirement, and needs less circuitry and fewer lines in the end to ensure correct answers.
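
A toy model of that guess in C# (this is speculation from the comment above, not a description of the real chip; the 16-bit width, the cutoff of 4 stages, and the function name ApproxAdd are all invented):

using System;

// Ripple-carry adder that gets "read out" before a long carry chain has
// finished settling: any carry is simply dropped after maxCarryHops stages.
uint ApproxAdd(uint a, uint b, int maxCarryHops)
{
    uint sum = 0, carry = 0;
    int ripple = 0;   // how many consecutive stages the current carry has travelled
    for (int bit = 0; bit < 16; bit++)
    {
        uint ai = (a >> bit) & 1, bi = (b >> bit) & 1;
        sum |= (ai ^ bi ^ carry) << bit;
        uint carryOut = (ai & bi) | (carry & (ai ^ bi));
        if (carryOut == 0) ripple = 0;
        else if (++ripple > maxCarryHops) { carryOut = 0; ripple = 0; }  // carry cut short
        carry = carryOut;
    }
    return sum;
}

uint x = 0b0000_0011_1111_1111, y = 1;   // 1023 + 1: worst case, a 10-stage carry chain
Console.WriteLine($"exact:  {x + y}");                             // 1024
Console.WriteLine($"approx: {ApproxAdd(x, y, maxCarryHops: 4)}");  // 992: the cut-short carry costs 32

Most operand pairs don't produce carry chains that long, so most additions come out exact; the rare long chains are where the "less than accurate" low digits show up.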


RE: Intel beat them by 15 years.
By freeagle on 2/9/2009 2:15:48 PM , Rating: 2
If so, I don't see how that is a probabilistic chip.


RE: Intel beat them by 15 years.
By icanhascpu on 2/13/2009 1:23:19 PM , Rating: 2
There is no such thing as "truly random" -- simply things we understand and things we don't.


RE: Intel beat them by 15 years.
By Cogman on 2/9/2009 1:27:27 PM , Rating: 2
BTW, how would AA be a natural consequence? Images aren't calculated with ray tracers; they are calculated with rasterization. It might be a natural consequence if it allows ray tracing to become feasible.

Anti-aliasing is more than just making things blurry; it is computing pixel values on edges (not everywhere) based on the pixels around them. This would mix up the blue or the red or the green value of any given pixel without regard for the pixels around it.


RE: Intel beat them by 15 years.
By freeagle on 2/9/2009 1:33:10 PM , Rating: 2
I've given an idea below


RE: Intel beat them by 15 years.
By Cogman on 2/9/2009 1:49:24 PM , Rating: 2
Your idea is fundamentally flawed in that you said "Just reprocess the same pixel more than once!" As I said before, AA relies on being able to get data about the pixels around the current one. This method doesn't somehow magically make that known to the processor.


RE: Intel beat them by 15 years.
By freeagle on 2/9/2009 2:05:29 PM , Rating: 2
That's because by "AA" you're imagining the process, not the result. As I wrote, the process would need to be adapted to use what you have available.

You don't need to know the values of neighboring pixels. You can draw your pixel several times, each pass with 1/(number of passes) of the color's intensity, and blend the results. That way, areas further from the exact position of the pixel will have only a fraction of the desired color and will get mixed with the values from drawing neighboring pixels.
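
A rough sketch of that multi-pass idea (purely illustrative; the "scene" is just a hard edge at x = 0.4, and the jitter range, pass count, and function name Shade are invented):

using System;

// Shade one pixel several times at jittered sub-pixel positions, each pass
// contributing 1/passes of the intensity, and let the blend approximate coverage.
double Shade(double x) => x < 0.4 ? 1.0 : 0.0;   // white left of the edge, black right of it

var rng = new Random(7);
int passes = 16;
double pixelCentreX = 0.5, blended = 0.0;

for (int i = 0; i < passes; i++)
{
    double jitter = rng.NextDouble() - 0.5;            // somewhere within the pixel footprint
    blended += Shade(pixelCentreX + jitter) / passes;  // 1/passes of the intensity per pass
}

Console.WriteLine($"single sample at the centre: {Shade(pixelCentreX)}");  // a hard 0
Console.WriteLine($"blended over {passes} passes: {blended:F2}");          // somewhere around 0.4: a grey edge pixel

Whether the jitter comes from an unstable low-order bit or from a software random number is beside the point here; the blending is what softens the edge.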


RE: Intel beat them by 15 years.
By masher2 (blog) on 2/9/2009 1:57:04 PM , Rating: 5
quote:
1 or 2 pixels not noticeable? You ever have a monitor that has 1 or 2 pixels dead on it? Yes, you do notice it
When 1 or 2 pixels on your fast-moving game image are color 0xFFEEAC rather than their "true" value of 0xFFEDAC, you most certainly will not notice it... especially when, within 1/60 of a second, those pixels are gone and replaced with 1 or 2 more slightly inaccurate pixels somewhere else.

Your eye notices stuck pixels on a monitor precisely because they don't move. They remain fixed, both in location and color.
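
For what it's worth, those two example colors differ by a single step in one channel -- a quick sketch of pulling them apart (the color values are just the ones quoted above):

using System;

int a = 0xFFEEAC, b = 0xFFEDAC;
int dr = ((a >> 16) & 0xFF) - ((b >> 16) & 0xFF);   // red channel
int dg = ((a >> 8) & 0xFF) - ((b >> 8) & 0xFF);     // green channel
int db = (a & 0xFF) - (b & 0xFF);                   // blue channel

Console.WriteLine($"deltaR={dr}, deltaG={dg}, deltaB={db}");   // deltaR=0, deltaG=1, deltaB=0

One part in 255 on a single channel, on a pixel that is replaced a sixtieth of a second later.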


RE: Intel beat them by 15 years.
By SlyNine on 2/9/2009 3:05:49 PM , Rating: 2
Like I said above, don't video cards have a slight inaccuracy in their calculations anyway? When ATI's 9700 Pro came out, floating point accuracy was a big deal because its shaders could calculate numbers out into the decimal places (or something like that). Part of the reason GPUs were not as useful for scientific uses in the past was that they were not accurate enough. Now they are, but being that accurate takes more time and lowers their potential GFLOPS -- isn't this just the same thing???


RE: Intel beat them by 15 years.
By icanhascpu on 2/13/2009 1:27:00 PM , Rating: 2
quote:
1 or 2 pixels not noticeable? You ever have a monitor that has 1 or 2 pixels dead on it?


Except that the comparison is ridiculous and has nothing to do with how it would work. Do you really think it's going to stick a few permanent neon pixels on your screen?

Lame. Lame argument.

Also I guess you've never played Mass Effect.


RE: Intel beat them by 15 years.
By Meadows on 2/10/2009 3:47:42 AM , Rating: 2
I don't mean to pop your balloon, but antialiasing makes things more precise. Aliasing is when the edges drawn are rough approximations and don't represent the real spatial edges behind the calculations - ergo, you see an "alias" of the smooth straight edge that should really be there.

The popular and fast methods already work by approximation, but there are truer modes, such as oversampling (which you can force on your videocard) that give more natural results and a stomach-churning framerate.


RE: Intel beat them by 15 years.
By freeagle on 2/10/2009 5:33:57 AM , Rating: 2
quote:
but antialiasing makes things more precise.


As a result, yes. The non-antialiased version is precise in a continuous sense; antialiasing methods try to map that onto discrete precision. But there is no true method -- you'd have to oversample to infinity to get the true discrete result you need. The "approximation" I proposed would be good enough while boosting performance. You probably won't see much difference between infinite oversampling and the approximation in the fast-changing scenes that games have.


RE: Intel beat them by 15 years.
By HollyDOL on 2/9/2009 12:19:05 PM , Rating: 2
Man, you obviously know nothing about computer arithmetic. Computers are anything but exact. In the real world, in the worst case you calculate with real numbers; if you are a bit into tech, you use complex numbers. Computers know _ONLY_ natural numbers by default; everything else is a calculation built upon them. A computer IS NOT capable of working with true real numbers, since it is a discrete device and real numbers are not discrete.

A few practical examples...
sine(argument) = .... not an exact value
sqrt(argument) = .... not an exact value

float f = 0.0f;
for (int i = 0; i < 10; i++) f += 0.1f;   // 0.1 has no exact binary representation, so the sum drifts
if (f != 1.0f) Console.WriteLine("Hmm, there is something weird... ten 0.1s don't add up to 1.0");

Sum up random numbers in the range of... let's say 1E-80 to 1E+80. Depending on what you do, you might end up with the result 1E+80, which obviously is wrong. I can give an explanation if anybody wants one.

Just to sum these numbers up as correctly as possible you have to use more advanced structures (a plain array won't do).

Honestly, lots and lots of calculations could be run using these probability circuits. Even scientific ones; I can imagine (depending very much on the implementation) the probability circuit could end up with a more exact value than current CPUs.


RE: Intel beat them by 15 years.
By Cogman on 2/9/2009 12:47:35 PM , Rating: 1
Sorry, but computers are really very exact. Yes, they only carry a floating point value accurately out to about 20 places... but come on, you're basically saying "Well, they aren't exact anyway, so let's make them even less exact!"

This thing is saying let's make sure the big, most significant digits are OK and forget the less significant ones. So if I have 1001, it would only guarantee that the 1000 is correct. Can you imagine indexing an array and being consistently off by 1 or 2 places?

Why do you think it was a major bug for Intel when their floating point unit did exactly the same thing? (Well, it might not have been the EXACT same thing, but it was similar enough.)

And who's to say that these numbers are truly random? A statistical analysis that is consistently lower than expected is pretty worthless.

And HA HA HA! at you for saying that these could come up with MORE exact answers than a regular CPU. It shows that you truly have no clue how floats work in a computer. They aren't some magical random assortment of bits; they are measured. With the number of bits given, the calculations done with current computers on floats are pretty much as close as you can get to the real answer given the limitation of the size of the float. There is no way that some probabilistic computer is going to all of a sudden be able to come up with more accuracy with the exact same number of bits. It will always be off.


RE: Intel beat them by 15 years.
By HollyDOL on 2/9/2009 1:09:47 PM , Rating: 2
You've just proven you know nothing.

Please write down 0.2 in binary here to prove your statement. It isn't that difficult a number, is it? It's merely 1/5.
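
For the record, here is what actually gets stored when you write 0.2, printed with enough digits to show it is not exactly one fifth (a small C# sketch; the "G9"/"G17" formats just force enough significant digits out):

using System;

float f = 0.2f;
double d = 0.2;
Console.WriteLine(f.ToString("G9"));    // 0.200000003         (32-bit float)
Console.WriteLine(d.ToString("G17"));   // 0.20000000000000001 (64-bit double)

In binary, 1/5 is the repeating fraction 0.001100110011..., so any finite number of bits has to cut it off somewhere.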


RE: Intel beat them by 15 years.
By Cogman on 2/9/2009 1:18:59 PM , Rating: 1
How on earth does that disprove or prove my statement? Guess what: in the probabilistic CPU, 1/5 will be represented EXACTLY the same way as it is in any finite machine. However, instead of doing a calculation and coming out with 0.20000001, you get 0.200004832.

How about you actually read what I post before asking me to "prove" myself.


RE: Intel beat them by 15 years.
By HollyDOL on 2/9/2009 1:54:19 PM , Rating: 2
I give up. Those who know something about computers know. You said that computers are very exact machines. With my 0.2 -> binary example I was trying to prove to you that computers are very far from being exact.

Also, you obviously mistake this processor for a "random" number generator (using quotes because there is no such thing as random with computers) without having the slightest clue about its function. Nobody here has a clue how this new chip works or how precise it is.

Just live with the fact that the only exact numbers a computer is capable of are integers, within their overflow limits.


RE: Intel beat them by 15 years.
By Cogman on 2/9/2009 2:21:37 PM , Rating: 1
I don't think I ever said anything to the contrary. If I did, I apologize.

My point was that sqrt(.2) on a modern processor will always be pretty close to the real value 0.44721359549995793928183473374626. Yes, the result is off because it is really doing the sqrt of 0.2000000000000000000001, but it is always going to be the same, and is always going to be really pretty dang close to the actual value. (and it will always return the same result, every single time).

With the probability processor, you will still do the sqrt of 0.20000000000000000000001, as the representation will be exactly the same, but instead of getting 0.44721359549995793928183473374626 every time, you might get 0.44721359549995793965468462168163 instead, or see even greater variance. So saying the results will be more accurate just seems pretty laughable to me.

My statement that computers are very exact machines stands. Given the same input they ALWAYS give the same output, ALWAYS (OK, random background radiation bombardment, power fluctuations, and visits from ET might cause the computer to produce different results once every 1000000000000000000000000000000 calculations, but certainly not every time). I never said that computers can represent every number under the sun; you were the one trying to make me say that (to which I responded that it doesn't prove or disprove my point in the least).

I never said the processor was random; in fact I said the opposite MORE than once. Read my posts before arguing with me and claiming I know nothing about computers.


By masher2 (blog) on 2/9/2009 2:27:46 PM , Rating: 2
quote:
My statement that computers are very exact machines stands. Given the same input they ALWAYS give the same output,
You're confusing deterministic with probabilistic. Two different concepts.

A probabilistic calculation can be deterministic or not. The former simply means it uses probabilistic algorithms to calculate results...but those results could still be identical for any given set of inputs, no matter how often the calculation is repeated.


RE: Intel beat them by 15 years.
By HollyDOL on 2/9/2009 3:02:51 PM , Rating: 2
Well, I meant sqrt of some integer value like 2, 3, 5, 7...

My point about the probability CPU is that we don't know how this implementation works. It could internally work with... who knows, 256-bit floats (just as Intel's CPU FP subunits are 128-bit, unless I remember wrong).

Given the same input, you always get the same output? Sadly, they don't have to give you the same result.

Let's take a set of real numbers (in math, set A and set B are equal if every x from A is also contained in B and every y from B is also contained in A -- the definition says nothing about order). Our task is to sum them up. Now comes the trouble: depending on the order you sum them in, you can get very different results. If you have one very big number and billions of very small numbers, the result comes out equal to that very big number if you happen to add the big one somewhere near the beginning, causing the small numbers to be swallowed by rounding. The trouble is that to get decent results summing floats you have to sort the set and put it into a binary tree so that you always add the two smallest numbers first (thus turning an unordered set into a deterministic bin-tree). You can try this yourself with an ordinary set and with such a binary tree; without the tree you'll blow up much faster on rounding error. -> A computer is NOT an exact machine, since in real-life math a+b+c+d+e+.......+zzzzzzzzz = zzzzzzzzz+.....+d+a+b+c+e; in computer floating point math, not even close.

You can get such a set, for example, if you parse several data sources multithreaded -- you can't say in what order your floats will arrive.
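
A quick sketch of the "order matters" point (the particular values -- 1e8 as the big number and a million 1.0s -- are chosen only so the effect is visible in 32-bit floats):

using System;

float big = 1e8f;
int count = 1_000_000;

float naive = big;
for (int i = 0; i < count; i++) naive += 1.0f;       // each 1.0 is below half an ulp of 1e8 and is rounded away

float smallFirst = 0.0f;
for (int i = 0; i < count; i++) smallFirst += 1.0f;  // exact: one million is well inside float's integer range
float ordered = big + smallFirst;

Console.WriteLine($"big number first:   {naive}");    // 1E+08 -- the million ones vanished
Console.WriteLine($"small values first: {ordered}");  // 1.01E+08

Same set of numbers, two different answers, and no probabilistic hardware anywhere in sight.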


By JoeBanana on 2/9/2009 2:08:53 PM , Rating: 2
I agree it can't come out with more exact numbers.

But I agree that it can be used for scientific calculations. Even numerical methods that are used today for computing differentials, integrals, ... have errors in them.

If you measure your height you probably won't give a crap if you are off by 1cm... but on the other hand if you measure some other organ... well the difference matters.

So the lesson of this story is size matters based on what you are measuring. Same goes for scientific calculations.

Another example is using PI. I, for example, am totally satisfied with 3.1416. But if you want a very precise circumference of a large circle you may wanna use PI with 100 decimals.

It is just a question of how big of an error can you tolerate.

sry 4 crappy Anglesch


RE: Intel beat them by 15 years.
By masher2 (blog) on 2/9/2009 2:19:43 PM , Rating: 2
quote:
Yes, They only take a floating point accurately out to about 20 places... But come on your saying that basically "Well, they aren't exact anyways, so lets make them even less exact!"
More precisely, we're saying something like, "what if we had a more powerful chip that allowed us to represent that floating point value to 40 places instead of 20...but with the last 3 or 4 digits incorrect".

Now, can you see how that would lead to higher accuracy in the results, despite the probabilistic nature of the calculations?


RE: Intel beat them by 15 years.
By Cogman on 2/9/2009 2:27:46 PM , Rating: 2
So you are saying use a 128-bit float representation and throw away the last few bits. OK, you got me, that could be useful in the case of higher-precision floating point. However, it's going to require a lot of changes. We are still dragging our feet with 32-bit programs (some even still use 16-bit float representations, yikes). Something like this would be useful with much higher precision floating point calculations. Though I'll wager that the longer the float, the slower the processor goes.

For many 16 or 32 bit floats, the precision is pretty well needed. And for integer calculations it's something that I would be completely against; those have to be exact in almost all cases. (Who would have thought that floating point calculations would go faster than integer calculations?)


RE: Intel beat them by 15 years.
By HollyDOL on 2/9/2009 2:49:48 PM , Rating: 2
In fact, GPUs are able to work on a whole 4x4 matrix of floats (16/32-bit depending on the GPU) in one step, so we might even get to the point where floating point is faster than integer... Of course, using a normal CPU with float/double to draw simple lines or circles on the screen would be VERY slow, and in general every decent developer avoids using floats wherever he can.


By masher2 (blog) on 2/9/2009 2:52:51 PM , Rating: 2
quote:
Ok, you got me, that could be useful in the case of higher precision floating points. However, its going to require a lot of changes. We are still dragging our feat with 32bit programs
You're confusing the bit size of the architecture with the floating point size. In the 80s, I performed 128-bit FP calculations (via a software library) on old 8/16-bit computers, and even today Intel's 32/64-bit chips support 80-bit FP numbers in hardware (plus 128-bit-wide SSE registers).

Floating point calculations can be segregated fairly easily from the main program, and their precision doesn't affect the instruction set or address space size. Also, since a program with a large number of FP calculations will tend to spend most of its time doing just that, there's usually a big 'bang for your buck' payoff in optimizing them.

But again, if you're seeing this as some sort of general purpose CPU, you're not seeing the big picture. This would be primarily useful for offloading certain known functions such as DSP, transcoding, rendering, etc -- not offering this as an alternative to a traditional CPU.


RE: Intel beat them by 15 years.
By Lugaidster on 2/11/2009 10:36:44 PM , Rating: 2
Everyone is confusing precision with accuracy; both are concepts from statistics. A precise data set is one whose values aren't far off from each other (2.01, 2.00, 1.99), but precision does nothing by itself if the result you are expecting is very different (say 1.5 in the example above). An accurate data set is one that centers on the desired value even if the individual values are off (1.5, 2.0, 1.3). The latter set would be accurate, while the former would be more precise.

CPUs today are accurate in their calculations, with a varying precision. With a probabilistic chip you would lose some accuracy, not precision.
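
A tiny numeric restatement of that distinction, using the sample values from the comment:

from statistics import mean, pstdev

target = 1.5
precise_but_off = [2.01, 2.00, 1.99]    # tightly clustered, but far from the target
accurate_but_noisy = [1.5, 2.0, 1.3]    # scattered, but centered near the target

for name, data in [("precise", precise_but_off), ("accurate", accurate_but_noisy)]:
    print(name,
          "spread:", round(pstdev(data), 3),               # precision: small spread
          "offset:", round(abs(mean(data) - target), 3))   # accuracy: small offset from target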


RE: Intel beat them by 15 years.
By Yojimbo on 2/9/2009 1:16:43 PM , Rating: 2
Since computers hold only finite precision, almost all calculations are already approximations with a margin of error, and this increases that margin. I wonder, though, whether the overall margin of error using this method could actually be improved compared with traditional methods: if you can run more accurate algorithms more quickly, there could be a net gain in accuracy. I don't know much about numerical analysis. Perhaps today the error can be bounded with 100% confidence; with this method of computing it seems that would not be possible.
But for computing where you are dealing with confidence intervals anyway, probabilistic computing could be very useful.


RE: Intel beat them by 15 years.
By JoeBanana on 2/9/2009 2:19:28 PM , Rating: 2
Nice idea. I passed a numerical analysis class... Numerical methods (like Runge-Kutta, Newton's, Euler's...) get closer to the right result the more steps you take. It's an interesting idea, and I wonder whether a chip that computed less exactly but could take four times as many steps in the same time would come out closer to the exact result.
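
A quick way to play with that question, using Euler's method on y' = y as a stand-in (the 1e-6 noise level and the 4x step count are arbitrary assumptions, not measurements of the real chip):

import math, random

def euler_exp(steps, rel_noise=0.0, seed=0):
    """Euler's method for y' = y, y(0) = 1, integrated to t = 1 (true answer: e).
    Each update can be perturbed by a small relative error to mimic inexact hardware."""
    rng = random.Random(seed)
    y, h = 1.0, 1.0 / steps
    for _ in range(steps):
        y *= 1.0 + h
        if rel_noise:
            y *= 1.0 + rng.uniform(-rel_noise, rel_noise)
    return y

exact_coarse = euler_exp(1_000)                  # exact arithmetic, 1,000 steps
noisy_fine = euler_exp(4_000, rel_noise=1e-6)    # noisy arithmetic, 4x the steps

print(abs(exact_coarse - math.e))   # ~1.4e-3: truncation error dominates
print(abs(noisy_fine - math.e))     # typically a few 1e-4: smaller, despite the injected noise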


RE: Intel beat them by 15 years.
By Yojimbo on 2/24/2009 2:41:39 AM , Rating: 2
yeah, exactly! ...err...close enough!


RE: Intel beat them by 15 years.
By masher2 (blog) on 2/9/2009 1:51:55 PM , Rating: 2
quote:
So now, what can you use a device for that you KNOW will give wrong answers? ...Scientific calculations? HA HA no.
There hasn't been a scientific calculation done yet that yielded precisely the correct answer. Ever. There is always a certain degree of error.

Sometimes the error comes from the measurements, sometimes from the calculations themselves, particularly in larger, more complex modeling, where large numbers of simplifying assumptions need to be made to solve the problem in real time at all.

You're also forgetting that even current chips have a certain degree of error when performing floating point calculations. The answers are still 'close enough' for most cases... but calculations that require 100% precision don't use them; they use a (much slower) fixed-point library instead.

Despite what you may believe, a probabilistic chip will actually be incredibly useful in many branches of scientific computing. A massive increase in computing power in exchange for a slight increase in error is a tradeoff that anyone in, say, fluid dynamics or even stock market simulation would be very happy to make.


RE: Intel beat them by 15 years.
By emboss on 2/9/2009 4:21:14 PM , Rating: 2
quote:
There hasn't been a scientific calculation done yet that yielded precisely the correct answer. Ever. There is always a certain degree of error.


I've written lots of "scientific calculation" code that provides exact answers. Either because the calculation is over finite fields (some QM problems fall into this category), or in the case of one computational geometry problem, the answer was required as part of a mathematical proof so was worthless unless it was exact (this one was really fun, since it had to do calculations using algebraic numbers ...).


By masher2 (blog) on 2/9/2009 6:12:00 PM , Rating: 2
Strictly speaking, mathematics is distinct from science. Most mathematical calculations are exact...however, when you translate from your mathematical model to the real world, the result becomes inexact.

Even when working with Galois fields, if you look at your results closely, you'll see that, once applied to the real world, uncertainty creeps in, if only through the introduction of one or more physical constants, which are taken by measurement.


RE: Intel beat them by 15 years.
By wordsworm on 2/10/2009 12:22:39 AM , Rating: 2
"Computers work so well because they are accurate. Take that away from them and all the sudden you have a worthless machine that can't do squat. "

That's not entirely true at all. Floating point calculations are all about dealing with the fact that a lot of math isn't solvable - that is to say, what exactly is pi? 2/3=.6666...7. Eventually, in a computer, you're going to get to that rounded up 7. Or, conversely, 1/3=.33333...3, eventually rounded down. They're just talking about making it less accurate.


RE: Intel beat them by 15 years.
By dsumanik on 2/10/2009 11:50:08 AM , Rating: 2
I disagree.

This could be used as a general-purpose CPU for those applications.

I guarantee the scientists can't predict exactly how far and in which direction the Mars probe will bounce when it lands.

When launching a nuke, or another high-explosive missile or projectile, in most cases accuracy within a few feet is MORE than enough to obliterate the target.

Changes in mid-flight wind speed/temperature/humidity etc. are more likely to cause a wider margin of error than 0.0000001 in a calculation.

Perhaps instead of keeping things accurate to the full 32 or 64 bits, we could keep it to 8 significant digits for a "close enough" calc.

34561.02303403450605656 can pretty much be substituted with 34561.023 in any D = RT calc and you'll get comparable, accurate results; other factors that are far harder to measure will likely cause a greater margin of error.
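
For what it's worth, the relative error of that substitution is easy to check (the travel time below is arbitrary):

full = 34561.02303403450605656
rounded = 34561.023

print(abs(full - rounded) / full)    # ~9.8e-10, about one part per billion
t = 2.5                              # hours, arbitrary
print(abs(full * t - rounded * t))   # ~8.5e-5 distance units of drift in D = R * T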


RE: Intel beat them by 15 years.
By JonnyDough on 2/11/2009 7:43:35 PM , Rating: 2
You're exactly right. This would only be good for visual calculations and game physics. For computing anything with numbers that need to be exact, this won't work. Besides, what can they really hope to achieve with this? I doubt they'll be able to go much farther than where we'll end up once we hit the manufacturing limits behind Moore's Law (I hate quoting it). You can only go so small before you're left with nothingness. They're trying to shoot past accuracy and into electron-shedding territory. It's kind of silly IMO, unless it leads to some better discovery that brings us back to accurate computations.


RE: Intel beat them by 15 years.
By Moishe on 2/9/2009 9:17:03 AM , Rating: 2
Hahaha.... exactly. But hmm... was that chip faster as a tradeoff? POS without speed is just POS.

POS with an increase in speed = awesome!


By MrBlastman on 2/9/2009 11:25:48 AM , Rating: 2
I'm curious... What is the probability of this technology becoming commercially viable? ;)

Okay, all jokes aside, perhaps with video decoding and shading/lighting etc. in graphically intensive programs such as games - I think this has a use. With scientific data or business data, I doubt it does.

Well, wait. I'd better rephrase that. Businessmen are generally very fond of statistics, so I'd say that in business it has a 10% probability of being used 69% of the time... that is, statistically speaking.

With a probability generator we can summarily determine the outcome of the future, part of the time! The uses for this are endless. ;)

Insurance companies will love this - they can statistically determine through their probability generator whether an applicant gets a high rate or a low rate. If the applicant contests the viability of the probability result, they can input the data into a probability generator of their own, which, statistically speaking, given a probable result, can only result in a less-than-pure probable outcome of a probability!

Hey, you - you insurance salesman... stop drooling over this technology. I'm calling you out on this now. ;)


RE: Intel beat them by 15 years.
By arazok on 2/9/2009 11:53:10 AM , Rating: 5
This will rock in first person shooters.

“You probably killed [DOA]killaz4”

“You were likely killed by sircampzalotFTW”


RE: Intel beat them by 15 years.
By MrBlastman on 2/9/2009 11:59:11 AM , Rating: 3
"You probably need to spawn more Overlords"

"You might need more vespene gas."


RE: Intel beat them by 15 years.
By freeagle on 2/9/2009 12:51:13 PM , Rating: 5
Your connection might be lost


RE: Intel beat them by 15 years.
By icanhascpu on 2/13/2009 1:29:03 PM , Rating: 2
This would improve WoW's RNG greatly!


He's right
By freeagle on 2/9/2009 9:01:56 AM , Rating: 4
quote:
Professor Palem's chips could revolutionize fields such as computer-generated graphics, content streaming


Indeed, if the CPUs produce some noise in the graphical output, the resulting image might actually be more visually appealing to the human eye, because the perfectly sharp images produced by current deterministic CPUs look somewhat unnatural to it. For example, we use anti-aliasing techniques in games to blur the sharp edges of objects, creating a more realistic image, but at a performance cost. These CPUs could naturally produce anti-aliased images with a huge performance gain.

They could be used for many calculations working with Monte Carlo technique, making them several times faster. Ray-tracing could be sped up, maybe even to the level of current raster computations, while resulting in better image quality.

And I could also see a use for simulation of real world processes, as there is always some random noise, which you'd get practically for free with these CPUs.
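
A small sketch of why Monte Carlo methods are so tolerant of this kind of hardware: estimating pi by random sampling, once with clean arithmetic and once with each distance check perturbed by a simulated hardware error (the 1e-4 noise level is a made-up stand-in for whatever the chip actually does):

import random

def estimate_pi(samples, rel_noise=0.0, seed=42):
    """Monte Carlo estimate of pi, optionally perturbing each distance calculation."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        d2 = x * x + y * y
        if rel_noise:
            d2 *= 1.0 + rng.uniform(-rel_noise, rel_noise)   # simulated arithmetic error
        if d2 <= 1.0:
            hits += 1
    return 4.0 * hits / samples

print(estimate_pi(100_000))                   # ~3.14, with roughly +/-0.005 statistical error
print(estimate_pi(100_000, rel_noise=1e-4))   # nearly the same: sampling noise dwarfs the hardware noise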




RE: He's right
By Moishe on 2/9/2009 9:18:55 AM , Rating: 2
So we'll be able to eat our cake too... "Freakin' sweet!"


RE: He's right
By dani31 on 2/9/2009 9:44:27 AM , Rating: 4
I concur. Current graphic technology is limited by its own precision. Probabilistic computing could do things like depth of field and antialiasing by reducing processor workload instead of dramatically increasing it, the same way the human brain does it.


RE: He's right
By afkrotch on 2/9/2009 12:19:23 PM , Rating: 1
Did you read the article? These chips aren't going to naturally incorporate AA/AF. They instead produce missing pixels.

quote:
While some artifacting might occur, it would only be a few missed pixels -- the overall image would remain.


I'd imagine enabling AA/AF would just increase the probability of more missing pixels. Congrats, now your image has 1/4 of its pixels missing, but you're getting 150 fps at 2560x1600 with max AA/AF settings.

Throw on other items that may get offloaded onto the GPU. Like AI/physics. Your teammates now shoot you in the back and when you jump, you sort of fly instead.


RE: He's right
By freeagle on 2/9/2009 12:49:56 PM , Rating: 2
I have read the article. No chip "naturally incorporates" AA/AF; it provides the tools needed to achieve them. The complete procedure lives in the card's firmware/drivers, not solely in the chip.

And this chip might give you other tools that could achieve the same or a comparable effect while boosting performance. Instead of scaling up the resolution (which increases memory usage and degrades performance, since you compute several times more pixels than with no AA) and then scaling it back down and averaging every pixel, you could use a different approach: render each pixel several times, with the chip drawing it "nearly" in the exact position each time, producing the desired blur. The total number of pixels drawn might not change, or might even be a bit lower, but the memory requirements stay the same and every pixel is rendered 7 times faster.

The chip is different, so you will need to adapt the current techniques to get the desired effects, but you'll get them considerably faster.
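
A toy sketch of that idea (purely illustrative; the jitter amplitude and the hard-edged "scene" are made up): each pixel is sampled a few times at slightly perturbed positions, standing in for a noisy coordinate calculation, and the samples are averaged, which softens the edge.

import random

def scene(x, y):
    """A hard-edged 'scene': white above the diagonal, black below."""
    return 1.0 if y > x else 0.0

def render(width, height, samples=4, jitter=0.5, seed=0):
    rng = random.Random(seed)
    img = []
    for j in range(height):
        row = []
        for i in range(width):
            acc = 0.0
            for _ in range(samples):
                # Perturb the sample position slightly, like an inexact coordinate calc
                x = (i + 0.5 + rng.uniform(-jitter, jitter)) / width
                y = (j + 0.5 + rng.uniform(-jitter, jitter)) / height
                acc += scene(x, y)
            row.append(acc / samples)   # averaged coverage -> soft, anti-aliased edge
        img.append(row)
    return img

for row in render(8, 8):
    print(" ".join(f"{v:.2f}" for v in row))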


RE: He's right
By gstrickler on 2/9/2009 2:10:31 PM , Rating: 3
quote:
Did you read the article? These chips aren't going to naturally incorporate AA/AF. They instead make missing pixels.
Just because Jason Mick used that as an example doesn't mean it wouldn't be useful for AA (or even that it's a "real" example).

This could be extremely useful in audio, photo, or video encoding and/or decoding. The algorithms we use now are deliberately lossy for better compression, so additional slight errors (especially on decoding) would be essentially unnoticeable. Any errors introduced would be less noticeable on audio or video because it's constantly changing anyway so you don't have as much opportunity to identify minor errors.

It can also be very useful in simulations (including some types of scientific calculations). Imagine you have 100,000 scenarios to test. Now, imagine that you can run them on a probabilistic CPU at 7x the speed and while those results might not be accurate enough for a final conclusion, they're close enough that you can eliminate most scenarios and identify just 1%-5% of them that look promising. Then, you run that 1%-5% on a traditional CPU. Total time = 100,000/7 = 14,286 units + 5,000 (verification on traditional CPU) units, or 19,286 units. End result, you get your results in 1/5 the time (and you've used just a fraction of the power as well).
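
A schematic version of that workflow (fast_noisy_score and exact_score are hypothetical stand-ins, not real APIs; the 3% noise, the 5% keep rate, and the divide-by-7 cost model are assumptions taken from the comment):

import random

rng = random.Random(1)
scenarios = [rng.uniform(0, 100) for _ in range(100_000)]   # true "quality" of each scenario

def fast_noisy_score(s):
    """Stand-in for a probabilistic-chip evaluation: right answer +/- a few percent."""
    return s * (1.0 + rng.uniform(-0.03, 0.03))

def exact_score(s):
    """Stand-in for the slow, exact evaluation."""
    return s

# Pass 1: cheap noisy screening, keep the most promising 5%
shortlist = sorted(scenarios, key=fast_noisy_score, reverse=True)[: len(scenarios) // 20]

# Pass 2: exact evaluation only on the shortlist
best = max(shortlist, key=exact_score)

cost = len(scenarios) / 7 + len(shortlist)   # time units, per the comment's model
print(best, cost)   # the best scenario is found; cost ~19,286 vs 100,000 for exact-only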


RE: He's right
By mindless1 on 2/9/2009 6:19:29 PM , Rating: 2
Not necessarily more visually appealing. If the noise were uniform and deviated only slightly from the correct values, that would be possible; but it could instead be off by quite a lot, and you'd see what many overclockers call snow or sparklies, which do not enhance the imagery at all.


RE: He's right
By freeagle on 2/9/2009 8:19:41 PM , Rating: 2
What many overclockers observe is the result of circuits producing absolutely incorrect values due to overheating, excessive voltage and so on. These chips calculate nearly the value you need.


RE: He's right
By mindless1 on 2/12/2009 12:01:03 AM , Rating: 2
Yes, putting the wrong pixel in the wrong place is "almost right"... it did manage to calculate a pixel and put it on screen, with getting the right values for it apparently being the least of its concerns.

Almost right in the digital world isn't like almost right in the analog one. In the analog world your senses may not notice the difference, but in the digital one, future calculations depend on the results of prior ones: not just the color of a pixel, but the entire operation of the video card, where outputting a certain pixel color once everything else is done is only the final step. If we were only talking about that last stage it would be a different matter.


RE: He's right
By freeagle on 2/15/2009 7:45:10 AM , Rating: 2
When you render a frame, you don't take the previous frame as input, so the errors cannot accumulate, and the resulting inaccuracies in the picture would be minor. The only exception I can think of right now is motion blur, where you blend two or more previous frames together to achieve the effect. But that's post-processing: you don't store the blurred result, you store the picture before the blur, which is independent of the previous frames.


RE: He's right
By SlyNine on 2/10/2009 2:04:54 AM , Rating: 2
You need to back off the clock speed if you're seeing that, or use another means to cool your card.


RE: He's right
By mindless1 on 2/11/2009 11:56:19 PM , Rating: 2
No kidding, because we want to avoid nearly right calculations and have exactly right ones.


reinventing the wheel
By neo64 on 2/9/09, Rating: 0
RE: reinventing the wheel
By freeagle on 2/9/2009 10:33:30 AM , Rating: 2
But not every process run on computers needs that accuracy. And if you can trade off the unneeded precision for increase in performance, then why not?


RE: reinventing the wheel
By JoeBanana on 2/9/2009 10:57:59 AM , Rating: 2
This is obviously not meant for exact calculations. There are applications where exact results are not necessary, like decoding a movie, game graphics processing, ...

It is obvious you wouldn't use this in banking, numeric calculations... quite the contrary: banks keep their numbers in BCD representation to avoid the errors that creep in when converting between decimal and binary.


RE: reinventing the wheel
By freeagle on 2/9/2009 11:10:32 AM , Rating: 2
What difference? 10 = 1010b = 0xA; 15 = 1111b = 0xF; 240 = 11110000b = 0xF0.

The numbers are the same, only the base is different. BCD's advantage is that you can easily extract the individual digits. But computing with them is not straightforward, and they give you no extra "precision".


RE: reinventing the wheel
By IGoodwin on 2/9/2009 11:46:44 AM , Rating: 2
You are correct that one fixed-size integer format is as exact as any other. But once a floating point format is adopted, you have tacitly accepted that you are approximating a numeric result to its most significant digits, however many that happens to be.


RE: reinventing the wheel
By JoeBanana on 2/9/2009 12:00:36 PM , Rating: 2
The problem starts when you want to store non-integer values. Imagine you want to remember 0.2. In BCD the representation is finite, but in binary it's .001100110011... repeating to infinity. Consequently, a system based on BCD representations of decimal fractions avoids errors when representing and calculating with such values.
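
Python's Decimal type plays the same role as BCD here (exact decimal fractions), which makes the contrast easy to show:

from decimal import Decimal

print(0.1 + 0.1 + 0.1 == 0.3)                # False: binary floats can't hold 0.1 exactly
print(Decimal("0.1") * 3 == Decimal("0.3"))  # True: the decimal representation is exact here
print(f"{0.2:.20f}")                         # 0.20000000000000001110, the binary approximation of 0.2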


RE: reinventing the wheel
By freeagle on 2/9/2009 12:19:11 PM , Rating: 2
Now I see. I forgot about fractions, and forgot that businessmen care about every bit of a cent they can get :)


RE: reinventing the wheel
By masher2 (blog) on 2/9/2009 2:03:20 PM , Rating: 2
quote:
quite the contrary; banks keep their numbers in BCD number representation to avoid the difference when converting from decimal to binary.
Some banking software simply stores currency values as integer cents (x100) or tenth-cents (x1000), rather than whole dollars, a solution that is computationally faster than BCD.
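
A minimal sketch of that integer-cents approach (the amounts and the 3% interest rule are made up for illustration):

# Store currency as an integer count of cents; convert to dollars only for display.
balance_cents = 1_300_081                          # $13,000.81
balance_cents += 2_499                             # deposit $24.99: exact integer add, no drift

# Rounding happens only where the business rule says so, not inside the FPU:
interest_cents = (balance_cents * 3 + 50) // 100   # 3% interest, rounded to the nearest cent
balance_cents += interest_cents

print(f"${balance_cents // 100:,}.{balance_cents % 100:02d}")   # $13,416.57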


By KristopherKubicki (blog) on 2/9/2009 6:24:52 PM , Rating: 2
quote:
I seriously doubt this computing hardware is the way forward. I mean, look at us: mankind has been using computers not solely because they are very fast, but also for the infallible accuracy they provide.


This is already being done quite a bit, so I think it's probably a good step forward. We reduce precision in calculations all the time to speed things up. Consider a render app or lossy compression formats -- we're doing it all the time. Specialized signal processors have been doing this for a while.

Obviously there are limitations, but computers take shortcuts all the time and we still use them. My computer doesn't calculate an infinite number of decimals for pi, but it can still calculate the circumference of a circle better than anything else I know.

And there will be mistakes. Who remembers the Patriot missile batteries that would miss targets because their stored floating point times skewed? (A 0.1-second clock tick doesn't fit into binary that nicely after millions of iterations.)

Precision, it turns out, is a very relative thing. We got past a lot of the growing pains decades ago.
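
The Patriot failure is easy to reproduce in miniature: the 0.1 s clock tick isn't exactly representable in binary, and the error accumulates tick after tick. This sketch uses ordinary 64-bit floats rather than the original 24-bit fixed point, so the drift is far smaller, but the mechanism is the same.

tick = 0.1                       # intended clock tick in seconds (not exact in binary)
ticks = 100 * 60 * 60 * 10       # 100 hours' worth of 0.1 s ticks

elapsed = 0.0
for _ in range(ticks):
    elapsed += tick              # accumulate the slightly-wrong tick each time

true_elapsed = 100 * 60 * 60     # 360,000 seconds
print(abs(elapsed - true_elapsed))   # drifts by a tiny fraction of a second even in 64-bit floats;
                                     # the Patriot's 24-bit arithmetic drifted about 0.34 s over
                                     # 100 hours, enough to miss an incoming Scud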


Myth of Machine Precision
By Rogueeconomist on 2/9/2009 11:22:27 AM , Rating: 2
It's actually a myth that computers are unerringly accurate. For scientific computation they are generally limited by the space available to represent numbers: even with 32-bit or 64-bit precision there can be significant rounding, because most computed values cannot be represented exactly. So yes, it doesn't really matter, even to the scientific community, if a number is off at the tail end; we have to deal with that problem anyway given limited machine precision. (It is true, however, that those rounding errors are deterministic, so we get the same imprecise answer twice in a row when running the same program, whereas with this we might get different imprecise answers each time... but it's too early to tell.)




RE: Myth of Machine Precision
By afkrotch on 2/9/2009 12:26:27 PM , Rating: 1
Rounding at the end is completely different from having inaccuracies in the middle or even at the beginning.

This new probabilistic chip calculated Pi as...

5.14342342159265358979323846


RE: Myth of Machine Precision
By PandaBear on 2/9/2009 12:55:00 PM , Rating: 2
In general you don't need an imprecise chip to get higher performance; all you have to do is reduce the number of samples you calculate, or the calculation depth in the signal processing.

Why bother designing a different chip that runs 4x as fast when you only need to reduce the filter order by 16x, or something like that?


RE: Myth of Machine Precision
By MadMan007 on 2/9/2009 4:43:43 PM , Rating: 2
Multiple runs. This was alluded to in a previous comment but I'd thought of it too. If you could do 7x the runs in the same time, or fewer in much less time, and then compare those results in a statistical fashion it could lead to an overall speed up. This could also be used as an initial screening test to choose a best candidate on which to run an exact test if necessary.


RE: Myth of Machine Precision
By energy1man on 2/10/2009 8:54:52 AM , Rating: 2
Accuracy = how close the values are to the true value
Precision = how closely the values agree to each other, not the true value

If you get the same answer twice in a row or more, then the answers are more precise, though not necessarily accurate.


RE: Myth of Machine Precision
By Schrag4 on 2/10/2009 10:19:35 AM , Rating: 2
I don't agree with your definition of precision. Accuracy is how 'Right' an answer is. Precision is how much detail is provided in the answer.

For instance, if whatever instrument you're recording data with can only measure distances in 1000-foot increments, then it would accurately measure one mile as 5000 feet. If its precision were 100-foot increments, it would accurately measure one mile as 5300 feet. If its precision were 25-foot increments, it would accurately measure one mile as 5275 feet...

In computer terms, the precision of floating point numbers is really just how many bits are used. Not every number can be stored (efficiently, anyway); in fact only an extremely small fraction of numbers can be. Basically, the only numbers that can be represented exactly are sums of powers of 2. So you can represent 1/2 + 1/4 + 1/8 exactly, but you can't represent 0.1 exactly (you can get really, really close, as close as your precision will allow).
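
You can verify this directly in Python; float.hex() shows the exact value that actually gets stored:

print((0.5 + 0.25 + 0.125) == 0.875)   # True: sums of powers of two are exact
print((0.875).hex())                   # 0x1.c000000000000p-1, terminates cleanly
print((0.1).hex())                     # 0x1.999999999999ap-4, the repeating binary pattern
                                       # had to be cut off and rounded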


RE: Myth of Machine Precision
By energy1man on 2/10/2009 2:03:05 PM , Rating: 2
A mile, which is 5280 feet, could be measured as 5300 feet by an instrument. That could come from the following series of measurements: 5100, 5200, 5300, 5400, 5500; or from this set: 5280, 5290, 5300, 5310, 5320. Both sets average 5300, and both are just as accurate. However, the second set of values is more precise, because the values reproduce each other more consistently.

The ideal is high accuracy and high precision, but you can have one without the other. If you had a systematic error that is consistently reproduced, you could have high precision with low accuracy: say your machine was not properly calibrated and shifted all your numbers by 100, giving 5380, 5390, 5400, 5410, 5420. Those results would be just as precise as the second set, but less accurate than either of the first two sets, because of the consistent shift caused by the systematic error.


Should we take some pause?
By SublimeSimplicity on 2/9/2009 9:13:08 AM , Rating: 2
Is anyone else concerned that the inventor looks like the crazy professor on "Fringe"?




RE: Should we take some pause?
By Loveless on 2/9/2009 10:09:55 AM , Rating: 3
Yes, I am very, very concerned about that. That will obviously affect the quality of his work.

His research would be much better if he looked like my dad or like Superman.


RE: Should we take some pause?
By Bubbacub on 2/9/2009 10:50:51 AM , Rating: 5
I'm more concerned that somebody is actually still watching Fringe.


Machines will Rule the World
By d0gb0y on 2/9/2009 11:07:34 AM , Rating: 5
So this is how it all starts...

"They'll never turn on us, it's improbable!"




By mindless1 on 2/9/2009 6:23:43 PM , Rating: 5
Hmm. Good point.

Kill = 0
Don't kill = 1

Now which bit was I supposed to remember? I suppose as long as I guess right most of the time it'll work out OK.


I would suggest it as an extra core
By Plugers on 2/9/2009 10:55:32 AM , Rating: 3
If you put one of these in a CPU as a 2nd, 3rd, 5th, or 9th core, you could add substantial computing power with very little increase in power consumption.

The standard core(s) could handle the calculations needing accuracy, and you could then [tag] the sections of code that are acceptable to run with less accuracy.




RE: I would suggest it as an extra core
By afkrotch on 2/9/2009 12:24:13 PM , Rating: 2
Great. Now you just have a more complicated chip to build.


By Plugers on 2/9/2009 1:20:58 PM , Rating: 2
I'm pretty sure the CPU manufacturers could figure it out. The current CPUs are so simplistic [/sarcasm]


Infinite Improbability Drive
By InsaneGain on 2/9/2009 3:36:13 PM , Rating: 5
Finally, we are on our way to developing the Infinite Improbability Drive!




RE: Infinite Improbability Drive
By Soulchaser on 2/10/2009 1:55:49 AM , Rating: 2
I'm holding out for bistromath.


must pick the applications carefully
By invidious on 2/9/2009 9:01:36 AM , Rating: 2
I do not agree that missed pixels will be acceptable, at least not as a general policy. For things like images and movies, even occasional miscalculations will likely be noticeable. One area that should see immense gains from this is calculating shadows/lighting and physics in video games.

Those are areas where localized miscalculations are not noticeable and do not cascade into larger abnormalities. They are also two areas that tax video cards immensely. Perhaps this tech could reverse the trend of video cards scaling in size as they scale in performance.




RE: must pick the applications carefully
By SublimeSimplicity on 2/9/2009 9:10:33 AM , Rating: 2
Actually, when it comes to 3D graphics, this tech could be used for the entire pipeline with some minor software tweaks. For vertex data, you'd want to make sure the bits representing a vertex close to the camera stay in the high-probability (accurate) region. Points far from the camera can suffer pretty severely without actually changing their on-screen pixel location.

So there would need to be some work to re-evaluate how the matrix math works with this probability of error in mind, but it's very doable.

As for pixel work (lighting, shadows, etc.), a bit of error will probably actually improve the final result, giving it the film-grain noise our minds expect to be there.


By emboss on 2/9/2009 11:00:38 AM , Rating: 2
Actually, the real problem is with the pixel depth, not the x-y position. For example, one commonly used technique is to layer decals on a surface by drawing a duplicate of the surface very slightly in front of the real surface. With a low-accuracy z-buffer you can end up with the two surfaces being exactly on top of each other, which causes texture flicker as you move around.

If the calculated vertex position is in fact shifted around slightly due to probabilistic maths, it'll have the same effect and also happen when you're standing still. How bad the effect is depends on how inaccurate the calculations are.

Additionally, even small positioning changes can have a large effect on pixel color. Consider a dimly lit indoor scene where the walls join exactly to hide the brightly lit outdoors. Randomness in the positions would open temporary gaps between the polygons, letting in the bright outdoor light. The visual effect would be flickering bright lines on the otherwise dark walls.

Similarly for pixel shaders - nowadays they're used for more than just calculating the color of on-screen pixels. Variation in the results from a particular calculation (in particular where a * k < b * k can occur when a > b) can lead to significant changes in pixel color similar to the effects from z-buffer issues.

One solution, without moving away from a unified architecture, would be to have some accurate shaders and some probabilistic shaders. When the software passes the shader to the driver, it specifies if the shader can be run on the probabilistic shaders. The balance between how many probabilistic shaders vs accurate shaders are on a particular GPU would be another tradeoff ATI/NV/etc would have to decide.

One downside of this, from the manufacturer's point of view, is that the probabilistic shaders can't be turned into accurate shaders. So for many GPGPU purposes the computational grunt of the card is dramatically reduced. Instead of just bunging a couple of GB of RAM onto a gaming card and calling it a GPGPU card (e.g., the FireStream or Tesla cards), they'd need a completely separate chip, which would kill their economy-of-scale advantage over traditional accelerator vendors like ClearSpeed.
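
A tiny illustration of the ordering flip emboss describes (the 1e-4 noise level is an arbitrary stand-in for whatever error the hardware introduces): two depth values that differ by less than the noise can compare either way from evaluation to evaluation, which is exactly the kind of thing that produces z-fighting flicker.

import random

rng = random.Random(0)

def noisy_mul(a, k, rel_noise=1e-4):
    """Multiply with a small random relative error, mimicking inexact hardware."""
    return a * k * (1.0 + rng.uniform(-rel_noise, rel_noise))

a, b, k = 1.00002, 1.00000, 3.7   # a > b, but only by 2e-5
flips = sum(noisy_mul(a, k) < noisy_mul(b, k) for _ in range(10_000))
print(f"a*k < b*k in {flips} of 10,000 noisy evaluations, despite a > b")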


what if
By LumbergTech on 2/9/2009 12:53:08 PM , Rating: 2
What if you paired this CPU with a general-purpose CPU and used a programming language that let you specify which parts of your program ran on each one? It seems that precision-sensitive tasks would go on the general-purpose CPU and those that aren't would go on the probabilistic one.




RE: what if
By HollyDOL on 2/9/2009 1:58:09 PM , Rating: 2
I see this as a new generation of numeric coprocessor.


Soooo........
By Yaron on 2/9/2009 2:22:04 PM , Rating: 2
basically we are talking about a lazy ass cpu...

Sounds great! 0__o




RE: Soooo........
By icanhascpu on 2/13/2009 1:42:26 PM , Rating: 2
If anal-necrotic OCD is "normal" to you. Then yes.


WHY?! What is wrong with them!
By CorwinB on 2/10/2009 11:57:25 PM , Rating: 2
This is sort of the opposite of the reason we use computers. In all honesty I barely scratch the surface of my quad core, even though I max out graphics on high-end games and render huge video files. There is also talk about skipping 6, 8, and 10 cores and going directly to 12. Honestly, how much computation do we need? We are getting into seriously crazy supercomputers. Speaking of which, wouldn't the errors just get larger on a supercomputer? This is just dumb if you ask me.




By icanhascpu on 2/13/2009 1:38:39 PM , Rating: 2
YOU are just dumb if you ask me.

Narrow-minded little sheep like you need to shut up and let the intelligent people keep giving us cool toys. People like you, 20 years ago, would have had us still on a 286 playing Wolfenstein as the epitome of graphical gaming prowess.

Please shut the hell up.

If ANYONE thinks their system is top sh!t, try 50v50 Wintergrasp with all settings maxed. The best systems out there can't get past 50 fps, and that game is going on five years old.


By Casual Observer on 2/9/2009 9:09:38 AM , Rating: 1
I refer you to: http://en.wikipedia.org/wiki/Infinite_monkey_theor...

Substitute "probabilistic processor" for the word "monkey".




By Zshazz on 2/12/2009 2:32:50 PM , Rating: 2
quote:
Not only did the probabilistic processors produce nothing but five pages[31] consisting largely of the letter S, the lead male began by bashing the keyboard with a stone, and the probabilistic processors continued by urinating and defecating on it.


Great, now we have to worry about our computers pooping and peeing all over themselves


Postmodernist CPUs.
By Ordr on 2/9/2009 9:12:36 AM , Rating: 3
Great.




separate
By wvh on 2/9/2009 11:32:23 AM , Rating: 2
With these chips, it will become immensely important to distinguish between "precise" and "imprecise" goals for computer hardware. As others in this thread have mentioned, these chips could be great for secondary computations like anti-aliasing, shadows, model prediction, perhaps fractal computations. But we're in for a nasty ride if the results of these computations somehow get mixed up with data from "precise" chips when the purpose is exact numbers. We will likely end up with two CPUs (and probably different addressing and data pathways) with different purposes, where it becomes extremely important not to confuse data obtained from the "imprecise" source with data coming from the traditional CPU.

It will take a long time before we can use spreadsheets where different functions use different CPUs and their respective computation modes... You (and the software developers) would have to know very well what the effects and benefits are of using exact versus probabilistic computation for different purposes, for example exact engineering versus chaotic market predictions.

Perhaps we will end up with a CPU, FPU and this new "PPU", with very clear distinctions in and boundaries to their application.




By Motoman on 2/9/2009 1:14:39 PM , Rating: 2
...such as GPU and audio functions. I would have no qualms at all about having a probabilistic GPU or audio processor. Just don't give me a probabilistic CPU. I don't think that's acceptable.




Old news!
By omgwtf8888 on 2/9/2009 2:35:05 PM , Rating: 2
The people on Wall Street have been using the whatever-close-enough processor forever! Banks too! Sure, with a $15k-per-year job you qualify to buy a $400,000 house... close enough! Seems the federal government used these on the first bailout... $350 billion... where did it go? We can account for about 10 bucks... Ah, that's close enough. We should employ these to calculate the debt owed to the Chinese... hmmm, a gazillion dollar/yuan^3/7*X^y - 2 = we owe them $10...




Programmer’s dream come true
By unrated on 2/9/2009 3:11:46 PM , Rating: 2
“No Mr. Project Manager, it isn’t supposed to happen that way. But it isn’t my fault, you just got unlucky.”




Penguins
By Shmak on 2/9/2009 5:09:36 PM , Rating: 2
quote:
Please do not be alarmed," it said, "by anything you see or hear around you. You are bound to feel some initial ill effects as you have been rescued from certain death at an improbability level of two to the power two hundred and seventy-six thousand to one against – possibly much higher. We are now cruising at a level of two to the power of twenty-five thousand to one against and falling, and we will be restoring normality just as soon as we are sure of what is normal anyway.




CPU
By JazzMang on 2/9/2009 6:21:52 PM , Rating: 2
"My CPU is a NeuralNet processor, a learning computer"




probabilistic chips
By frozentundra123456 on 2/10/2009 8:15:03 AM , Rating: 2
Wouldn't the errors also tend to cancel each other out? The probability that the errors would all be in the same direction would be quite small. Perhaps some algorithms could be written that take an "average" value that cancels out most of the errors.
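
That only helps if the chip's errors are unbiased (zero-mean), which is an assumption here rather than something the article states. Given that assumption, averaging repeated runs does shrink the error, roughly like one over the square root of the number of runs:

import random
from statistics import mean

rng = random.Random(0)
true_value = 42.0

def noisy_eval(rel_noise=1e-3):
    """One 'probabilistic' evaluation: the correct value plus zero-mean relative noise."""
    return true_value * (1.0 + rng.uniform(-rel_noise, rel_noise))

for n in (1, 10, 100, 1000):
    err = abs(mean(noisy_eval() for _ in range(n)) - true_value)
    print(f"{n:5d} runs -> error {err:.2e}")   # shrinks roughly like 1/sqrt(n)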




Mandelbrot set
By lemonadesoda on 2/10/2009 3:49:52 PM , Rating: 2
A probabilistic CPU would struggle to calculate one of these for obvious reasons: http://en.wikipedia.org/wiki/File:Mandel_zoom_00_m...

... but it would do a great job calculating a banker's bonus!




By GotchaLookin on 2/12/2009 12:58:58 PM , Rating: 2
Q: Anyone else having a "birth of Terminator" moment?

Fred Saberhagen's "Berserker" machines depended on a random number generator to make critical decisions, especially to make them unpredictable in combat strategy and tactics.

Imagine, if you will, PCMOS installed in the targeting hardware of the next-generation Predator.

"Heads I win, tails you lose!"

This should remind all of us how important it is for classic science fiction to be read by all engineers.




If this thing is so advanced...
By goku on 2/13/2009 6:06:09 AM , Rating: 2
The least this "advanced" processor could do is tell us the probability of this making it to market and then actually succeeding.




These are not final products
By FNG on 2/13/2009 10:00:59 AM , Rating: 2
1/30th of the power and 7x speedup on certain calculations and they have not even been handed off to engineers. Give these to some chip engineers and they will crank out even more "speed". This will translate into chips that can statistically place the pixels in the right spots and still economize greatly over current processes.




By lemonadesoda on 2/16/2009 12:36:00 PM , Rating: 2
If I have an "accurate" double-precision calculation, or a 128-bit calculation, I can do the following:

1./ Do the calculation in lower precision instead: 32-bit single precision rather than 64-bit or 128-bit.

2./ In half the time or less (or twice as many calculations in the same time, depending on your opcode). Actually, for add or subtract the time is about half, but for multiply or divide it is a quarter or significantly less (math time doesn't scale linearly with precision for non-trivial operations). Let's take an average of one quarter.

3./ So I do the same calculations (with a bit less accuracy) in a quarter of the time, which also means roughly a quarter of the power.

Since I can "estimate" the inaccuracy of lower-precision math, I can probabilistically determine its accuracy.

So what's new?
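
You can measure that tradeoff yourself. Here is a rough sketch using NumPy (assumed installed); the actual speedup depends entirely on the hardware and the operation, and on many desktop CPUs it is nowhere near the 4x the comment assumes:

import time
import numpy as np

n = 10_000_000
a64 = np.random.rand(n)           # float64 data
a32 = a64.astype(np.float32)      # the same data in single precision

def timed(op, x):
    t0 = time.perf_counter()
    op(x)
    return time.perf_counter() - t0

t64 = timed(lambda x: np.sqrt(x * x + 1.0), a64)
t32 = timed(lambda x: np.sqrt(x * x + 1.0), a32)
print(f"float64: {t64:.3f}s  float32: {t32:.3f}s  ratio: {t64 / t32:.1f}x")

# And the precision given up is itself measurable, as the comment says:
print(abs(a64.sum() - a32.astype(np.float64).sum()))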




"DailyTech is the best kept secret on the Internet." -- Larry Barber














botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki