



A visual representation of a CPU using nanophotonic switches to direct light-speed traffic between cores on a chip.  (Source: IBM)
IBM continues push towards light speed computing

In 2005, International Business Machines demonstrated silicon nanophotonic devices that slowed down light via clever reflecting methods.  Then in 2006, it showed how a similar device could buffer a full byte of information as stored optical pulses, a necessity for optical computing.

In December 2007, researchers demonstrated a silicon device that converted electrical pulses from a processor into light pulses; the pulses in the 2005 and 2006 demonstrations had been triggered manually.

Now, IBM has made another significant research breakthrough in its quest to achieve optical computing.  Researchers at IBM have announced that they have developed the world's smallest nanophotonic switch.  The tiny silicon device, measuring a mere one-hundredth the width of a human hair, is an essential step towards creating "light transistors" that would allow chips to process at the speed of light.  Such chips could greatly outclass today's processors in performance and speed, since current chips rely on slower electrical communication over copper wires.

Before such chips arrive, though, IBM sees an important stepping stone in using its optical technologies for on-chip networks.  The optical devices could be put to use as a bus between cores in IBM's next-generation Cell processors or other multicore systems.  Yurii Vlasov, manager of silicon nanophotonics at IBM’s TJ Watson Research Center, states, "This new development is a critical addition in the quest to build an on-chip optical network."

IBM's partially DARPA-funded research is published in the April 2008 issue of Nature Photonics under the title "High-throughput silicon nanophotonic wavelength-insensitive switch for on-chip optical networks."  The switch takes light converted from electrical signals by IBM's other devices and can direct it along multiple routes, which will eventually allow on-chip optical interconnects.  IBM states that the device fulfills several key criteria that make it ideal for on-chip use.  First, it is extremely compact.  Second, it can process multiple wavelengths of light (known as colors in the visible spectrum).  This means that with each wavelength transmitting data at up to 40 Gb/s, the aggregated bandwidth can exceed 1 Tb/s.
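As a rough illustration of the arithmetic behind that aggregate figure, the sketch below multiplies the per-wavelength rate from the article by an assumed channel count; the 32 wavelengths are chosen purely for illustration and are not a number IBM has stated.

```python
# Back-of-the-envelope arithmetic behind the bandwidth claim above.
# The 40 Gb/s per-wavelength rate comes from the article; the channel
# count of 32 is an assumed figure for illustration only.
per_wavelength_gbps = 40      # data rate per wavelength, in Gb/s
wavelengths = 32              # assumed number of wavelength channels

aggregate_tbps = per_wavelength_gbps * wavelengths / 1000  # Gb/s -> Tb/s
print(f"aggregate bandwidth: {aggregate_tbps:.2f} Tb/s")   # prints 1.28 Tb/s
```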

IBM has also shown that the device is tolerant to heat, an essential characteristic for successful on-chip deployment, as modern processors develop "hot spots" that migrate across the die as load conditions change.



Comments

Technically ...
By Xodus Maximus on 3/18/2008 2:52:30 PM , Rating: 3
quote:
creating "light transistors," that allow chips to process at the speed of light


Technically, electrons and photons travel at the same "speed of light". However, resistance in the material they are traveling through slows that down; the same thing may happen with the "reflectivity tricks" that IBM is using. So in the end the processing speed may be the same.

However, it may not generate as much heat, so the speed race may be back on... 5 GHz, 100 GHz, and maybe 1 THz, hopefully in my lifetime (doubtful, but hopefully =)




RE: Technically ...
By Martimus on 3/18/2008 3:14:59 PM , Rating: 3
The biggest plus is that light isn't affected by resistance the way electrical signals are. This means you don't need to keep shrinking the chip to get it to work, so much larger dies will be possible.


RE: Technically ...
By JasonMick (blog) on 3/18/2008 3:21:18 PM , Rating: 2
The phrase "light speed" is ambiguous as it is material dependent. Light and electricity travel at the same speed in a vacuum (the empirical "speed of light"), but at different speeds from each other in various media.

The really big deal here is the added bandwidth. Light offers much more bandwidth than electricity over copper wire.

However, you accurately point out an added benefit: the lower heat production could also allow higher-clocked processors.


RE: Technically ...
By MozeeToby on 3/18/2008 3:22:11 PM , Rating: 2
Technically speaking, electrons don't travel anywhere near the speed of light. In fact, in a modern low-voltage processor they probably only move at a fraction of a millimeter per second.

The electric field, however, is what actually carries the information. I'm not sure what the value is for silicon, but for copper the field propagates at about 2/3 the speed of light. Enough to improve performance, but probably not groundbreaking in and of itself.

As you suggest, the real advantage is in heat savings. Modern processors generate most of their heat not in the transistors, but in the links between them. A lot of the power used by a chip actually goes into charging the capacitance of the millions of circuits between the transistors.
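For readers curious about the actual numbers, here is a minimal back-of-the-envelope sketch of the drift velocity being discussed; the current and wire cross-section are assumed textbook values for a copper wire, not measurements from a real processor.

```python
# Drift velocity estimate, v_d = I / (n * A * e), compared with the
# ~2/3 c field propagation speed quoted above. The current and
# cross-section are assumed textbook values, not figures from a chip.
I = 1.0           # current, in amperes
A = 1e-6          # cross-sectional area, in m^2 (1 mm^2)
n = 8.5e28        # free-electron density of copper, per m^3
e = 1.602e-19     # elementary charge, in coulombs

drift_velocity = I / (n * A * e)       # in m/s
field_speed = (2.0 / 3.0) * 3.0e8      # ~2/3 the speed of light, in m/s

print(f"electron drift velocity: {drift_velocity * 1e3:.3f} mm/s")  # ~0.073 mm/s
print(f"signal (field) speed:    {field_speed:.1e} m/s")
```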


RE: Technically ...
By geddarkstorm on 3/18/2008 3:48:21 PM , Rating: 3
Correct me if I'm wrong:

Electrons can never travel at the speed of light under current theory; that's impossible due to the theory of relativity http://en.wikipedia.org/wiki/Electron . Electrons have mass, and would therefore require infinite energy to reach c in a vacuum.

Light itself is a self-propagating electric field with a perpendicular magnetic field, so light and electrons are indeed affected differently by different materials. Changes in the movement speed of electrons create light, including longer wavelengths like infrared, aka heat (in a synchrotron, or particle accelerator, changing the speed of an electron can create much shorter wavelengths like X-rays due to the electron's immense speed, which is why it is useful in X-ray crystallography). Changes in the speed of light, on the other hand, are usually due to the different densities and refractive qualities of materials and so don't really create heat unless the light is absorbed by atoms; and then the heat produced will be proportional to the amplitude of the light. So as long as you can detect it, you can have almost no heat production when transmitting and receiving light, by using very small amplitudes and pulse lengths.

Light can move faster, but far more importantly, light can carry a near-infinite range of information via wavelength and phase differences. You're limited only by how much you can detect and decipher reliably. So a light transistor could theoretically do many calculations at once by having many separate wavelength/phase channels, as long as said channels didn't interact, of course.


RE: Technically ...
By rangerdavid on 3/18/08, Rating: 0
RE: Technically ...
By Regs on 3/21/2008 12:04:01 PM , Rating: 2
What you say in your 2nd paragraph is very interesting, which makes me more disappointed in what IBM has in mind for the application of optical computing.

"The optical devices could be put to use as a bus between cores in IBM's next generation Cell processors". This to me seems like a waste of innovation if we were only to implicate it as a bus between two processing cores. Now to have information transfer inside the data execution units of the processor core itself using optics would be truly remarkable.


RE: Technically ...
By StormEffect on 3/18/2008 9:59:00 PM , Rating: 2
This seems like a fantastic technology, but how does it hold up against physical processes other than heat, at least compared to current copper interconnects?

Is the system as physically robust as a copper interconnect? How does it fare against vibration or bending?


RE: Technically ...
By phxfreddy on 3/18/2008 11:35:42 PM , Rating: 2
Electrons do not travel at the speed of light. They go at drift speed, or about 300 mph in a household wiring circuit. It's the wave that travels at near the speed of light... why? Because it is light.


RE: Technically ...
By AnnihilatorX on 3/19/2008 8:18:37 AM , Rating: 2
No, the drift velocity of electrons in a copper wire is on the order of millimeters per second.

That's definitely not 300 mph.


RE: Technically ...
By tjr508 on 3/19/2008 12:10:32 AM , Rating: 2
Wrong. Wrong. Wrong.

The EM waves through a CPU DO (for all real-world purposes) travel at the speed of light.

Resistance does NOT slow down EM waves.

Maybe you are referring to the internal capacitance in a MOSFET (or FET in general, since metal isn't exactly necessary anymore) that slows the rise of a signal's magnitude and thus causes a small wait for the signal to reach its threshold, or switching, voltage.
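As a minimal sketch of that charging delay, here is a simple RC model; the resistance, capacitance, and voltages are made-up but plausible values, not figures from any particular process.

```python
import math

# Simple RC model of the gate-charging delay described above. The values
# are assumed for illustration and do not describe any particular process.
R = 1e3        # driver resistance, in ohms
C = 10e-15     # gate/wire capacitance, in farads (10 fF)
Vdd = 1.2      # supply voltage, in volts
Vth = 0.6      # switching threshold, in volts

# Charging through a resistor: V(t) = Vdd * (1 - exp(-t / (R * C))).
# Solve for the time at which V(t) first reaches the threshold voltage.
t_switch = -R * C * math.log(1 - Vth / Vdd)
print(f"time to reach threshold: {t_switch * 1e12:.2f} ps")  # ~6.93 ps
```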


RE: Technically ...
By aliasa on 3/19/2008 2:46:45 PM , Rating: 2
Well, I beg to differ from that notion, because the theory of relativity prohibits any mass from moving at v = c (its relativistic mass would tend to infinity). Electrons move with a drift velocity given by
v = I / (nAe)

which is very, very small compared to c.


Light speed is too slow
By theslug on 3/18/2008 2:53:20 PM , Rating: 5
We're going to have to go right to...ludicrous speed.




RE: Light speed is too slow
By bhieb on 3/18/2008 4:41:53 PM , Rating: 3
What the hell was that... it's IBM, sir... they've gone to plaid!


By kattanna on 3/18/2008 4:06:42 PM , Rating: 3
Using a single optical path, but with various wavelengths holding each separate bit of a byte/word/etc., could really free up the layout of the various interconnects as well. So instead of having to lay out 64 wires to carry 64 bits of data, 1 optical route using 64 wavelengths could carry the same data.

That would seriously reduce complexity. But why stop there...

Why not make the chip "speak/listen" to the outside world via a universal optical bus interconnect that would connect the CPU(s), main memory, chipset, and expansion slots all on 1 optical bus?
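Here is a toy sketch of that idea, treating each of 64 wavelengths as a simple on/off channel for one bit; this is a simplified wavelength-division-multiplexing picture for illustration, not IBM's actual signaling scheme.

```python
# Toy model: encode a 64-bit word as the on/off state of 64 wavelength
# channels on a single optical path. Simplified illustration only.
def word_to_channels(word, channels=64):
    """Map each bit of `word` to one wavelength channel (True = light on)."""
    return [bool((word >> i) & 1) for i in range(channels)]

def channels_to_word(states):
    """Recover the word from the detected per-wavelength on/off states."""
    return sum(1 << i for i, on in enumerate(states) if on)

data = 0xDEADBEEFCAFEF00D
assert channels_to_word(word_to_channels(data)) == data  # round-trips correctly
```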




By Clauzii on 3/18/2008 10:27:28 PM , Rating: 2
One step closer to the "Optical Soup Computer", where a whole pond of different waves crosses each other and data packets (made of light) are delivered at the points of crossing, at the right wavelength at the right moment. Maybe even with different focal lengths, positioned by ultra-precise stepper motors. Say, 64 lasers in a CPU chamber.

Pure fireworx :))


By pnyffeler on 3/20/2008 2:00:21 PM , Rating: 2
Why stop there? You could have a whole bunch of users using the computer at the same time, with each user having a bundle of wavelengths of their own. Hell, you could have one computer for an entire building, with everybody having fiber-optic connections to the Big Blue Box in the basement. It all comes down to how much resolution between wavelengths the system can handle and, therefore, how many different channels it can utilize.


hmmmm
By inperfectdarkness on 3/18/08, Rating: -1
RE: hmmmm
By amanojaku on 3/18/2008 2:41:24 PM , Rating: 2
I doubt nVidia will be the first to license this. Technology such as this is generally expensive; current graphics card owners might not mind paying $300-400 for their cards, but try $1,500 and up!

I see specialized CPUs first (high-end routers and supercomputers), then the business systems (UltraSPARC T4 or POWER8 servers). One day someone will make the breakthrough powerful yet PC-priced CPU, maybe 15 years from now. Maybe 10.


RE: hmmmm
By AntiM on 3/18/2008 2:44:19 PM , Rating: 2
When a photonic switch can replace a transistor and use different wavelengths of light to represent different states, rather than just the 1s and 0s of today's transistors, then we'll really be getting somewhere.


RE: hmmmm
By smilingcrow on 3/19/2008 11:16:56 PM , Rating: 2
Sounds like good technology to feature in the upcoming film, Terminator 4 – The Revenge of Big Blue. It would sure help with the render times as well.


"If you can find a PS3 anywhere in North America that's been on shelves for more than five minutes, I'll give you 1,200 bucks for it." -- SCEA President Jack Tretton

Related Articles













botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki