

Intel's Teraflops Research Chip runs on an LGA socket at 62W
Six months after its initial debut, Intel sheds more light on the massively parallel teraflop-on-a-chip project

With no Spring Intel Developer Forum in the U.S., Intel is showing off its newest technologies this week at the annual International Solid-State Circuits Conference (ISSCC) in San Francisco. At the forefront of Intel's announcements is its success in developing the world's first 80-core processor, which it is presenting at the conference.

Intel’s chief technology officer, Justin Rattner, states "Our researchers have achieved a wonderful and key milestone in terms of being able to drive multi-core and parallel computing performance forward. It points the way to the near future when Teraflop-capable designs will be commonplace and will reshape what we can all expect from our computers and the Internet at home and in the office."

Until now the project had been dubbed Tera-scale at public Intel events.  Its proper name is now the Intel Teraflops Research Chip -- alluding to the fact that the processor can achieve one trillion floating-point operations per second. Tera-scale made its first appearance at the Fall 2006 Intel Developer Forum in September 2006.  The ISSCC agenda published last month shed more details on the architecture, but this past weekend Intel pulled out most of the stops.

The Teraflops Research Chip is composed of 80 independent processing cores, which Intel refers to as tiles. The tiles are organized in a rectangular grid, eight across and ten down, for a total of 80.

Each individual tile in the chip features a processing engine (PE) and a 5-port router. The router passes data and instructions to other tiles, while the processing engine, as its name indicates, processes data. To save power, each processing engine can power down independently of its router, so a tile can be used purely to pass data along while its processing engine is idle; the engine can then be turned back on to process additional data on demand.  Intel's guidance claims the processor can achieve one teraflop of performance on just 62W of power.
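That arrangement lends itself to a simple mental model. The Python sketch below models the 8 x 10 mesh of tiles, each pairing a processing engine with a 5-port router, and shows how a tile whose processing engine has been gated off can still forward traffic. The class and method names are illustrative, not Intel's actual design.

```python
# Minimal sketch of the 8 x 10 tile mesh described above (names are illustrative).
from dataclasses import dataclass

MESH_COLS, MESH_ROWS = 8, 10  # 80 tiles in total


@dataclass
class Tile:
    x: int
    y: int
    pe_powered: bool = True  # the PE can be powered down independently of the router

    def router_links(self):
        """Coordinates reachable through the router's four mesh ports
        (north, south, east, west); the fifth port feeds the local PE."""
        candidates = [(self.x, self.y - 1), (self.x, self.y + 1),
                      (self.x - 1, self.y), (self.x + 1, self.y)]
        return [(cx, cy) for cx, cy in candidates
                if 0 <= cx < MESH_COLS and 0 <= cy < MESH_ROWS]

    def next_hop(self, dest):
        """Pick the next tile on the way to dest (simple X-then-Y routing).
        Forwarding still works while the local PE is powered down."""
        dx, dy = dest
        if dx != self.x:
            return (self.x + (1 if dx > self.x else -1), self.y)
        if dy != self.y:
            return (self.x, self.y + (1 if dy > self.y else -1))
        return None  # arrived: hand the data to the local PE


mesh = {(x, y): Tile(x, y) for x in range(MESH_COLS) for y in range(MESH_ROWS)}
mesh[(3, 4)].pe_powered = False       # gate one PE off to save power...
print(mesh[(3, 4)].next_hop((7, 9)))  # ...its router still forwards: (4, 4)
```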

The chip itself uses an LGA package similar to Intel's Core 2 and Pentium 4 processors. A clear difference, however, is that it uses 1248 pins in place of 775. Intel's guidance states that 343 of those pins are used for signaling, while the rest are used for power and ground.
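A quick back-of-the-envelope calculation based on the figures above shows how the pin budget breaks down:

```python
# Pin budget from the figures quoted above.
total_pins = 1248
signal_pins = 343
power_and_ground_pins = total_pins - signal_pins
print(power_and_ground_pins)  # 905 pins left for power and ground delivery
```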

The minimum clock speed the chip needs in order to sustain one teraflop is 3.16 GHz per core at 0.95V, but Intel's guidance already points to frequencies in excess of 5.7 GHz.  Performance, at this point, appears to scale linearly with clock speed; a 5.7 GHz Teraflops Research Chip has an output of 1.81 teraflops.
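That linear relationship is easy to sanity-check with a couple of lines of Python, using only the figures quoted above:

```python
# Rough sanity check of the linear clock-to-throughput scaling quoted above.
base_freq_ghz = 3.16   # minimum clock for 1.0 teraflop at 0.95 V
base_tflops = 1.0
fast_freq_ghz = 5.7

scaled_tflops = base_tflops * fast_freq_ghz / base_freq_ghz
print(f"{scaled_tflops:.2f} TFLOPS")  # ~1.80, close to the quoted 1.81 figure
```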

Intel has big plans for its Teraflops Research Chip. The primary purpose of the chip and the project is less to set performance records and more to serve as a test vehicle for future Intel technologies. The next major technologies Intel plans to implement in the Tera-scale research project are 3D stacked memory and more general-purpose, capable cores.

A major limitation of the current 80-core chip is that it is not based on the x86 architecture. Instead, it uses a 96-bit Very Long Instruction Word (VLIW) architecture, an approach also used in Intel's Itanium server processors. A major hurdle Intel hinted at will be moving the 80-core design from VLIW to x86.
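For readers unfamiliar with the term, VLIW means the compiler packs several independent operations into one wide instruction word that the hardware issues together in a single cycle. The toy Python sketch below illustrates only that general idea; it is not a model of the Teraflops chip's actual 96-bit instruction format.

```python
# Toy illustration of the general VLIW idea: several independent operations
# are bundled into one long instruction word and issued in the same cycle.
# Conceptual sketch only, not the Teraflops chip's real encoding.

def execute_bundle(bundle, regs):
    """Issue every slot of a bundle 'at once': read all operands before any
    result is written back, mimicking parallel issue in hardware."""
    results = [(dst, op(regs[a], regs[b])) for dst, op, a, b in bundle]
    for dst, value in results:
        regs[dst] = value
    return regs


regs = {"r0": 2.0, "r1": 3.0, "r2": 4.0, "r3": 0.0, "r4": 0.0}
bundle = [
    ("r3", lambda x, y: x * y, "r0", "r1"),  # slot 0: multiply
    ("r4", lambda x, y: x + y, "r1", "r2"),  # slot 1: add, issued the same cycle
]
print(execute_bundle(bundle, regs))  # r3 -> 6.0, r4 -> 7.0
```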

Although Intel currently has no plans to commercialize the 80-core chip, technologies developed for it will definitely make their way into multi-core desktop chips. So how long until those technologies come to fruition? Intel estimates it will take 5 to 10 years before we begin seeing the benefits of the Tera-scale research project.




I like these projects..
By DeepBlue1975 on 2/12/2007 3:13:37 PM , Rating: 2
Just from the R&D point of view.
I think the key to exploiting this kind of design has less to do with today's applications than with future ones:
Imagine something like "a neural net on a chip" or other types of applications we don't have today simply because we don't have the technology for them; a design like this could open that door.
Here I think it's all about massive parallelism, not just raw performance.
I guess a "perfect" natural speech recognition app, one that can cope with you talking over quite a bit of ambient noise and other people's conversations, could benefit so much more from a "networked CPU" that behaves more like a brain than like a simple number cruncher.




"The Space Elevator will be built about 50 years after everyone stops laughing" -- Sir Arthur C. Clarke
