Working silicon of Larrabee, Intel's upcoming 2010 discrete GPU, was shown running a ray-traced scene from id Software's Enemy Territory: Quake Wars. It will be the only desktop GPU based on an x86 architecture. Intel brags about its efficiency, saying the water effects (reflections, transparency, etc.) were accomplished with only 10 lines of C++ code.  (Source: CNET/YouTube)
Intel prepares to jump into the discrete graphics market

In the 1990s a plethora of graphics chip makers rose and fell.  Eventually only two were left standing -- NVIDIA and ATI.  ATI would later be acquired by AMD.  Though AMD was locked in a fierce battle with Intel in the multicore era, it also managed to score some points in the last-generation graphics war with its 4000 series (RV770) cards.

Before this happened, a third player, Intel, had attacked the market and quietly built up a dominant share of low-end graphics. Its integrated GPUs popped up in netbooks, low-end laptops, and even a number of low-end desktops.  Now the graphics market seems poised to flip yet again.  NVIDIA is on the verge of seizing much of Intel's low-end and mobile market share, thanks to its Ion platform and new mobile GPUs.  And AMD (ATI) looks ready to attack in the latest round of the discrete graphics wars with aggressively priced DirectX 11 GPUs.  That leaves Intel, which is preparing to take on NVIDIA and AMD with a brand-new discrete graphics offering called Larrabee.

First revealed at SIGGRAPH in August of 2008, Larrabee is now reality.  Intel has just shown off working models of Larrabee GPUs at its Intel Developer Forum in San Francisco.

Built on a multicore die-shrink of Intel's Pentium (P54C) architecture, the new graphics chip rendered a ray-traced scene from the id Software game Enemy Territory: Quake Wars with ease.  Ray tracing is an advanced rendering technique that has long been touted as an eventual replacement for rasterization in video games.  Currently it is used mainly for the high-quality 3D animation found in many films.
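For readers new to the technique, the sketch below shows the two operations at the heart of any ray tracer: intersecting a ray with a surface, and reflecting it to spawn a secondary ray. It is a minimal illustration only, with our own names and a hard-coded sphere; it is not Intel's demo code, though it hints at why a mirror-like water effect needs so few lines.

    #include <cmath>
    #include <cstdio>

    // Minimal ray tracing primitives -- illustrative only, not the demo's code.
    struct Vec { double x, y, z; };
    Vec operator-(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec operator*(Vec a, double s) { return {a.x * s, a.y * s, a.z * s}; }
    double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Nearest positive hit of a ray (origin o, unit direction d) with a sphere
    // at c, radius r: solve |o + t*d - c|^2 = r^2 for t. Returns -1 on a miss.
    double intersectSphere(Vec o, Vec d, Vec c, double r) {
        Vec oc = o - c;
        double b = dot(oc, d);                      // half the linear coefficient
        double disc = b * b - (dot(oc, oc) - r * r);
        if (disc < 0) return -1;                    // ray misses the sphere
        double t = -b - std::sqrt(disc);            // nearer of the two roots
        return t > 0 ? t : -1;
    }

    // Mirror a direction about the unit surface normal n. Tracing the
    // reflected ray again is all a mirror-like water surface needs.
    Vec reflect(Vec d, Vec n) { return d - n * (2 * dot(d, n)); }

    int main() {
        Vec origin{0, 0, 0}, dir{0, 0, 1};          // camera ray straight down +z
        double t = intersectSphere(origin, dir, {0, 0, 5}, 1.0);
        if (t > 0) {
            Vec hit{0, 0, t};
            Vec n = hit - Vec{0, 0, 5};             // already unit length, since r = 1
            Vec r = reflect(dir, n);
            std::printf("hit at t=%.2f, reflected dir (%.1f, %.1f, %.1f)\n",
                        t, r.x, r.y, r.z);          // bounces straight back: (0, 0, -1)
        }
    }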

The demo also gave a peek at Gulftown, a six-core Core i9 processor. Built on the upcoming Westmere architecture (a Nehalem die shrink), Gulftown is the "extreme edition" counterpart of Clarkdale (desktop) and Arrandale (laptop).

Larrabee, however, received the most interest.  The working chip was said to be on par with NVIDIA's GTX 285.  With AMD's latest offerings trouncing the GTX 285 in benchmarks, the real questions will be the power envelope, how many Larrabee chips Intel can squeeze onto a graphics board, what kind of CrossFire/SLI-esque scheme it can devise, and, most importantly, the price.

A direct comparison between NVIDIA's GTX 285 and Larrabee is somewhat misleading, though, because Larrabee is unique in several ways.  First, Larrabee supports x86 instructions.  Second, it uses tile-based rendering, accomplishing tasks like z-buffering, clipping, and blending in software where its competitors use dedicated hardware (Microsoft's Xbox 360 takes a similar tile-based approach).  Third, all of its cores have cache coherency.  In theory, these features combine to make Larrabee easier to program games for than NVIDIA's and AMD's discrete offerings.
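To make the software-pipeline point concrete, here is a rough sketch of a z-buffer test performed in plain code over a cached screen tile. The 64x64 tile size, data layout, and function names are assumptions for illustration, not Larrabee specifics.

    #include <cstdint>

    // Fixed-function work done as ordinary code over one screen tile.
    constexpr int TILE = 64;

    struct Tile {
        float    depth[TILE * TILE];   // per-tile z-buffer, small enough to stay in cache
        uint32_t color[TILE * TILE];   // per-tile color buffer
    };

    // The z-buffer test as plain code: keep the fragment only if it is
    // nearer than what the tile already holds. Blending would slot in here too.
    inline void depthTestAndWrite(Tile& t, int x, int y, float z, uint32_t rgba) {
        int i = y * TILE + x;
        if (z < t.depth[i]) {
            t.depth[i] = z;
            t.color[i] = rgba;
        }
    }

    int main() {
        static Tile t;
        for (int i = 0; i < TILE * TILE; ++i) t.depth[i] = 1.0f;  // clear to far plane
        depthTestAndWrite(t, 3, 4, 0.25f, 0xff0000ffu);           // nearer fragment wins
    }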

The GPU also still has some time to grow and be fine-tuned; it's not expected until the first half of 2010.  Intel is just now lining up board partners for the new card, so it should be interesting to see which companies jump on the bandwagon.

At IDF Intel also hinted at the rest of its upcoming graphics plans.  It says it's preparing a "big jump ahead" in integrated graphics with a product to be released next year, which will compete with the NVIDIA Ion.  Intel is also preparing a system-on-a-chip package pairing Larrabee with an Intel x86 processor, aimed at the mobile and handheld device market.

It's unclear how low Intel can push the embedded Larrabee's power envelope or when the SoC will arrive, though the company did make clear that it is targeting the chip at "non-PC" applications.



Comments

By Mitch101 on 9/24/2009 12:32:43 PM , Rating: 2
I'm pretty sure they won't be able to get ray tracing to an acceptable frame rate to compete with rasterization today, but ray tracing is the future of gaming. Pretty cool to see that it's down the road and around the corner.

Intel's odds of competing in rasterization are slim, but they have deep pockets, are firing on all cylinders lately, and don't count out the x-factor of being the leader in chip fabrication. NVIDIA and AMD tend to stumble in manufacturing where Intel excels.

If they do get a leg up in ray tracing over the competition, it could be enough to pique the interest of places with render farms.

On a more distant note, NVIDIA should be a bit worried if Intel is able to get some developer wins with an x86-based graphics card, and it should get them even if the chip is only used as a sort of co-processor to the CPU. NVIDIA doesn't have an x86 license, and I doubt Intel would grant one. But if Intel did, how long and how much would it take NVIDIA to develop an x86-based chip worth buying?


By freaqie on 9/24/2009 1:49:48 PM , Rating: 2
Actually, they probably will.
If you have a quad-core, check out Arauna, a real-time ray tracer; it does about 40 fps on a decent quad-core at a reasonable resolution (1024-wide or something).

Taking into account that Larrabee will have many times more cores and can handle 16-wide vectors, it should definitely be able to do so.

http://igad.nhtv.nl/~bikker/


By FITCamaro on 9/24/2009 3:22:58 PM , Rating: 1
A core on Larrabee is not equal to one of the cores in a Core 2 Duo or a Core i5/i7 quad core. They are far less powerful.


By freaqie on 9/24/2009 4:48:09 PM , Rating: 4
And it's built specifically for ray tracing...

It can process 16 vector lanes per clock per core; an i7 can do 4, so that's 4 times the per-core throughput. It also has far more cores, and ray tracing scales almost linearly with the number of cores.

As for clock speed, that is relatively unknown, but:

Intel claimed Larrabee would do 2 teraflops, and Larrabee (without a die shrink) would have 32 cores. Naively, 32 cores x 16 single-precision SIMD lanes per core x 2 GHz works out to 32 x 16 x 2,000,000,000 = 1,024,000,000,000, which is only about half the claim. The step I was missing: a fused multiply-add counts as two floating-point ops per lane per cycle, which doubles it to 2.048 TFLOPS. If that's right, it implies Larrabee runs at about 2 GHz.

Either way, on the order of a trillion vector operations per second is a lot. It will suck at a lot of other things, but it's good at everything ray tracing needs, and there's no way an i7 can ever do that many rays per second.

Of course this is theoretical peak performance, but even if we take 50% or 25% of that, it is still way faster than any quad-core.
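As a sanity check, here is the arithmetic in a few lines of C++ (the multiply-add factor of 2 is an assumption about how the 2 TFLOPS figure is counted):

    #include <cstdio>

    // Back-of-the-envelope peak for the rumored 32-core part at 2 GHz.
    // The factor of 2 assumes a fused multiply-add counts as two flops
    // per lane per cycle; drop it and the total stops at ~1.02 TFLOPS.
    int main() {
        double cores = 32, lanes = 16, flopsPerLane = 2, hz = 2.0e9;
        double tflops = cores * lanes * flopsPerLane * hz / 1e12;
        std::printf("peak: %.3f TFLOPS\n", tflops);   // prints: peak: 2.048 TFLOPS
    }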

But assuming it meets the projections...

According to Intel, this chip was running at about 10% of its eventual performance. I don't buy that, but 20-25% seems possible. This version ran the ray tracer at about the speed of a 16-core Core 2-based Xeon system, so the final part would end up roughly as fast as 64 of those Xeon Core 2 cores.

And that is not too shabby, I think... Again, I am just speculating, but don't write it off just yet. This was A-step silicon, not the just-taped-out B0 stepping.


By freaqie on 9/24/2009 4:50:17 PM , Rating: 2
I mean 64 cores in total; that is, a Core 2 Quad-based system with 64 cores, not 64 quad-core CPUs.


By jconan on 9/26/2009 9:30:53 PM , Rating: 2
Does NVIDIA's implementation of ray tracing ( http://www.nvidia.com/object/siggraph_2009.html ) use the CPU, or is it heavily GPU-based in comparison to Intel's?


"When an individual makes a copy of a song for himself, I suppose we can say he stole a song." -- Sony BMG attorney Jennifer Pariser
