



Working silicon of Larrabee, Intel's upcoming 2010 discrete GPU, was shown running a ray-traced scene from id Software's Enemy Territory: Quake Wars. It will be the only desktop GPU based on an x86 architecture. Intel brags of its efficiency, saying the water effects (reflections, transparency, etc.) were accomplished with only 10 lines of C++ code.  (Source: CNET/YouTube)
Intel prepares to jump into the discrete graphics market

In the 1990s a plethora of graphics chip makers rose and fell.  Eventually only two were left standing -- NVIDIA and ATI.  ATI would later be acquired by AMD.  Though AMD was in a fierce battle with Intel in the multicore era, it also managed to score some points in the last generation graphics war with its 4000 series cards (RV770). 

Before this happened, a third player, Intel, had attacked the market and quietly built up a dominant market share in low-end graphics. Its integrated GPUs popped up in netbooks, low-end laptops, and even a number of low-end desktops.  Now the graphics market seems poised to flip yet again.  NVIDIA is on the verge of seizing much of Intel's low-end and mobile market share, thanks to its Ion platform and new mobile GPUs.  And AMD (ATI) looks poised to attack in the latest round of the discrete graphics wars with aggressively priced DirectX 11 GPUs.  That leaves Intel, which is preparing to take on NVIDIA and AMD with a brand new discrete graphics offering called Larrabee.

First revealed at SIGGRAPH in August of 2008, Larrabee is now a reality.  Intel has just shown off working models of Larrabee GPUs at the Intel Developer Forum in San Francisco.

Built on a multicore die shrink of Intel's original Pentium (P54C) architecture, the powerful new graphics chip was able to render a ray-traced scene from the id Software game Enemy Territory: Quake Wars with ease.  Ray tracing is an advanced technique that has long been touted as an eventual replacement for rasterization in video games.  Currently it is used for the high-quality 3D animation found in many films.
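
To give a sense of the technique: a ray tracer fires a ray from the camera through each pixel and tests it against the scene's geometry, shading the pixel according to what the ray hits (and, in full renderers, spawning further rays for reflections and shadows). The sketch below is a deliberately tiny, hypothetical C++ illustration of that inner loop for a single sphere; it has nothing to do with Intel's actual demo code.

#include <cstdio>

// Minimal 3-component vector with just the operations this sketch needs.
struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    float dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

// True if a ray (origin o, direction d) intersects a sphere of radius r centered at c.
bool hitSphere(const Vec3& o, const Vec3& d, const Vec3& c, float r) {
    Vec3 oc = o - c;
    float a = d.dot(d);
    float b = 2.0f * oc.dot(d);
    float k = oc.dot(oc) - r * r;
    return b * b - 4.0f * a * k >= 0.0f;   // non-negative discriminant = hit
}

int main() {
    const int W = 64, H = 32;
    const Vec3 eye{0.0f, 0.0f, 0.0f}, center{0.0f, 0.0f, -3.0f};
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // One primary ray per pixel through a simple pinhole camera.
            Vec3 dir{(x - W / 2) / float(W), (y - H / 2) / float(H), -1.0f};
            std::putchar(hitSphere(eye, dir, center, 1.0f) ? '#' : '.');
        }
        std::putchar('\n');
    }
    return 0;
}

Scaling this idea up to millions of rays per frame, plus secondary rays for reflections and shadows, is what makes ray tracing so computationally hungry and, at the same time, so embarrassingly parallel.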

The demo also gave a peek at Gulftown, a six-core "Core i9" processor. Built on the upcoming Westmere architecture (a Nehalem die shrink), Gulftown is the "extreme edition" counterpart of Clarkdale (desktop) and Arrandale (laptop).

Larrabee, however, received the most interest.  The working chip was said to be on par with NVIDIA's GTX 285.  With AMD's latest offerings trouncing the GTX 285 in benchmarks, the real questions will be the power envelope, how many Larrabee chips Intel can squeeze onto a graphics board, what kind of CrossFire/SLI-esque scheme it can devise, and, most importantly, the price.

A direct comparison between NVIDIA's GTX 285 and Larrabee is somewhat misleading, though, because Larrabee is unique in several ways.  First, Larrabee supports x86 instructions.  Second, it uses tile-based rendering and performs tasks like z-buffering, clipping, and blending in software that its competitors handle in dedicated hardware (Microsoft's Xbox 360 works this way too).  Third, all of its cores have cache coherency.  All of these features stack up to (in theory) make Larrabee easier to program games for than NVIDIA's and AMD's discrete offerings.
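
To illustrate what doing those stages in software with tile-based rendering means in practice: the frame is split into small screen tiles, each triangle is binned to the tiles it overlaps, and a core then rasterizes, z-buffers, and blends one tile at a time with that tile's color and depth data sitting in its cache. The snippet below is a rough, hypothetical sketch of the binning step only; the tile size and all names are assumptions for illustration, not Larrabee's actual renderer.

#include <vector>
#include <algorithm>

constexpr int TILE = 64;                       // assumed tile size in pixels

struct Triangle { float x[3], y[3]; };         // 2-D screen-space vertices

// Bin each triangle into every TILE x TILE screen tile its bounding box overlaps.
// bins[tileY * tilesX + tileX] holds the indices of triangles touching that tile.
std::vector<std::vector<int>> binTriangles(const std::vector<Triangle>& tris,
                                           int width, int height) {
    int tilesX = (width + TILE - 1) / TILE;
    int tilesY = (height + TILE - 1) / TILE;
    std::vector<std::vector<int>> bins(tilesX * tilesY);

    for (int i = 0; i < (int)tris.size(); ++i) {
        const Triangle& t = tris[i];
        float minX = std::min({t.x[0], t.x[1], t.x[2]});
        float maxX = std::max({t.x[0], t.x[1], t.x[2]});
        float minY = std::min({t.y[0], t.y[1], t.y[2]});
        float maxY = std::max({t.y[0], t.y[1], t.y[2]});

        int tx0 = std::max(0, (int)minX / TILE);
        int tx1 = std::min(tilesX - 1, (int)maxX / TILE);
        int ty0 = std::max(0, (int)minY / TILE);
        int ty1 = std::min(tilesY - 1, (int)maxY / TILE);

        for (int ty = ty0; ty <= ty1; ++ty)
            for (int tx = tx0; tx <= tx1; ++tx)
                bins[ty * tilesX + tx].push_back(i);
    }
    return bins;   // each tile can then be rasterized and shaded independently per core
}

Because every tile's working set stays cache-resident while a core works on it, stages that GPUs normally hard-wire can be handled in ordinary software loops at tolerable cost, which is the bet Larrabee's design makes.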

The GPU also still has some time to grow and be fine-tuned; it's not expected until the first half of 2010.  Intel is just now lining up board partners for the new card, so it should be interesting to see which companies jump on the bandwagon.

At IDF Intel also gave hints at the rest of its upcoming graphics plans.  It says that it's preparing to take a "big jump ahead" in integrated graphics with an upcoming product to be released next year, which will compete with the NVIDIA Ion.  Intel is also preparing a system-on-a-chip package with Larrabee and an Intel x86 processor, which it is using to target the mobile and handheld devices market. 

It's unclear, though, how low Intel can push the embedded Larrabee's power envelope or when the SoC will arrive.  It did make it clear, though, that it is targeting the chip for "non-PC" applications.



Comments



By stmok on 9/24/2009 9:32:11 AM , Rating: 2
Agreed. Ray tracing requires huge computing power. Whether Larrabee hardware can do it at an acceptable frame rate is another story.

Larrabee can also do rasterization. Its graphics stack is software based.


By therealnickdanger on 9/24/2009 10:01:21 AM , Rating: 4
True enough about the framerate. I don't think that the Larrabee "demo" was moving; they just showed pre-rendered still images of what it could do if it was functioning. Anand's article states the following:

quote:
The demo was nothing more than proof of functionality, but Sean Maloney did officially confirm that Larrabee's architecture would eventually be integrated into a desktop CPU at some point in the future ... Larrabee rendered that image above using raytracing, it's not running anywhere near full performance.

http://www.anandtech.com/cpuchipsets/showdoc.aspx?...


By Mitch101 on 9/24/2009 12:32:43 PM , Rating: 2
I'm pretty sure they won't be able to get ray tracing to any acceptable frame rate to compete with rasterization today, but ray tracing is the future of gaming. Pretty cool to see that it's down the road and around the corner.

The odds of Intel competing in rasterization are slim, but they have deep pockets, are firing on all cylinders lately, and don't count out the X-factor of being the leader in chip fabrication. NVIDIA and AMD tend to stumble in manufacturing where Intel excels.

If they do get a leg up in ray tracing over the competition, it could be enough to pique the interest of places with render farms.

On a faraway note, NVIDIA should be a bit worried if Intel is able to get some developer wins with an x86-based graphics card, and they should get them even if the chip is used as a sort of co-processor to the CPU. NVIDIA doesn't have an x86 license, and I doubt Intel would grant one. But if they did, how long and how much would it take NVIDIA to develop an x86-based chip worth buying?


By freaqie on 9/24/2009 1:49:48 PM , Rating: 2
Actually, they probably will. If you have a quad-core, check out Arauna real-time ray tracing. It does about 40 fps on a decent quad-core at a reasonable resolution (1024 or something).

Taking into account that Larrabee will have many more cores and can handle 16-wide vectors, it should definitely be able to do so.

http://igad.nhtv.nl/~bikker/


By FITCamaro on 9/24/2009 3:22:58 PM , Rating: 1
A core on Larrabee is not equal to one of the cores on a Core 2 Duo or Core i5/7 quad core. They are far less powerful.


By freaqie on 9/24/2009 4:48:09 PM , Rating: 4
and built specifically for raytracing...

it can process 16 vectors per clock per core.
a I7 can do 4. so that is 4 times more efficient.
also it has way more cores.
raystracing scales almost linearly with the amount of cores.

as for clockspeed.. well that is relatively unknown.
but:

intel clamed larrabee would do 2 teraflops.
and larrabee (without die shrink ) would have 32 cores.
so 32 cores x 16 single-precision float SIMD per core x 2GHz per core = 2 TFLOPS

if i'm correct that would imply larrabee is 2ghz.
larrabee can do 16 vectors per core per clock

2 ghz is 2.000.000.000 hertz
if so 32x16x2 000 000 000= 1 024 000 000 000.......

i assume i am doing something wrong here.
because if it's theoretical performance would be:

1 024 000 000 000 vectors per second...

that is a lot...
it will suck in a lot of other things.
but it's good in all things required for raytracing.

and NO way a I7 can ever do that many rays per second.

ofcourse this is theoretcial peak performance
but if we even ake 50 or 25% of that.
it is still way faster then any quadcore.

but, assuming it meets the projections...

according to intel this thing was working on 10% of the power.
i don't buy that. but 20-25% i assume possible.
that means the performance (this version ran the raytracer at about the speed of a 16core core 2 based xeon system.

so it will then end up being as fast as 64 Xeon core 2 quad cores.

and that is not too shabby i think...
again i am just speculating... but don;t write it off just yet.. it was Ax silicon.. not the just taped out B0
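
(As for the arithmetic above: the gap between 32 x 16 x 2 GHz, which is about 1 TFLOPS, and Intel's claimed 2 TFLOPS disappears if the peak figure counts a fused multiply-add as two floating-point operations per lane per clock, which is the usual convention for quoting peak rates. A quick sanity check using those same assumed figures, purely hypothetical:)

#include <cstdio>

int main() {
    // Assumed figures from the speculation above: 32 cores, 16-wide
    // single-precision SIMD, 2 GHz, and 2 FLOPs per lane per clock
    // (one multiply + one add), the usual peak-rate convention.
    const double cores = 32.0, lanes = 16.0, ghz = 2.0, flopsPerLane = 2.0;
    const double peakTflops = cores * lanes * ghz * flopsPerLane / 1000.0;
    std::printf("theoretical peak: %.2f TFLOPS\n", peakTflops);  // prints ~2.05
    return 0;
}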


By freaqie on 9/24/2009 4:50:17 PM , Rating: 2
I mean 64 cores in total... so 64 cores in total, in a Core 2 Quad-based system.


By jconan on 9/26/2009 9:30:53 PM , Rating: 2
Does NVIDIA's implementation of ray tracing (http://www.nvidia.com/object/siggraph_2009.html) use the CPU, or is it heavily GPU-based in comparison to Intel's?


By conquistadorst on 9/24/2009 4:52:57 PM , Rating: 3
quote:
True enough about the framerate. I don't think that the Larrabee "demo" was moving; they just showed pre-rendered still images of what it could do if it was functioning. Anand's article states the following:


I think you're missing the point of what ray tracing is. The image wasn't pre-rendered; it was rendered using ray tracing. You make it sound like it's simply taking an image and tossing it up on the screen. No, it works by taking objects, plotting them pixel by pixel, and determining each pixel's result by "tracing" the path of light through an image plane. It functions, but as other posters have already pointed out, any processor out there today can accomplish this very task.


By therealnickdanger on 9/24/2009 10:24:58 PM , Rating: 3
No, I know what ray-tracing is, but some of the articles and comments I've read so far are reporting as if this was a real-time 3-D demo, as in "motion", when it was simply a couple selected frames. That's the only discrepancy I was commenting on.


By Veerappan on 9/25/2009 3:02:49 PM , Rating: 2
There are actual videos of this demo out on the net. It was being rendered on the fly, and was not just a selection of pre-rendered frames. The comment about it not yet running at full speed was a comment on the frame rate, which was probably below 30 fps.


By StevoLincolnite on 9/24/2009 10:04:23 AM , Rating: 3
It's interesting, really. Intel pretty much refused to add T&L hardware to any of its IGPs, instead having the CPU render it. Then, even with IGPs like the X3100, Intel has a system where the drivers choose to render T&L in software or in hardware depending on the game and which mode will give better performance.

To this day, Intel has never released a GPU with dedicated T&L hardware; instead, those operations are performed on the vertex shaders these days.

I guess they were looking at it from the perspective that the more reliant the hardware is on the CPU, the more demand there will be for faster CPUs to help render the 3D application, which in turn sells more processors and earns them more money. That is just my theory on it all, take it or leave it. :)


By Fireshade on 9/25/2009 11:29:27 AM , Rating: 3
I believe that has always been Intel's strategy for everything they bring out, even standards.
For example, Intel is the architect of the USB standard, and guess what: USB relies on CPU processing. There are many more of these little examples.


By LRonaldHubbs on 9/24/2009 9:17:55 PM , Rating: 4
quote:
Ray-tracing requires huge computing power.

Up to some point, yes. The required computational power for ray tracing grows roughly logarithmically with scene complexity (given an acceleration structure such as a BVH), while it grows linearly for traditional rendering. Thus at some point ray tracing will become the more economical option and will finally see widespread adoption. The fact that a large company like Intel is investing in this technology means we are finally nearing that point.


"Mac OS X is like living in a farmhouse in the country with no locks, and Windows is living in a house with bars on the windows in the bad part of town." -- Charlie Miller

Related Articles



Latest Headlines
4/21/2014 Hardware Reviews
April 21, 2014, 12:46 PM
4/16/2014 Hardware Reviews
April 16, 2014, 9:01 AM
4/15/2014 Hardware Reviews
April 15, 2014, 11:30 AM
4/11/2014 Hardware Reviews
April 11, 2014, 11:03 AM










botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki