Working silicon of Larrabee, Intel's upcoming 2010 discrete GPU, was shown running a ray-traced scene from id Software's Enemy Territory: Quake Wars. It will be the only desktop GPU based on an x86 architecture. Intel brags about its efficiency, saying the water effects (reflections, transparency, etc.) were accomplished with only 10 lines of C++ code.  (Source: CNET/YouTube)
Intel prepares to jump into the discrete graphics market

In the 1990s a plethora of graphics chip makers rose and fell.  Eventually only two were left standing -- NVIDIA and ATI.  ATI would later be acquired by AMD.  Though AMD was locked in a fierce battle with Intel in the multicore era, it also managed to score some points in the last-generation graphics war with its 4000 series (RV770) cards.

Before this happened, a third player, Intel, had attacked the market and quietly built up a dominant market share in low-end graphics.  Its integrated GPUs popped up on netbooks, low-end laptops, and even a number of low-end desktops.  Now the graphics market seems poised to flip yet again.  NVIDIA is on the verge of seizing much of Intel's low-end and mobile market share, thanks to its Ion platform and new mobile GPUs.  And AMD (ATI) looks poised to attack in the latest round of the discrete graphics wars with aggressively priced DirectX 11 GPUs.  That leaves Intel, which is preparing to take on NVIDIA and AMD with a brand new discrete graphics offering called Larrabee.

First revealed at SIGGRAPH in August of 2008, Larrabee is now reality.  Intel has just shown off working models of Larrabee GPUs at its Intel Developers Forum in San Francisco.

Built on a multicore die-shrink of Intel's Pentium P54C architecture, the powerful new graphics chip was able to render a ray-traced scene from the id Software game Enemy Territory: Quake Wars with ease.  Ray tracing is an advanced technique that has long been touted as an eventual replacement for rasterization in video games.  Currently it is used for the high-quality 3D animation found in many films.
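
For readers unfamiliar with the technique, here is a minimal, purely illustrative C++ sketch of what a ray tracer does at its core: fire a ray from the camera through each pixel and test it against scene geometry (a single sphere here, printed as ASCII art). The scene, camera, and output are invented for the example; this is not Intel's demo code.

    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Returns true if a ray (origin o, unit direction d) intersects the sphere.
    static bool hitSphere(Vec3 o, Vec3 d, Vec3 center, float radius) {
        Vec3 oc = sub(o, center);
        float b = 2.0f * dot(oc, d);
        float c = dot(oc, oc) - radius * radius;
        return b * b - 4.0f * c >= 0.0f;              // quadratic discriminant
    }

    int main() {
        const int W = 16, H = 8;
        Vec3 eye{0.0f, 0.0f, 0.0f}, center{0.0f, 0.0f, -3.0f};
        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x) {
                // Fire a ray through each pixel of a simple pinhole camera.
                Vec3 d{(x - W / 2.0f) / W, (y - H / 2.0f) / H, -1.0f};
                float len = std::sqrt(dot(d, d));
                d = {d.x / len, d.y / len, d.z / len};
                std::putchar(hitSphere(eye, d, center, 1.0f) ? '#' : '.');
            }
            std::putchar('\n');
        }
        return 0;
    }

Real ray tracers extend this loop with recursive reflection, refraction, and shadow rays, which is where the technique's image quality (and its computational cost) comes from.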

The demo also gave a peek at Gulftown, a six-core Core i9 processor.  Built on the upcoming Westmere architecture (a Nehalem die shrink), Gulftown is the "extreme edition" counterpart of Clarkdale (desktop) and Arrandale (laptop).

Larrabee, however, received the most interest.  The working chip was said to be on par with NVIDIA's GTX 285.  With AMD's latest offerings trouncing the GTX 285 in benchmarks, the real questions will be the power envelope, how many Larrabee chips Intel can squeeze onto a graphics board, what kind of CrossFire/SLI-esque scheme it can devise, and, most importantly, the price.

A direct comparison between NVIDIA's GTX 285 and Larrabee is somewhat misleading, though, because Larrabee is unique in several ways.  First, Larrabee supports x86 instructions.  Second, it uses tile-based rendering and handles tasks like z-buffering, clipping, and blending in software that its competitors do in hardware (Microsoft's Xbox 360 works this way too).  Third, all of its cores have cache coherency.  All of these features stack up to (in theory) make Larrabee easier to program games for than NVIDIA's and AMD's discrete offerings.
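
To make the software-pipeline idea more concrete, below is a rough, hypothetical C++ sketch of tile binning: triangles are sorted into screen tiles by their bounding boxes so each tile can later be shaded, z-tested, and blended independently in software. The tile size, screen dimensions, and data layout are assumptions for illustration and say nothing about Larrabee's actual renderer.

    #include <algorithm>
    #include <vector>

    struct Triangle { float x0, y0, x1, y1, x2, y2; };    // 2D screen-space vertices

    constexpr int TILE = 64;                               // tile size in pixels (assumed)
    constexpr int SCREEN_W = 1024, SCREEN_H = 768;         // assumed framebuffer size
    constexpr int TILES_X = (SCREEN_W + TILE - 1) / TILE;
    constexpr int TILES_Y = (SCREEN_H + TILE - 1) / TILE;

    // Assigns each triangle to every tile its screen-space bounding box overlaps,
    // so each tile's triangle list can later be rasterized, z-tested, and blended
    // independently (and in parallel) in software.
    void binTriangles(const std::vector<Triangle>& tris,
                      std::vector<std::vector<int>>& bins) {
        bins.assign(TILES_X * TILES_Y, {});
        for (int i = 0; i < static_cast<int>(tris.size()); ++i) {
            const Triangle& t = tris[i];
            // Clamp the bounding box to the screen before converting to tile indices.
            float minX = std::max(0.0f, std::min({t.x0, t.x1, t.x2}));
            float maxX = std::min(float(SCREEN_W - 1), std::max({t.x0, t.x1, t.x2}));
            float minY = std::max(0.0f, std::min({t.y0, t.y1, t.y2}));
            float maxY = std::min(float(SCREEN_H - 1), std::max({t.y0, t.y1, t.y2}));
            if (minX > maxX || minY > maxY) continue;      // triangle is off-screen
            for (int ty = int(minY) / TILE; ty <= int(maxY) / TILE; ++ty)
                for (int tx = int(minX) / TILE; tx <= int(maxX) / TILE; ++tx)
                    bins[ty * TILES_X + tx].push_back(i);  // triangle i touches this tile
        }
    }

Because each tile fits comfortably in a core's cache and tiles do not overlap, this style of renderer maps naturally onto many small x86 cores.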

The GPU also still has some time to grow and be fine-tuned; it's not expected until the first half of 2010.  Intel is just now lining up board partners for the new card, so it should be interesting to see which companies jump on the bandwagon.

At IDF Intel also gave hints at the rest of its upcoming graphics plans.  It says that it's preparing to take a "big jump ahead" in integrated graphics with a product to be released next year, which will compete with NVIDIA's Ion.  Intel is also preparing a system-on-a-chip package combining Larrabee with an Intel x86 processor, which it is using to target the mobile and handheld device market.

It's unclear, though, how low Intel can push the embedded Larrabee's power envelope or when the SoC will arrive.  Intel did make it clear, however, that it is targeting the chip at "non-PC" applications.



Comments

By Spoelie on 9/24/2009 9:56:04 AM , Rating: 4
* Indeed, ray tracing is a brute-force approach; it is not a replacement for rasterization. For a lot of the work involved in building a 3D scene, rasterization will remain the preferred, faster, and more efficient way. The future is not for ray tracing to replace rasterization completely, but for ray tracing to complement it (e.g. rendering the reflections/shadowing after rasterization; see the sketch after this list).
* Efficiency as used in the article is misleading; it is meant here solely as #LOC/effect, not #calculations/effect, #time/effect, or #joules of power consumed/effect.
* How will games be easier to program for Larrabee? They all use OpenGL/DirectX APIs, and this will not change with Larrabee; the difference should only be visible in GPGPU applications.
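
A hypothetical C++ sketch of that hybrid approach might look like the following: rasterize the base image first, then trace extra rays only where they add something. The G-buffer layout, blend weights, and traceReflection stub are invented for illustration and are not taken from any real engine.

    #include <vector>

    struct Color { float r, g, b; };
    struct GBufferSample { Color base; bool reflective; };

    // Stand-in for a real ray cast from the surface visible at pixel (x, y);
    // a real engine would reconstruct position/normal from the G-buffer and
    // intersect the scene's acceleration structure.
    Color traceReflection(int x, int y) {
        (void)x; (void)y;
        return {0.2f, 0.3f, 0.5f};
    }

    // Rasterized results come in via the G-buffer; ray tracing is used only
    // for the pixels that actually need it.
    void hybridShade(const std::vector<GBufferSample>& gbuffer,
                     std::vector<Color>& frame, int width) {
        for (int i = 0; i < static_cast<int>(gbuffer.size()); ++i) {
            Color c = gbuffer[i].base;                     // rasterized result
            if (gbuffer[i].reflective) {
                Color refl = traceReflection(i % width, i / width);
                // Blend the traced reflection over the rasterized base color.
                c = {0.7f * c.r + 0.3f * refl.r,
                     0.7f * c.g + 0.3f * refl.g,
                     0.7f * c.b + 0.3f * refl.b};
            }
            frame[i] = c;
        }
    }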


By sefsefsefsef on 9/24/2009 11:55:47 AM , Rating: 3
Rasterization is faster because it's less accurate. Rasterization fudges the lighting model; ray tracing does not. For this reason, ray tracing will NEVER go away, and some people will always think its takeover of rasterization is imminent.

Efficiency in code size is not as insignificant as you seem to be suggesting. The smaller the code, the more space there is for data, generally speaking. Having said that, ray tracing doesn't benefit much from caching. Most of the data gets used only once before it would be replaced under a normal cache-line replacement policy in a reasonably sized cache, and writes should never be cached. Ray tracing works best when data is not cached by default and only some blocks are selectively cached (like root nodes of data structures that are accessed by all threads). I cannot speak to the cache utilization of rasterization, because I have less experience in that field.
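
As a concrete illustration of the "writes should never be cached" point, x86 software can use non-temporal (streaming) stores that bypass the cache. The snippet below is a generic SSE example of that concept, not Larrabee code, and the buffer layout is assumed.

    #include <xmmintrin.h>

    // Writes an RGBA color to a pixel buffer using non-temporal stores, which
    // bypass the cache hierarchy. 'pixels' must be 16-byte aligned and hold
    // 4 * count floats.
    void writePixelsStreaming(float* pixels, int count) {
        // Lanes are stored to memory as R = 0.75, G = 0.5, B = 0.25, A = 1.0.
        const __m128 color = _mm_set_ps(1.0f, 0.25f, 0.5f, 0.75f);
        for (int i = 0; i < count; ++i) {
            _mm_stream_ps(pixels + 4 * i, color);          // non-temporal store
        }
        _mm_sfence();                                      // flush write-combining buffers
    }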


By Spoelie on 9/24/2009 12:31:26 PM , Rating: 2
Carmack on ray tracing:
http://www.pcper.com/article.php?aid=532

Conclusion: not in its current form; rasterization is the better solution, even if it's less exact.


By William Gaatjes on 9/24/2009 1:45:06 PM , Rating: 2
But even Carmack confirms that ray tracing is a good thing, and that he is planning to use ray tracing together with rasterization. I agree that rasterization may be preferred for a few generations of GPUs to come.

However, I think Intel has some tricks up its sleeve.

One is to use ray tracing for collision detection as well, and maybe they have found a way to do realistic physics with ray tracing too. After all, they also acquired Havok and Neoptica.

http://www.theinquirer.net/inquirer/news/1001006/i...

http://blogs.intel.com/research/2007/10/real_time_...

http://www.beyond3d.com/content/news/534

My theory is that Intel has found a lot more useful applications of ray tracing to old, seemingly unrelated problems. I would not be surprised if Intel came up with not just a demo but a complete game engine, one that uses the accuracy of ray tracing for many tasks.


By geddarkstorm on 9/24/2009 2:46:50 PM , Rating: 2
Caustic has some things to show you then.

http://pcper.com/comments.php?nid=7801

If they succeed -- well, technically they already have; their system is already far faster than any ray tracer to date by a long shot. But if they can get ray tracing up to the same speed as rasterization, the latter will be eclipsed. I'm sure it'll still have its uses, though.

