



Working silicon of Larrabee, Intel's upcoming 2010 discrete GPU, was shown running a ray-traced scene from id Software's Enemy Territory: Quake Wars. The GPU will become the only desktop GPU based on an x86 architecture. Intel brags of its efficiency, saying the water effects (reflections, transparency, etc.) were accomplished with only 10 lines of C++ code.  (Source: CNET/YouTube)
Intel prepares to jump into the discrete graphics market

In the 1990s a plethora of graphics chip makers rose and fell.  Eventually only two were left standing -- NVIDIA and ATI.  ATI would later be acquired by AMD.  Though AMD was in a fierce battle with Intel in the multicore era, it also managed to score some points in the last generation graphics war with its 4000 series cards (RV770). 

Before this happened, a third player, Intel, had attacked the market and quietly built up a dominant market share in low-end graphics. Its integrated GPUs popped up in netbooks, low-end laptops, and even a number of low-end desktops.  Now the graphics market seems poised to flip yet again.  NVIDIA is on the verge of seizing much of Intel's low-end and mobile market share, thanks to its Ion platform and new mobile GPUs.  And AMD (ATI) looks poised to attack in the latest round of the discrete graphics wars with aggressively priced DirectX 11 GPUs.  That leaves Intel, which is preparing to take on NVIDIA and AMD with a brand new discrete graphics offering called Larrabee.

First revealed at SIGGRAPH in August of 2008, Larrabee is now reality.  Intel has just shown off working models of Larrabee GPUs at its Intel Developers Forum in San Francisco.

Built on a multicore die shrink of Intel's Pentium (P54C) architecture, the powerful new graphics chip was able to render a ray-traced scene from the id Software game, Enemy Territory: Quake Wars, with ease.  Ray tracing is an advanced technique that has long been touted as an eventual replacement for rasterization in video games.  Currently it is used for the high-quality 3D animation found in many films.
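To see why so few lines can cover an effect like reflective water, consider the core of a ray tracer. The C++ sketch below is purely illustrative (it is not Intel's demo code, and the vector type and helper names are invented for this example): an intersection test plus a reflection formula is most of what a mirror-like surface needs, since the effect falls out of recursively casting secondary rays.

    // Illustrative only: a minimal ray-sphere hit test and reflection, the kind of
    // per-pixel kernel a ray tracer evaluates. Names and structure are assumptions.
    #include <cmath>

    struct Vec { float x, y, z; };
    static Vec   sub(Vec a, Vec b)     { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static float dot(Vec a, Vec b)     { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec   scale(Vec a, float s) { return {a.x * s, a.y * s, a.z * s}; }

    // Nearer root of the ray-sphere intersection, or -1 when the ray misses.
    static float hitSphere(Vec origin, Vec dir, Vec center, float radius) {
        Vec   oc   = sub(origin, center);
        float b    = 2.0f * dot(oc, dir);
        float c    = dot(oc, oc) - radius * radius;
        float disc = b * b - 4.0f * c;              // 'a' is 1 for a unit-length direction
        return disc < 0.0f ? -1.0f : (-b - std::sqrt(disc)) * 0.5f;
    }

    // A reflective surface just spawns a secondary ray: R = D - 2(D.N)N.
    static Vec reflect(Vec dir, Vec normal) {
        return sub(dir, scale(normal, 2.0f * dot(dir, normal)));
    }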

The demo also gave a peek at Gulftown, a six-core "Core i9" processor. Built on the upcoming Westmere architecture (a 32 nm die shrink of Nehalem), Gulftown is the "extreme edition" counterpart of Clarkdale (desktop) and Arrandale (laptop).

Larrabee, however, received the most interest.  The working chip was said to be on par with NVIDIA's GTX 285.  With AMD's latest offerings trouncing the GTX 285 in benchmarks, the real questions will be the power envelope, how many Larrabee chips Intel can squeeze onto a graphics board, what kind of Crossfire/SLI-esque scheme it can devise, and, most importantly, the price.

A direct comparison between NVIDIA's GTX 285 and Larrabee is somewhat misleading, though, because Larrabee is unique in several ways.  First, Larrabee supports x86 instructions.  Second, it uses tile-based rendering to accomplish in software tasks like z-buffering, clipping, and blending that its competitors do in hardware (Microsoft's Xbox 360 works this way too).  Third, all of its cores have cache coherency.  In theory, all of these features make Larrabee easier to program games for than NVIDIA's and AMD's discrete offerings.
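As a rough illustration of what doing those stages in software can mean (a conceptual sketch with an assumed tile size and data layout, not Intel's actual pipeline), a software renderer can bin triangles into screen-space tiles and then let each core z-test and blend the tiles it owns:

    // Conceptual sketch only (assumed tile size and structures, not Intel's pipeline):
    // bin each triangle's screen-space bounding box into fixed-size tiles so that
    // per-tile z-buffering and blending can run as ordinary software on a core.
    #include <algorithm>
    #include <vector>

    struct Tri { float minX, minY, maxX, maxY; int id; };   // screen-space bounding box

    constexpr int TILE = 64;                                 // tile edge in pixels (assumed)

    std::vector<std::vector<int>> binTriangles(const std::vector<Tri>& tris,
                                               int width, int height) {
        const int tilesX = (width  + TILE - 1) / TILE;
        const int tilesY = (height + TILE - 1) / TILE;
        std::vector<std::vector<int>> bins(tilesX * tilesY);

        for (const Tri& t : tris) {
            int x0 = std::max(0, static_cast<int>(t.minX) / TILE);
            int y0 = std::max(0, static_cast<int>(t.minY) / TILE);
            int x1 = std::min(tilesX - 1, static_cast<int>(t.maxX) / TILE);
            int y1 = std::min(tilesY - 1, static_cast<int>(t.maxY) / TILE);
            for (int ty = y0; ty <= y1; ++ty)                // every tile the box overlaps
                for (int tx = x0; tx <= x1; ++tx)
                    bins[ty * tilesX + tx].push_back(t.id);
        }
        return bins;    // each bin is then rasterized, z-tested, and blended per core
    }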

The GPU also still has some time to grow and be fine-tuned.  It's not expected until the first half of 2010.  Intel is just now lining up board partners for the new card, so it should be interesting to see which companies jump on the bandwagon.

At IDF Intel also gave hints at the rest of its upcoming graphics plans.  It says that it's preparing to take a "big jump ahead" in integrated graphics with a product to be released next year, which will compete with the NVIDIA Ion.  Intel is also preparing a system-on-a-chip package combining Larrabee with an Intel x86 processor, aimed at the mobile and handheld device market.

It's unclear, though, how low Intel can push the embedded Larrabee's power envelope or when the SoC will arrive.  It did make it clear, though, that it is targeting the chip for "non-PC" applications.



Comments



By Mjello on 9/24/2009 9:22:32 AM , Rating: 5
10 lines of code to run a raytracer algorithm... Daah, that's the nature of a raytracer and has nothing to do with efficiency. A normal x86 CPU can do that, and NVIDIA's and ATI's GPUs can certainly do that too.

Raytracing is not desirable on its own. It's way too inefficient. Photo-realistic rendering is way more complicated than what a basic raytracer can produce.




By stmok on 9/24/2009 9:32:11 AM , Rating: 2
Agreed. Ray-tracing requires huge computing power. Whether Larrabee hardware can do it at an acceptable frame rate is another story.

Larrabee can also do rasterization. Its graphics stack is software based.


By therealnickdanger on 9/24/2009 10:01:21 AM , Rating: 4
True enough about the framerate. I don't think that the Larrabee "demo" was moving; they just showed pre-rendered still images of what it could do if it was functioning. Anand's article states the following:

quote:
The demo was nothing more than proof of functionality, but Sean Maloney did officially confirm that Larrabee's architecture would eventually be integrated into a desktop CPU at some point in the future ... Larrabee rendered that image above using raytracing, it's not running anywhere near full performance.

http://www.anandtech.com/cpuchipsets/showdoc.aspx?...


By Mitch101 on 9/24/2009 12:32:43 PM , Rating: 2
I'm pretty sure they won't be able to get raytracing to any acceptable frame rate to compete with rasterization today, but raytracing is the future of gaming. Pretty cool to see that it's down the road and around the corner.

The chance of Intel competing in rasterization is slim, but they have deep pockets, are firing on all cylinders lately, and don't count out the x-factor of being the leader in chip fabrication. NVIDIA and AMD tend to stumble in manufacturing where Intel excels.

If they do get a leg up in raytracing over the competition, it could be enough to pique the interest of places with render farms.

On a faraway note, NVIDIA should be a bit worried if Intel is able to get some developer wins with an x86-based graphics card, and they should get them even if the chip is used as a sort of co-processor to the CPU. NVIDIA doesn't have an x86 license, and I doubt Intel would grant one. But if they did, how long and how much would it take NVIDIA to develop an x86-based chip worth buying?


By freaqie on 9/24/2009 1:49:48 PM , Rating: 2
Actually, they probably will.
If you have a quad core, check out Arauna real-time raytracing.
It does about 40 fps on a decent quad core at a reasonable resolution (1024 or something).

Taking into account that Larrabee will have a multitude more cores and can handle 16-wide vectors...
it should definitely be able to do so.

http://igad.nhtv.nl/~bikker/


By FITCamaro on 9/24/2009 3:22:58 PM , Rating: 1
A core on Larrabee is not equal to one of the cores on a Core 2 Duo or Core i5/7 quad core. They are far less powerful.


By freaqie on 9/24/2009 4:48:09 PM , Rating: 4
And built specifically for raytracing...

It can process 16 vectors per clock per core.
An i7 can do 4, so that is 4 times more efficient.
Also, it has way more cores.
Raytracing scales almost linearly with the number of cores.

As for clockspeed... well, that is relatively unknown, but:

Intel claimed Larrabee would do 2 teraflops,
and Larrabee (without a die shrink) would have 32 cores,
so 32 cores x 16 single-precision float SIMD lanes per core x 2 GHz per core = 2 TFLOPS.

If I'm correct, that would imply Larrabee runs at 2 GHz.
Larrabee can do 16 vectors per core per clock.

2 GHz is 2,000,000,000 hertz,
so 32 x 16 x 2,000,000,000 = 1,024,000,000,000...

I assume I am doing something wrong here,
because then its theoretical performance would be:

1,024,000,000,000 vectors per second...

That is a lot...
It will suck at a lot of other things,
but it's good at all the things required for raytracing.

And there is NO way an i7 can ever do that many rays per second.

Of course this is theoretical peak performance,
but even if we take 50 or 25% of that,
it is still way faster than any quad core.

But, assuming it meets the projections...

According to Intel this thing was working at 10% of its power.
I don't buy that, but 20-25% I assume is possible.
That means this version ran the raytracer at about the speed of a 16-core Core 2 based Xeon system,

so it will then end up being as fast as 64 Xeon Core 2 quad cores.

And that is not too shabby, I think...
Again, I am just speculating... but don't write it off just yet... it was A-stepping silicon, not the just-taped-out B0.
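For what it's worth, here is a quick back-of-the-envelope check of those numbers (the 2 GHz clock and the FLOP accounting are assumptions, not published Intel specs): 32 cores x 16 lanes x 2 GHz gives about 1.0e12 lane operations per second, and if each lane operation is a fused multiply-add counted as two floating-point operations, that works out to roughly 2 TFLOPS, which is one way the quoted figure can be reached.

    // Back-of-the-envelope sketch (assumed clock and FMA accounting, not Intel specs).
    #include <cstdio>

    int main() {
        const double cores   = 32;        // Larrabee cores, as discussed above
        const double lanes   = 16;        // single-precision SIMD lanes per core
        const double clockHz = 2.0e9;     // assumed 2 GHz clock
        const double laneOps = cores * lanes * clockHz;   // ~1.024e12 lane ops per second
        const double flops   = laneOps * 2.0;             // multiply-add counted as 2 FLOPs
        std::printf("%.3e lane ops/s, %.3e FLOPS\n", laneOps, flops);
        return 0;
    }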


By freaqie on 9/24/2009 4:50:17 PM , Rating: 2
I mean 64 cores in total...
so 64 cores in total, in a Core 2 Quad based system.


By jconan on 9/26/2009 9:30:53 PM , Rating: 2
Does NVIDIA's implementation of raytracing (http://www.nvidia.com/object/siggraph_2009.html) use the CPU, or is it heavily GPU-based in comparison to Intel's?


By conquistadorst on 9/24/2009 4:52:57 PM , Rating: 3
quote:
True enough about the framerate. I don't think that the Larrabee "demo" was moving; they just showed pre-rendered still images of what it could do if it was functioning. Anand's article states the following:


I think you're missing the point of what ray tracing is. The image wasn't pre-rendered; it was rendered using ray tracing. You make it sound like it's simply taking an image and tossing it up on the screen. No, it functions by taking objects, plotting them pixel by pixel, and determining the result by "tracing" the path of light through an image plane. It functions, but as other posters have already pointed out, any processor out there today can accomplish this very task.


By therealnickdanger on 9/24/2009 10:24:58 PM , Rating: 3
No, I know what ray tracing is, but some of the articles and comments I've read so far are reporting as if this was a real-time 3D demo, as in "motion", when it was simply a couple of selected frames. That's the only discrepancy I was commenting on.


By Veerappan on 9/25/2009 3:02:49 PM , Rating: 2
There are actual videos of this demo out on the net. It was being rendered on the fly, and was not just a selection of pre-rendered frames. The comment about it not yet running at full speed was a comment on the frame rate, which was probably below 30 fps.


By StevoLincolnite on 9/24/2009 10:04:23 AM , Rating: 3
It's interesting really. Intel pretty much refused to add TnL hardware to any of its IGPs, instead having the CPU render it; then, even with IGPs like the X3100, Intel has a system where the drivers will choose to render TnL in software or hardware depending on the game and which mode will give better performance.

To this day, Intel still has never released a GPU with dedicated TnL hardware; instead those operations are performed on the vertex shaders these days.

I guess they were looking at it from the perspective that the more reliant the hardware is on the CPU, the more demand there will be for faster CPUs to help render the 3D application, which in turn sells more processors and earns them more money. That is just my theory on it all, take it or leave it. :)


By Fireshade on 9/25/2009 11:29:27 AM , Rating: 3
I believe that has always been Intel's strategy for everything they bring out, even standards.
E.g., Intel is the architect of the USB standard, and guess what: USB relies on CPU processing. There are many more of these little examples.


By LRonaldHubbs on 9/24/2009 9:17:55 PM , Rating: 4
quote:
Ray-tracing requires huge computing power.

Up to some point, yes. Required computational power for ray tracing grows logarithmically with scene complexity, while it grows linearly for traditional rendering. Thus at some point ray tracing will become the more economical option and will finally see widespread adoption. The fact that a large company like Intel is investing in this technology means we are finally nearing that point.
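To make that scaling argument concrete, here is a toy comparison (the triangle counts and the assumed bounding-volume hierarchy are for illustration only, not measurements): a rasterizer has to set up every one of the N triangles submitted, while a ray tracer that walks a BVH does roughly log2(N) traversal steps per ray.

    // Toy comparison of the two growth rates (illustrative assumptions, not benchmarks).
    #include <cmath>
    #include <cstdio>

    int main() {
        const double triangleCounts[] = {1e4, 1e6, 1e8};
        for (double n : triangleCounts) {
            double rasterWork = n;              // ~O(N): every triangle gets set up
            double rayWork    = std::log2(n);   // ~O(log N): BVH traversal steps per ray
            std::printf("N = %.0e  raster ~ %.0e units  ray ~ %.1f steps/ray\n",
                        n, rasterWork, rayWork);
        }
        return 0;
    }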


By FITCamaro on 9/24/2009 9:54:51 AM , Rating: 3
It'll be interesting to see how Larrabee truly performs. If it's a serious contender, it'll certainly shake up the market. If nothing else, maybe it'll become the de facto standard for Intel's integrated graphics. I mean, they can't get any worse, can they?


By StevoLincolnite on 9/24/2009 10:06:04 AM , Rating: 2
quote:
I mean, they can't get any worse, can they?


You better touch wood or else! :P


By ClownPuncher on 9/24/2009 5:29:03 PM , Rating: 2
Too funny; I assume that is the Australian version of "knock on wood"?


By overzealot on 9/25/2009 9:08:03 AM , Rating: 2
Touching and knocking on wood are quite synonymous; the idea came before English was a common tongue, multiple translations abound, and only Americans would presume that their way was the norm.


By Headfoot on 9/28/2009 6:50:02 PM , Rating: 2
Thanks for the baseless prejudice.

Care to share any more?


By Spoelie on 9/24/2009 9:56:04 AM , Rating: 4
* Indeed, ray tracing is a brute-force approach; it is not a replacement for rasterization. For a lot of the work involved in building a 3D scene, rasterization will always be the preferred, faster, and more efficient way. The future is not for ray tracing to replace rasterization completely, but for ray tracing to complement it (e.g. render the reflections/shadowing after rasterization).
* Efficiency as used in the article is misleading; it is solely meant here as #LOC/effect, not #calculations/effect, #time/effect, or #joules of power consumed/effect.
* How will games be easier to program for Larrabee? They all use OpenGL/DirectX APIs. This will not change with Larrabee; the difference should only be visible in GPGPU applications.


By sefsefsefsef on 9/24/2009 11:55:47 AM , Rating: 3
Rasterization is faster because it's less accurate. Rasterization fudges the lighting model. Raytracing does not. For this reason, raytracing will NEVER go away, and some people will always think its takeover of rasterization is imminent.

Efficiency in code size is not insignificant, as you seem to be suggesting. The smaller the code, the more space there is for data, generally speaking. Having said that, raytracing doesn't benefit too much from caching. Most of the data gets used only once before it would be replaced under a normal cache line replacement policy in a reasonably sized cache, and writes should never be cached. Raytracing works best when data is by default not cached and only some blocks are selectively cached (like root nodes of data structures that are accessed by all threads). I cannot speak to cache utilization of rasterization, because I have less experience in that field.


By Spoelie on 9/24/2009 12:31:26 PM , Rating: 2
Carmack on raytracing:
http://www.pcper.com/article.php?aid=532

Conclusion: not in its current form; rasterization is a better solution, even if it's less exact.


By William Gaatjes on 9/24/2009 1:45:06 PM , Rating: 2
But even Carmack confirms that ray tracing is a good thing, and that he is planning to use ray tracing together with rasterization. I agree that rasterization may be preferred for a few generations of GPUs to come.

However, I think Intel has some tricks up their sleeves.

One is to use raytracing for collision detection as well, and maybe they found a way to do realistic physics with ray tracing too. After all, they did also acquire Havok and Neoptica.

http://www.theinquirer.net/inquirer/news/1001006/i...

http://blogs.intel.com/research/2007/10/real_time_...

http://www.beyond3d.com/content/news/534

My theory is that Intel found a lot more useful solutions to old, seemingly unrelated problems with raytracing. I would not be surprised if Intel came up with not just a demo but a complete game engine, a game engine that uses the accuracy of raytracing for many tasks.


By geddarkstorm on 9/24/2009 2:46:50 PM , Rating: 2
Caustic has some things to show you then.

http://pcper.com/comments.php?nid=7801

If they succeed... well, technically they already have: their system is already far faster than any raytracer to date by a long shot. But if they can get raytracing up to the same speed point as rasterization, the latter will be eclipsed. I'm sure it'll still have its uses though.


By kattanna on 9/24/2009 11:30:24 AM , Rating: 2
The big problem it will face is coding for it.

If it can't do its special magic on its own with standard DX coding from the game, it's not going to fly. No game dev is going to want to write multiple graphics engines into their game to support this thing.


By MrBlastman on 9/24/2009 1:39:02 PM , Rating: 2
This is why more games should use OpenGL instead of DirectX. It is an open standard that can be ported to multiple platforms and operating systems. Freespace 2's Source Code Project uses OpenGL and the results are stunning.


By FITCamaro on 9/24/2009 3:30:21 PM , Rating: 4
OpenGL is the same way though.

OpenGL is operating system independent. The point of graphics APIs is so you don't care about the GPU. The driver handles the rest.


By overzealot on 9/25/2009 9:03:02 AM , Rating: 2
To quote Carmack again: graphics API code contributes no more than 9% of all game code.
If the demand is there, it's insignificant (compared to the power of the force).


Westmere architecture?
By hessenpepper on 9/24/2009 9:28:21 AM , Rating: 3
The articles I've read describe it as based on the Pentium architecture.

http://www.anandtech.com/cpuchipsets/intel/showdoc...

Westmere manufacturing process maybe.




RE: Westmere architecture?
By ksherman on 9/24/2009 9:39:26 AM , Rating: 2
Was thinking the same thing...

More info! I'm pretty excited to see what Intel pulls off with Larrabee.


RE: Westmere architecture?
By nafhan on 9/24/2009 9:44:39 AM , Rating: 2
Yeah... if it was based on Westmere, it would probably have less than 32 cores.


RE: Westmere architecture?
By asdf23fvas324rf on 9/24/2009 9:54:13 AM , Rating: 3
You're misunderstanding here, though this is probably because of the way the article is written.

Larrabee will be built on a 32nm process LIKE Westmere. However, the chip itself is composed of many modified Pentium Pro cores built on said 32nm process. Honestly there really wasn't much reason to mention Westmere in the sentence; they could have easily said it would be built on 32nm and been fine.


RE: Westmere architecture?
By Natfly on 9/24/2009 12:56:55 PM , Rating: 2
Well, the article is written wrong, or misinterprets the original article from InformationWeek.

quote:
"Built on Intel's Westmere architecture, a 32 nm die-shrink of Nehalem"


It's not built on Westmere as far as I know. From the InformationWeek source article:

quote:
"Larrabee was used in conjunction with Gulftown, codename for a six-core processor scheduled for release next year. The chip is based on Westmere, codename for a 32-nanometer variant of Intel's current 45-nm Nehalem microarchitecture."


They are talking about Gulftown being based on Westmere, not Larrabee.


RE: Westmere architecture?
By johnsonx on 9/24/2009 1:06:33 PM , Rating: 5
People, please, it's Jason Mick. You can't expect him to understand or properly quote his sources, much less write coherently.


question
By yacoub on 9/24/2009 9:21:09 AM , Rating: 5
quote:
was able to render a ray-traced scene from the upcoming id Software game , Enemy Territory: Quake Wars

I don't think you want the word "upcoming". :)




RE: question
By HaB1971 on 9/24/2009 10:54:08 AM , Rating: 4
quote:
was able to render a ray-traced scene from the utterly abysmal id Software game, Enemy Territory: Quake Wars


There, fixed it for you.


RE: question
By freaqie on 9/24/2009 2:04:41 PM , Rating: 2
quote:
was able to render a ray-traced scene from the utterly abysmal SPLASH DAMAGE
game, Enemy Territory: Quake Wars

There, fixed it for you...


RE: question
By ipay on 9/25/2009 6:47:40 AM , Rating: 2
ET:QW uses the id Tech 4 engine, which was originally released in 2004 with Doom 3. So it's hardly the ideal platform to use for showing off Larrabee, especially if it can only render at 10fps!

On the other hand, it's also the only relatively modern game engine that's free, and I'm sure Intel aren't going to try to rewrite CryEngine 2 to use ray tracing...


RE: question
By overzealot on 9/25/2009 9:16:26 AM , Rating: 2
The engine doesn't make the game.
A good game would overcome all engine trouble.
I enjoyed Prey, and even Quake 4. They never looked that good, but the storytelling was at least as good as the Halo series.
Only talking about single-player here.
I've played plenty of better MP games.

I spent more time playing the QW beta than the retail game, but I still reckon it was the best map they made.


compared to GTX 285?
By ratbert1 on 9/24/2009 10:43:04 AM , Rating: 2
THG's Loyd Case said
"The demo was an almost real-time ray tracing demo based on Enemy Territory: Quake Wars. The demo has been shown in the past, although this particular iteration was (supposedly) running on actual Larrabee hardware. It looked to be running at maybe ten frames per second."
Not exactly GTX285 territory.




RE: compared to GTX 285?
By sefsefsefsef on 9/24/2009 12:02:29 PM , Rating: 3
Have you ever seen a GTX 285 trying to do raytracing? I have (well, it was a 280, not a 285), and it's not pretty. Rasterization is far faster than raytracing because rasterization makes far more approximations. The GTX 285 is built to do lots of rasterization in parallel, and it fails at raytracing, because two threads [read: pixels] that start out running nearly parallel to each other (and therefore touching the same data structures and running the same operations on them, which gives great efficiency) tend to diverge eventually, so eventually the threads in a block will be touching wildly different data (which is kind of a big deal) and running different code on that data (which is a HUGE deal in nVidia CUDA land).

Intel Larrabee is equally suited to both.


RE: compared to GTX 285?
By William Gaatjes on 9/24/2009 1:59:24 PM , Rating: 2
quote:
Third, all of its cores have cache coherency.


If this statement from the text is true, then this is, I think, one of the reasons Larrabee has an advantage over the GTX 285.


RE: compared to GTX 285?
By encia on 9/27/2009 10:04:18 PM , Rating: 2
http://www.tgdaily.com/content/view/38145/135/
Refer to AMD's Cinema 2.0.

"Watch out, Larrabee: Radeon 4800 supports a 100% ray-traced pipeline "


Everything done in Software??
By HrilL on 9/24/2009 11:47:21 AM , Rating: 5
Looking at Intel's track record with drivers, I wouldn't touch one of these. If everything is going to be done in software on these cards, then driver support is going to be key, and honestly I can't trust Intel to make decent drivers. They've never been able to, so what makes people think this will change now? Intel has a long way to go before they'll be able to compete in the high-end graphics market.




RE: Everything done in Software??
By Mojo the Monkey on 9/24/2009 1:11:25 PM , Rating: 2
Agreed. I think we all (tech-heads) have been unwillingly stuck dealing with an onboard Intel graphics chip at one time or another. The drivers are horrible. I mean, I wish they at least recognized that the chip exists for 2D purposes (mainly) and allowed more options with color optimization (saturation, anyone?) and multi-screen connections.


RE: Everything done in Software??
By 9nails on 9/24/2009 1:43:55 PM , Rating: 2
I remember when SuSE Linux came out with a cool new 3D interface; it worked really well on Intel integrated GPUs because the open source driver community wrote the drivers. Nvidia's and ATI's drivers were closed source and didn't work.

For Windows machines, the Intel drivers are junk. I know that I've had to upgrade Intel graphics drivers on several computers just to get buttons to show up in word processing or email clients. I can't recall ever having to do that for an ATI or Nvidia graphics chip.


Hmm...
By StevoLincolnite on 9/24/2009 9:59:20 AM , Rating: 5
quote:
In the 1990s a plethora of graphics chip makers rose and fell. Eventually only two were left standing -- NVIDIA and ATI.


We have seen Intel compete with ATI and nVidia in the discrete graphics card market before, with the Intel i740 AGP graphics card. However, it had poor driver support, much worse than their current drivers, which says something! And generally average performance, despite using the then-new AGP bus with its large bandwidth advantage over PCI.

We have lost a lot of graphics chip manufacturers, though some are still around. Matrox shifted its focus away from the consumer market after the pretty dismal release of the Matrox Parhelia, which got pretty beaten down by the GeForce 4 Ti and Radeon 9xxx series.

S3, which like Intel went on to become a series of IGPs (for VIA), also makes discrete GPUs under the banner of "Chrome".

SiS still has pretty dismal IGPs as well, and PowerVR/STMicroelectronics pretty much fell off the map after their Kyro cards, which used a tile-based renderer (also licensed by Intel for their IGPs), but they do compete well in the ultra-portable/low-power markets.

And the venerable 3dfx was gobbled up by nVidia; the "3dfx" team ended up bringing us the pretty dismal GeForce FX, and gave nVidia the rights to use the acronym "SLI" as a method of combining 2 graphics cards to help render a scene. (ATI had this technology back then as well, with the Rage Fury Maxx combining 2 GPUs on the same card.)

I officially feel old, as I've seen the rise and fall of many graphics chip companies/products and seen games advance over the years. Intel is just adding another chapter; whether it's successful or not remains to be seen, but one thing is for sure, they have a lot of hard work ahead with their drivers if the i740/GMA series is anything to go by.

Back in the day (think pre-TnL) there were dozens of GPU manufacturers, all competing for a piece of the exploding pie. Most tried for the performance crown (3dfx), others tried for the best low-cost solutions (S3), some for maximum API compatibility and image quality (nVidia and ATI), and some even focused on the best 2D image quality and 2D performance (Matrox). Yet all that remained after the dust settled was ATI and nVidia. Back then there was a massive amount of competition, and if you went with one manufacturer you might have been locked out of certain game releases; for instance, if you bought ATI or nVidia, you were locked out of 3dfx GLIDE-only games, or the reverse was true, as 3dfx didn't support Direct3D on a level like most other companies.

How far we have come... Back then a graphics card release was something almost seemingly unexpected, as it felt like every week there was a new GPU released; now we have a pretty solid tick-tock strategy employed by ATI and nVidia where you can almost guess the month a card will be released in well in advance.




RE: Hmm...
By MrRuckus on 9/24/2009 7:35:10 PM , Rating: 2
quote:
Back in the day (think pre-TnL) there were dozens of GPU manufacturers, all competing for a piece of the exploding pie. Most tried for the performance crown (3dfx), others tried for the best low-cost solutions (S3), some for maximum API compatibility and image quality (nVidia and ATI), and some even focused on the best 2D image quality and 2D performance (Matrox). Yet all that remained after the dust settled was ATI and nVidia. Back then there was a massive amount of competition, and if you went with one manufacturer you might have been locked out of certain game releases; for instance, if you bought ATI or nVidia, you were locked out of 3dfx GLIDE-only games, or the reverse was true, as 3dfx didn't support Direct3D on a level like most other companies.


Back in those days, Nvidia was seriously junk. Think back to the Diamond Viper V330 with the Riva 128 chipset. While it was raved about as being very fast, that was because Nvidia cut corners on its image quality. Ever try playing a game with that chipset? Wheels were blocks. I bought that card, played 5 minutes of Interstate '76, and took it back. The TNT was the first true chip to come from Nvidia in my eyes (I picked up a TNT-based Diamond Viper V550). Before that they were garbage IMO.

You also forgot Rendition. They were a contender in the lower end of the market. I bought a Diamond Stealth S220, which got me Monster3D-like performance on a single 2D/3D card with the V2100 chipset. I believe they also had their own API that went nowhere fast; Redline, I believe it was called?


By Pneumothorax on 9/24/2009 10:04:06 AM , Rating: 3
I dunno, but Intel's current IGP offering still has serious bugs that were never completely addressed. I really hope Intel invested in a driver team for Larrabee.




By Veerappan on 9/25/2009 3:44:25 PM , Rating: 2
I remember reading a while back that Intel had hired a separate driver team for Larrabee. It's not the same people that currently work on the drivers for Intel's integrated graphics.


By Targon on 9/25/2009 7:13:53 AM , Rating: 2
People buy video cards to handle graphics, and it is a VERY rare person who cares more about GPGPU stuff than about graphics for games/applications.

So, DirectX, or OpenGL is the name of the game. Extra features mean NOTHING if the product can't do well what people want it to do.

So, can it keep up with even the low end AMD or NVIDIA graphics cards when it comes to current applications? If the answer is no, then it won't sell well in the consumer space.

At this point, even the Intel integrated stuff ONLY sells because Intel provides a discount on processors to OEMs if they buy the two together. This makes for cheap systems, but it ALWAYS means that there is a trade-off for low cost systems.

So, you have systems with an Intel processor with inferior graphics capability, or you have an AMD processor with acceptable integrated graphics. You have to pay a bit more to get the Intel processor with NVIDIA graphics.

What the laptop industry REALLY needs is to get behind a STANDARD laptop PCI-Express slot, which would make it so you can pick and choose what graphics you get in a laptop, rather than being stuck with something supplied by the chipset manufacturer. Until that happens, low cost Intel systems will have inferior graphics.

On a positive note, since your average consumer would find the AMD processors to be fast enough for their needs, they won't find that there IS a trade-off to meet the demands. If they want graphics, they CAN get an AMD processor. If CPU power is more important, they can go with an Intel processor in a low cost machine. But if Intel can't make a graphics product that can handle DirectX 11 at least as well as the integrated AMD or NVIDIA products at launch, no one will want the thing.

And if x86 processing is what you want, we will be seeing 6-core processors next year, and you won't need special programming to make use of the extra cores.




By Penti on 9/25/2009 8:42:10 AM , Rating: 2
To be fair, everything is software these days; fixed function is long gone. There is a completely different team on Larrabee, including people like Tom Forsyth, who has previously written a DX software renderer. GPGPU is also important today (legacy GLSL, the coming OpenCL, CAL/CUDA), and it's becoming more important every day even in the consumer space. If it means they can do something faster on Intel's chip, that's great. It's a very flexible solution and I think it will have some performance. It's not their new IGP though.


Intel will hide many nvidia and ATI GPU's
By maddoctor on 9/24/2009 2:50:56 PM , Rating: 1
Intel will come to discreete GPU market with full force of team. I think they will pay every retailer to stock Intel Larrabee and make ATI and NVIDIA to be hiding. I believe this marketing strategy will works and decreased competitors marketshare. Intel is the king so they can forced its own rule to everyone in this world.




By ipay on 9/25/2009 12:23:28 PM , Rating: 2
Protip: learn how to type English.


By Codeman03xx on 9/26/2009 3:53:37 AM , Rating: 1
Good thing for graphics all around. Could you imagine running a video card with a Larrabee? Nvidia and AMD are really gonna have to compete with Intel to get a product out that will perform far beyond Intel's Larrabee, or we may see the end of GPUs altogether. Personally I'm excited; I hope we see video cards doubling the performance of Larrabee so that we can get true real-life games up and running. Ha, Australia bans Left 4 Dead 2; wait till they see some of the FPSs with some of the new tech coming out, it will be sick.




By tviceman on 9/26/2009 12:55:23 PM , Rating: 2
Your expectations of this card are way, way too high. Intel has never produced an adequate graphics solution, and thus far the reaction to the latest Larrabee demo, if you do some research, is that it was underwhelming.

Larrabee will not be challenging NVIDIA or AMD in the enthusiast and high-end segments, and likely not the mid-range segment either.


Not that fast?
By nafhan on 9/24/2009 9:49:52 AM , Rating: 2
When chip companies tout all the extra features their new silicon brings, it might mean they're trying to distract us from the fact that it's really not that fast. If the 5870 brought DX11 but sucked at DX9 and DX10 games, it wouldn't be sold out right now.




Head hurts.....
By BenSkywalker on 9/24/2009 9:51:26 PM , Rating: 2
quote:
First, Larrabee supports x86 instructions.  Second, it uses tile-based rendering to accomplish in software tasks like z-buffering, clipping, and blending that its competitors do in hardware (Microsoft's Xbox 360 works this way too).


This is painful to read, seriously. Neither Larrabee nor the 360 does what is normally called tile-based rendering; both of them access memory in tiles, but every rasterizer since the Voodoo1 has done that. The 360 uses a straight-up typical rasterizer with some on-die eDRAM. Larrabee doesn't render any particular way by default; outside of the texture sampling hardware it is entirely software driven. The type of rendering it is likely to use, the sort of rendering technique Abrash documented a while back, has some common elements with what is normally called a TBR, but in actual implementation it is worlds different. What we consider a TBR uses a ray to determine visibility, which then goes on to be rendered; Larrabee uses the ray to render. It may sound like splitting hairs, but if you look into it, it is a huge difference: "TBR"s are rasterizers with a sophisticated visibility front end; a ray tracer is a ray tracer.




"Larrabee easier to program"?
By Integral9 on 9/29/2009 7:55:02 AM , Rating: 2
Umm... aren't most games written in DirectX? I thought there were basically two choices, DirectX and OpenGL, with a little bit of Cg floating around.

Also, aside from the obvious driver mountain Intel is going to have to climb, they've got another huge mountain to climb with the brand-loyal gamers out there. Not to mention the game companies... I know some people who would rather sell their index finger than trade their graphics card for the other brand's offering. And that's between two known and, honestly, very good graphics card companies.

Back to the driver issues. Intel's got quite a scapegoat there. I can see it now:
Benchmark:
nVidia: 87 Gadgillion points
ATI: 87 Badgillion points
Intel: 42 points

Intel's response: "the test was done using bad drivers. Here try these."
nVidia & ATI: "They cheated with their drivers!"




"If you can find a PS3 anywhere in North America that's been on shelves for more than five minutes, I'll give you 1,200 bucks for it." -- SCEA President Jack Tretton
