
Intel looks to get into the graphics market in 2009, and in a big way

Next month heralds the 2008 Spring Intel Developer Forum in Shanghai, China.  Pre-show briefings opened with a quick mention of the status of Larrabee, Intel's upcoming graphics core.

Larrabee differs significantly from AMD's Radeon and NVIDIA's GeForce processors.  For starters, Larrabee is based on the same x86 instruction set used by Intel's CPUs.  Intel vice president Steve Smith emphasized that Larrabee is not just a GPU, but a multi-core die capable of any stream processing task.

Smith would not detail exactly how many cores reside on Larrabee, though early schematics from 2006 detail designs with 16 cores.  Each in-order core is capable of operating in excess of 2 GHz.

Larrabee can apparently scale to several thousand cores, sharing much of the same research as Intel's Tera-scale project.  In addition to the x86 approach, the company revealed it will soon introduce another SSE-like extension set, dubbed Advanced Vector Extensions.  These extensions will likely be what separates Larrabee's x86 instruction set from the x86 instructions featured on Core 2 Duo and Phenom.  Smith said Larrabee will support OpenGL, DirectX and ray-tracing instructions.
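The idea behind SSE-style vector extensions is that one instruction applies the same operation to several data elements at once. A toy sketch in plain Python (a model of the concept only, not Intel's actual instruction semantics; the 8-wide single-precision lane count is an assumption for illustration):

```python
# Scalar version: one add per element, so N elements cost N operations.
def scalar_add(a, b):
    return [x + y for x, y in zip(a, b)]

# SIMD-style version: conceptually one wide operation per fixed-width chunk,
# the way an 8-lane vector unit would process packed single-precision floats.
def simd_style_add(a, b, lanes=8):
    out = []
    for i in range(0, len(a), lanes):  # each iteration models one "instruction"
        out.extend(x + y for x, y in zip(a[i:i + lanes], b[i:i + lanes]))
    return out
```

For 16 elements, the scalar loop does 16 adds' worth of instruction issue while the 8-lane version issues only two wide operations, which is the throughput argument behind widening the vector extensions.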

However, to the disappointment of many, Larrabee will not find a home on 45nm Nehalem processors, scheduled for an early 2009 launch. Smith said Larrabee samples will be ready in Q4 2008, with shipments in 2009, though the initial launch appears to be only for discrete computing.

A fully integrated CPU design will likely not arrive until the next architecture redesign after Nehalem, codenamed Sandy Bridge (previously Gesher).

An Intel engineer tells DailyTech, "You have to walk before you can run." Walking, at Intel, means discretely connecting a much simpler GPU into the processor first.

Intel’s hybrid CPU and GPU chips are set to be released in two flavors, both of which will be based on the Nehalem CPU architecture. The first version, dubbed Havendale, will be a desktop chip, while the second version, dubbed Auburndale, will be a notebook chip.

Auburndale and Havendale will have two Nehalem cores paired with a graphics subsystem. The twin cores will share 4MB of L2 cache and feature an integrated dual-channel memory controller that supports memory configurations up to DDR3-1333.
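The memory controller described above can be given a back-of-envelope peak figure using standard DDR arithmetic (64-bit, i.e. 8-byte, channel; transfers per second times bytes per transfer times channels). This is theoretical peak only; sustained bandwidth is always lower:

```python
def peak_bandwidth_gb_s(mt_per_s, channels=2, bus_bytes=8):
    """Theoretical peak: megatransfers/sec x bytes per transfer x channels."""
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

# Dual-channel DDR3-1333, as specified for Havendale/Auburndale.
print(round(peak_bandwidth_gb_s(1333), 1))  # 21.3 (GB/s, theoretical)
```

A shared graphics subsystem competes with the CPU cores for that same ~21 GB/s, which is one reason an integrated part cannot chase discrete cards with dedicated memory.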

The graphics subsystem will be initially derived from Intel’s G45 integrated graphics. This indicates that neither Auburndale nor Havendale will be for heavy graphics processing, but will be more of an integrated graphics replacement.

According to Intel roadmaps, the new processors are expected to enter the market in the first half of 2009. This beats out the expected time of arrival of AMD’s Fusion processors, which are planned to debut in the second half of 2009.

In the meantime, Smith promises the discrete Larrabee offerings will compete head-to-head with Radeon and GeForce offerings when the part is finally released.


By Orbs on 3/17/2008 5:15:30 PM , Rating: 5
As a discrete solution, 16 in-order cores each running around 2.0 GHz sounds promising, if compilers mitigate the limitations of in-order execution. If this scales well and is price-efficient, it could even find a home in next-gen consoles. AMD and nVidia should be concerned, at least in the discrete graphics market.

However, with most major devs begging Intel for decent integrated performance, and with other Intel IGP offerings supposedly promising improvements in the future, not offering full DX9 or DX10 support in late 2008/early 2009 seems like it will really provide an odd experience for customers who choose that solution. Will Windows Aero run? What games will work at all let alone work decently well (even at low/medium settings)?

Add to that the recent Hybrid SLI and CrossFire support from both AMD and nVidia (and the fact that AMD recently upped the IGP bar) and Intel's CPU + GPU chip seems like a strange sidestep compared to the rest of the market. Seeing as that's where most of the graphics money is and since raising the bar for the low-end performance is critical to gaming on the PC (as one example), I just don't understand that decision.

By KristopherKubicki on 3/17/2008 5:18:46 PM , Rating: 5
I think hybrid CPUs for Intel's low end make the most sense for a Nehalem chip. With current Intel chipsets, the GPU is integrated into the NB because it's a low-cost solution. With Nehalem, there's no reason for the NB anymore, so what will Intel offer to low-end customers? Why sell two chips when one does just fine?

Especially for the mobile market, there's going to be a big push for this, even though the GPU is going to fall down when positioned against a discrete mobile GPU.

By Orbs on 3/17/2008 5:21:11 PM , Rating: 5
I agree completely that it makes sense to integrate into the CPU. I also agree that the integrated market isn't really competing against the discrete market.

To my knowledge however, all Intel IGPs currently offer at least full DX9 support. This one won't. Why would someone 'upgrade' to a Nehalem chip if it means fewer features (like losing Aero on Vista, or not being able to play a game they could play before)?

The end-user doesn't care if the graphics are on a Northbridge or the CPU.

By KristopherKubicki on 3/17/2008 5:22:40 PM , Rating: 3
It will support everything G45 does already -- and that includes DX9 and DX10 ... but not all of the instructions like you'd get on a GeForce or Radeon right now

By Orbs on 3/17/2008 5:25:05 PM , Rating: 2
So when the article says:

In fact, both graphics cores leave out support for key features of DirectX 9 and DirectX 10.

is it only referring to optional features?

By KristopherKubicki on 3/17/2008 5:32:40 PM , Rating: 2
The roadmap doesn't really say. I'd assume it's the optional features of DX9, but probably most of DX10.

By ET on 3/17/2008 5:59:14 PM , Rating: 5
Makes no sense. DX10 has few optional features, you can't just do "most of DX10". And if you're doing DX10 you should be able to do all of DX9 including the optional features.

By KristopherKubicki on 3/18/2008 9:28:35 AM , Rating: 2
I removed that statement from the article. Some of my notes are inconsistent on that matter.

By Samus on 3/17/2008 5:57:42 PM , Rating: 2
Time for nVidia to get into the CPU biz.

By imperator3733 on 3/17/2008 9:02:22 PM , Rating: 2
That's what I've been thinking for a while now.

By MrDiSante on 3/17/2008 10:55:29 PM , Rating: 3
Er... unless they're planning to outsource to TSMC (which will be problematic when you're going against a giant like Intel, with its ridiculous advantage in manufacturing process), they might be in over their heads. They have no experience whatsoever with building fabs or anything like that.

By rudy on 3/18/2008 12:51:27 AM , Rating: 2
Get in bed with IBM

By JKflipflop98 on 3/18/2008 2:47:34 AM , Rating: 2
It's working for AMD! ;)

By 16nm on 3/18/2008 2:52:40 PM , Rating: 2
It's working for AMD! ;)

That remains to be seen. But let's hope AMD's 45nm will be on par with or better than Intel's product.

By JKflipflop98 on 3/21/2008 11:40:45 AM , Rating: 2
Actually, I was pointing out the irony that AMD is already in bed with IBM and they still suck balls.

By amx on 3/18/2008 3:55:02 AM , Rating: 2
It is not as easy as it seems.
It's pretty much impossible for anyone else to start making x86-based CPUs; licensing x86 from Intel/AMD would run into the hundreds of millions, if not billions.

Why would Intel give x86 license to Nvidia?

By Calin on 3/18/2008 7:44:30 AM , Rating: 2
Also, a non-x86 architecture probably won't have any more success than Itanium (and Transmeta's VLIW and code-morphing technology also saw little success).

By sabrewulf on 3/19/2008 9:46:55 AM , Rating: 2
I guess you haven't heard that nVidia (Nvidia, nvidia, however they capitalize it now) is trying to buy Via... Via already has an x86 license and a line of CPUs. This doesn't necessarily mean anything of course, I'm just pointing it out as interesting.

By eye smite on 3/17/2008 8:55:06 PM , Rating: 5
This is really just Intel's way of not repeating the three-year screwup they made against AMD over the Athlon 64, by not leaving any segment of the market open for AMD to dominate. It will be interesting to see what happens; at least Intel is not being underhanded like they were before.

By winterspan on 3/17/2008 9:58:48 PM , Rating: 2
Yeah, it's too bad AMD has had so much trouble. Actually, this could present a way for AMD to get back into the game.
Think of a cheap AMD platform with a dual-core Phenom, or even an Athlon derivative, that also uses a lower-end ATI Radeon part. Even with a lower-end GPU, AMD could make it 5x or 10x the performance of an integrated Intel POS.

This could even work for a laptop platform. Use their cheap mid-level K8-derivative laptop CPU, and have a fast lower-end integrated GPU that blows the Intel graphics to bits. Then offer it as a cheaper Intel competitor that can actually play games decently.

By vignyan on 3/18/2008 12:32:59 AM , Rating: 2
Hmm... I like your optimism, but considering the performance increase in G45 and G47, I don't think your dream of 5x to 10x performance for AMD vs. Intel is possible. And talking cheap: define the segment that wants really cheap computers for gaming. I would say revise that thought/definition. I know it exists, but it's not of much interest to either of the companies, or nvidia...

And all that you suggested (cheap CPU, cheap gfx, low power) comes down to Intel Atom. You want AMD to do the same? I think it's difficult, considering the resource crunch at AMD and the requirement of highly skilled labour for such tasks (these projects need to be done with a headcount of 50 vs. the 500 for major gfx projects) :D

Isn't the first time...
By tfranzese on 3/17/2008 7:24:33 PM , Rating: 3
that Intel's gotten into discrete graphics. Their last attempt, the Intel 740, was a flop: poor performance against the quick pace of the industry, along with poor drivers, doomed it.

In the meantime, Smith promises the discrete Larrabee offerings will compete head-to-head with Radeon and GeForce offerings when the part is finally released.

Right, maybe their low-end parts.

I'm not excited about this until I see them produce. Drivers have made and broken many manufacturers, and Intel continues to demonstrate they don't have what it takes with their poor driver support on current integrated GPUs.

RE: Isn't the first time...
By vignyan on 3/18/2008 12:46:26 AM , Rating: 2
Tell me if you are satisfied with Nvidia/ATI drivers! ;)
Driver problems exist for everyone; they crop up once in a while, and no more often than you have to reboot your Windows machine.

Jokes apart, Intel's first discrete gfx part was certainly a flop. But I have faith that this radical new design will have greater performance than the top dogs from Nvidia/ATI.

Simple calculations:
Assuming a non-out-of-order core with higher bandwidth and dedicated memory, running at 2 GHz, 60 GFLOPS is only fair. (It would be smaller than Silverthorne, with a smaller cache and no FSB.)

Nvidia's top dog, the 8800 GTX, has ~500 GFLOPS. Let's assume that increases to 900 GFLOPS by the time Intel kicks in...

Now for the division: 900/60 ~ 15 cores... add another core and you get 960 GFLOPS... :) there you go... we have a winner...

Now, I am being too optimistic about Intel.
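The commenter's arithmetic, spelled out (all figures are his assumptions, not Intel or NVIDIA specs):

```python
import math

per_core_gflops = 60   # assumed throughput of one 2 GHz in-order core
target_gflops = 900    # assumed competing GPU throughput by Larrabee's launch

cores_needed = math.ceil(target_gflops / per_core_gflops)  # 900 / 60 = 15
cores = cores_needed + 1                                   # "add another core"
total = cores * per_core_gflops

print(cores, total)  # 16 960
```

The arithmetic checks out as stated, though it ignores that graphics workloads rarely reach a core's peak FLOPS, so the real comparison would be far less flattering.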

RE: Isn't the first time...
By tfranzese on 3/18/2008 2:00:47 AM , Rating: 2
Sorry, but nVidia and ATi display drivers are worlds better than Intel's are. They are heavily tuned for performance and better support for the big reason people plunk down a lot of money: games. Intel has demonstrated deficiency in both when it comes to the GPU world.

Things always look great on paper (not that I'm even sold on this one).

And this GPU is positioned as a discrete solution from all signs I've seen so far (both what's mentioned in this article and the one on /.), so only having good 2D driver support isn't what I'm on about.

RE: Isn't the first time...
By amx on 3/18/2008 4:02:53 AM , Rating: 2
Why should Intel tune their graphics drivers for games if the part isn't really designed to get the ultimate performance out of each and every game? There is a reason you go out and buy Nvidia and AMD parts: you want the best hardware for your games, not to view pretty family photos. Actually, Intel does tune their drivers for the games that are popular on their parts, such as Warcraft. When Intel comes out with discrete or higher-performing parts in the graphics market, I am sure they will take a step further in tuning drivers.

RE: Isn't the first time...
By tfranzese on 3/18/2008 10:56:00 AM , Rating: 2
Sorry, but I consider the GPU rendering the game correctly to be their biggest problem, and I wasn't clear about that. They do tune, but not to nearly the same degree as their rivals. You're SOL if you have integrated graphics and the game you play isn't among the handful of mainstream games they make sure render correctly.

RE: Isn't the first time...
By tfranzese on 3/18/2008 2:11:14 AM , Rating: 2
Don't get me wrong though, I'm very interested in these. I'm just skeptical due to Intel's past in this sector, which should be obvious :)

RE: Isn't the first time...
By amx on 3/18/2008 3:58:30 AM , Rating: 1
And how is it poor? Hundreds of millions of PCs are running Intel integrated graphics; what driver issues do you speak of? Windows Update/Mac update takes care of Intel driver updates automatically. I think the quality of AMD/Nvidia drivers is much worse than that of Intel's.

RE: Isn't the first time...
By badconsumer on 3/18/2008 10:31:01 AM , Rating: 1
Ha ha ha, before the 740 they flopped with the i860 in the early '90s.
The i860 looked like a good idea in the heyday of programmable graphics co-processors, but it was too late. With low-cost accelerators like IBM's 8514/A, S3 and Matrox on the horizon, they missed the boat.

The 740 was essentially acquired technology from Real3D, and even with that head start they got nowhere.
To be fair, I'm sure they rolled some of that into the integrated products.

Either they don't have the skillset or the commitment to execute in this area, graphics is just not their core (no pun intended) competency.

RE: Isn't the first time...
By tfranzese on 3/18/2008 10:57:19 AM , Rating: 2
They did indeed, their current integrated GPUs are derived from the i740.

Not meant to be an Intel bash, and...
By SilthDraeth on 3/18/2008 1:00:45 AM , Rating: 2
I have no idea when initial R&D started on the project, etc. But I find it interesting, and sad, that AMD can issue a press release of Fusion plans quite a bit before the ATI purchase, then purchase ATI, have roadmaps all planned out, and appear to be in the lead... only to be beaten out by Intel.

And of course it will hurt AMD a lot not to be the first one out the door. I hope Intel's solution works great, and I recommend them to friends. But I really want AMD to kick Intel's ass.

RE: Not meant to be an Intel bash, and...
By Integral9 on 3/18/2008 8:35:30 AM , Rating: 2
It's not sad, it's just size. Intel has $55.6 billion in assets while AMD has a mere $11.5 billion (Google Finance). Not to mention thousands more employees and a few extra fabs. Throw in a marketing team bought with the price of a thousand souls and you have a company that can turn a profit even when their products suck (P4 era).

I don't think being six months late will hurt AMD, as long as it's only six months and AMD does what I think they are going to do: integrate a fully capable GPU into their CPU, not just some glorified back-end datacenter 8MB video card, which appears to be what Intel is going to release first.

By SilthDraeth on 3/18/2008 10:24:10 AM , Rating: 2
I well understand that Intel can do what it does because of its economic strength.

By imperator3733 on 3/17/2008 5:54:41 PM , Rating: 3
Likely, CPU integration of Larrabee will not come until the 32nm shrink of Nehalem, codenamed Gesher, if ever.

The 32nm shrink of Nehalem is Westmere.
The first version, dubbed Havendale, will be a desktop chip, while the second version, dubbed Auburntown, will be a notebook chip.

I thought I read that the name was Auburndale, not Auburntown.

RE: Errors
By KristopherKubicki on 3/17/2008 11:33:57 PM , Rating: 2
Sorry about that -- I was trying to describe the next architecture after Nehalem: Gesher. I cleaned that up.

What about software ?
By crystal clear on 3/18/2008 8:35:51 AM , Rating: 1
"I am a great believer in luck. The harder I work the more of it I seem to have."

That's exactly what Intel seems to believe in, and the momentum built up with Core 2 and 45nm technology is being carried forward.

But unfortunately we do not see software development proceeding at a rapid enough pace to match the hardware.

To exploit the full potential of these new processors and architectures, Intel should now buy up software companies to develop supporting software, like IBM does.

Intel Corporation's Multicore Architecture Briefing

RE: What about software ?
By crystal clear on 3/18/2008 11:44:04 AM , Rating: 1
Microsoft and Intel will unveil on Tuesday a plan to fund university research into new ways to program software for multi-core processors, Microsoft confirmed Monday.

The companies will unveil funding for research at the University of California at Berkeley to tackle the challenges of programming for processors that have more than one core and so can carry out more than one set of program instructions at a time, a scenario known as parallel computing.

Microsoft and Intel plan to hold a press conference on Tuesday at 10:00 a.m. PST to discuss the news, which was revealed in The Wall Street Journal and other published reports on Monday. A spokeswoman from Microsoft's public relations firm confirmed the WSJ report but said it was only part of what will be revealed Tuesday.

Those expected to unveil the research on the conference call Tuesday are Andrew Chien, director and vice president at Intel Research, and Tony Hey, a corporate vice president at Microsoft Research.

RE: What about software ?
By crystal clear on 3/19/2008 6:59:33 AM , Rating: 1
REDMOND, Wash., and SANTA CLARA, Calif. — March. 18, 2008 — Intel Corporation and Microsoft Corp. are partnering with academia to create two Universal Parallel Computing Research Centers (UPCRC), aimed at accelerating developments in mainstream parallel computing, for consumers and businesses in desktop and mobile computing. The new research centers will be located at the University of California, Berkeley (UC Berkeley), and the University of Illinois at Urbana-Champaign (UIUC). Microsoft and Intel have committed a combined $20 million to the Berkeley and UIUC research centers over the next five years. An additional $8 million will come from UIUC, and UC Berkeley has applied for $7 million in funds from a state-supported program to match industry grants. Research will focus on advancing parallel programming applications, architecture and operating systems software. This is the first joint industry and university research alliance of this magnitude in the United States focused on mainstream parallel computing.

Parallel computing brings together advanced software and processors that have multiple cores or engines, which when combined can handle multiple instructions and tasks simultaneously. Although Microsoft, Intel and many others deliver hardware and software that is capable of handling dual- and quad-core-based PCs today, in the coming years computers are likely to have even more processors inside them.
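The programming model this research targets can be illustrated minimally: many independent work items dispatched across a pool of workers. A sketch using Python's standard library (the thread pool here only demonstrates the pattern; real multi-core speedups require process-level or compiler-level parallelism of the kind the UPCRC centers were funded to study):

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # A trivially parallel task: each item is independent, with no shared state,
    # so work can be split across cores without synchronization.
    return x * x

# Dispatch eight independent tasks across four workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The hard research problem is not this "embarrassingly parallel" case but workloads whose items share data, which is why the funding targets programming models, architecture, and OS support together.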

RE: What about software ?
By crystal clear on 3/21/2008 11:28:34 AM , Rating: 2
The above three comments have now brought about a full-fledged article on DailyTech.

So the ignorant fool who votes me down regularly has proved himself to be just that.

Don't you have anything better to do!

People love IGP's for what ??
By bapcorp on 3/17/2008 10:25:51 PM , Rating: 2
I can't believe people are assuming an IGP will be anything more than entry-level graphics; even good (for their time) IGPs were not that great. I had an nForce2 board with onboard graphics when it was the top dog of IGPs and did not expect stellar performance from it. By the time this IGP is released, if it is, people will be playing Quake 5 and we will see 30 fps on medium settings at 1680x1050 with this IGP, while NVIDIA and AMD have their cards at 120 fps on extreme settings at 2560x1600. Wow, I just can't wait.
P.S. I work in a corporate environment and 95 percent of our PCs use IGPs, because that's all they are good for.

RE: People love IGP's for what ??
By tfranzese on 3/17/2008 11:56:23 PM , Rating: 2
Smith said Larrabee samples will be ready in Q4 2008, with shipments in 2009, though the initial launch appears to be only for discrete computing.

That's where the assumption came from. It would also make sense in the long run: since each of these GPUs can execute x86 instructions, they could find a place in clusters/supercomputers.

Deja Vu
By whirabomber on 3/18/2008 7:02:43 AM , Rating: 2
The hardware market has a tendency to repeat its own history over and over. First everything was on its own board, then everything was integrated onto the motherboard, then integration was bad so things moved back onto boards, and now the trend is integrating everything onto the CPU again.

I say "again" because the SSE and SSE2 hype was a step in the general graphics/CPU integration direction back in the P3 days. That step was really nice for folks who couldn't afford anything beyond a VESA-compliant video card.

Of course, any user looking to run Crysis at their max monitor settings probably won't get it from an integrated GPU solution until probably everyone else is running photorealistic games on their 3D displays.

RE: Deja Vu
By Shining Arcanine on 3/24/2008 4:15:44 PM , Rating: 2
The SSE and SSE2 instruction sets are not only useful for graphics. They are also useful for molecular dynamics simulations, which require many operations that can be done in parallel.
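The kind of data parallelism meant here: in a molecular dynamics step, the same arithmetic is applied to every particle pair independently, so each SIMD lane can process a different pair. A toy sketch (plain Python standing in for packed SSE operations; the squared-distance kernel is a simplified stand-in for a real force calculation):

```python
def pairwise_sq_distances(xs, ys):
    """Apply the same multiply-add to every particle pair: ideal SIMD work."""
    out = []
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            dx, dy = xs[i] - xs[j], ys[i] - ys[j]
            out.append(dx * dx + dy * dy)  # each vector lane could own one pair
    return out

print(pairwise_sq_distances([0.0, 3.0], [0.0, 4.0]))  # [25.0]
```

Because no pair's result depends on another's, four (SSE) or more pairs can be computed per instruction, which is exactly why these extensions pay off in simulation codes as well as graphics.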

By Jayayess1190 on 3/17/2008 10:35:21 PM , Rating: 2
I had read that the codename for the 45nm Penryn chip being used in Nehalem is Gilo, and that there would be a choice to get it with integrated graphics or without. What happened to the CPU-only version? I have heard no news on it.

By vignyan on 3/17/2008 10:51:19 PM , Rating: 2
Hey Kris and Gab,
Some errata for your speculation:
1. Gesher is not just a 32nm shrink of Nehalem but has a completely integrated gfx core, with approximately 5x the performance of G45, but still no Larrabee gfx in it.

2. Nehalem w/ igfx is a multi-die package with the gfx core connected closely to the CPU using QuickPath. Do you really think Intel will ignore the design risks and integrate the full GPU into a CPU? That's so not Intel. AMD can do that. But the latest reports confirm that AMD is also doing a multi-die package for the late-2009 Fusion. Lessons learned from Phenom. Learn to walk before you run! :)

3. The gfx going with Nehalem initially will be a G45-derived core. But that's not all: it has some very necessary tuning done for performance optimization (12 EUs, 1 EM box per EU, and of course higher frequency). Expect Nehalem to have almost 2.5x the performance of G45.

4. Please check with your sources again on the G45 capabilities. It fully supports DX10, including Shader Model 4.0.

Don't get me wrong. I was thoroughly confused when I first read this article. Please re-check your information. If you are right, you are right... how should I know so much about Intel! ;)

By OddTSi on 3/17/2008 11:50:57 PM , Rating: 2
Can you mount it on a C-130H gunship? ;-)

All kidding aside, how long until nVidia makes an offer to buy AMD? I was just saying to a friend of mine, if Larrabee catches on before AMD makes a turnaround, I definitely think AMD is done. Intel has a pretty big chunk of the GPU market thanks to their integrated cores, and if they manage to take even a just-noticeable chunk out of the discrete market as well then nVidia would be able to make a good argument that an nVidia-AMD merger wouldn't become an anti-trust problem.

Or they could always make a play for AMD right now and spin off/sell ATi.

Chip will sell well
By MGSsancho on 3/18/2008 1:58:19 AM , Rating: 2
Look at Sun's CoolThreads processors.

Johan did a nice benchmark of the T1000 server a while back. Looks like Intel is copying the idea.

By MagnumMan on 3/18/2008 11:23:13 AM , Rating: 2
I wonder if they will finally extend the number of general-purpose registers to 32. AMD should have done this when they introduced the x64 instruction set, instead of stopping at 16. Wow, what a concept: then you would have as many as a PowerPC chip made six years ago.

NO NO... Intel always bad with GPU.
By RAMDRPC on 4/11/2008 10:37:30 AM , Rating: 2
I have only ever seen S**T performance from Intel's GPUs.
Even with the hybrid GPU Intel is going to make, I believe it will still be S**T performance on the GPU side.

Intel is KING of CPU.
Nvidia is KING of GPU.
