
New GPUs will highlight Intel's 22 nm fourth-gen Core chips

As Intel Corp. (INTC) winds up toward the launch of Haswell, the 22 nm architecture refresh and designated CPU core for its fourth-generation Core i-Series processors, it's spilling details on the chips' graphics processing unit.  Haswell cores will be paired with three different tiers of GPUs, depending on the power envelope.

I. Enter a New GPU Brand

The top two tiers of the on-die GPU lineup introduce Intel's first-ever branded GPU lineup.  Advanced Micro Devices, Inc. (AMD) has Radeon, NVIDIA Corp. (NVDA) has GeForce, and now Intel has announced it will call its high-end GPUs "Iris".  In the past it relied on humble numbered die-parts with no separate branding (e.g. last generation's Intel HD 2500 and Intel HD 4000).

Intel had briefly toyed with the idea of releasing Larrabee-branded, computation-heavy discrete GPUs.  Ultimately Intel abandoned that project, choosing instead to stick to an embedded path, which led it to Iris.

The new GPUs have been referred to as GT3 in past roadmaps (and shown running Skyrim in demos).  All of the new chips will pack support for OpenGL 4.0 and DirectX 11.1.

Intel GT3/"Iris" GPU running Skyrim [Image Source: Tiffany Kaiser/DailyTech]

On the lowest end power-wise, Intel's 15 watt Haswell chips (like the one presumably powering the company's 17 mm-thick hybrid ultrabook reference design) will get the HD 5000, a mild upgrade.  The performance increase in this segment is expected to be around 50 percent (over the HD 2500).

Intel Iris graphics
[Click to enlarge]

AMD's Fusion efforts were ultimately a wakeup call to Intel on the value of a high-quality embedded GPU.  But it appears that the student has now become the master; Iris and Iris Pro come closer to matching a discrete GPU than AMD's Fusion chips have thus far.

II. Discrete Graphics Performance in an Embedded Package

Things will start to heat up in the U-Series (Core i5-branded) 28 W mobile CPUs, which will get the new "Iris"-branded GPU unit (the full name is Intel Iris HD 5100).  It's roughly 2x faster than the HD 4000.
Intel Iris

The Iris Pro gets a special boost: for the first time, dedicated eDRAM (embedded dynamic random-access memory) is included alongside the GPU part of the die.  On high-end laptop chips -- the Core i7-branded H-Series of mobile chips (47-55 W of typical power) -- this is expected to again represent about a 2x speedup.
Intel Iris Pro

On the desktop side, Intel's GT3e "Iris Pro" part will get an even bigger boost, reaching a 3x speedup in the R-Series (65-84 W) desktop chips.  The M-Series laptop and K-Series desktop chips are also expected to have access to Iris Pro, although Intel hasn't revealed their exact level of performance increase.

Intel Iris Pro 2

An Ivy Bridge i7-3770K part scored around 1560 in 3DMark 2011 [source], so the new Iris Pro-equipped chips should score over 3000 if Intel's performance claims are accurate.  That indicates that Intel's on-die graphics will be slightly better than a full discrete AMD Radeon HD 7750 GPU, which scores around 2900 [source].
Radeon HD 7750
The Iris Pro on-die GPU is approximately as powerful as last generation's Radeon HD 7750.
[Image Source: AMD]
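A quick back-of-the-envelope check of the projection above. The scores are the article's cited figures, and the multiplier is the ~2x factor that matches the "over 3000" claim (Intel's exact baseline for its speedup percentages isn't spelled out here, so treat this as a sketch rather than a benchmark):

```python
# Sanity check of the projected 3DMark 2011 scores cited in the article.
IVY_BRIDGE_HD4000_SCORE = 1560   # i7-3770K (HD 4000) 3DMark 2011 score [source]
RADEON_HD7750_SCORE = 2900       # discrete AMD Radeon HD 7750 score [source]
CLAIMED_SPEEDUP = 2.0            # ~2x factor consistent with the "over 3000" projection

projected_iris_pro = IVY_BRIDGE_HD4000_SCORE * CLAIMED_SPEEDUP
print(projected_iris_pro)                        # 3120.0 -- i.e. "over 3000"
print(projected_iris_pro > RADEON_HD7750_SCORE)  # True -- slightly ahead of the 7750
```

If Intel's larger 3x desktop figure applied to the same baseline, the gap over the 7750 would be wider still; the 2x figure is the conservative reading.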

Whether the real-world performance truly lives up to that remains to be seen, but it's clear that this is Intel's most powerful GPU ever, and worthy of its first-ever branding.

[All slides are courtesy of Intel, via AnandTech]

Sources: Intel via AnandTech, Intel


RE: Booo
By deathwombat on 5/2/2013 3:24:20 PM , Rating: 2
It's still stagnant on the CPU side.

I'm from the era when a computer was utterly obsolete after 5 years. In roughly 5-year intervals I upgraded from a Z80 4MHz to a 286-20 to a 486DX 33 to a Pentium II 350 to an Athlon XP 2400+ to a Core 2 Quad 6600. There was never anything wrong with the computers that I upgraded from; they did everything that I wanted them to do. I wanted more performance because it was available, and I found ways to use it once I had it.

So, I'd been using my Core 2 Quad since 2006 and, by 2012, I figured it was time for an upgrade. I knew that the base clock speed on a Core i7 3770 was only 41% faster than what I already had, but I figured that with the addition of HyperThreading, TurboBoost, new instructions like AVX, and improvements to performance per clock, a new computer would crush my Core 2 Quad in the CPU-based distributed computing projects I like to run.

Well, HyperThreading didn't help and usually hurt performance, so I turned that off. In the end, my new computer outperforms my old one by an average of 50% in my DC projects... exactly the same amount that the clock speed increased by when TurboBoost is running on all four cores. 6 years -- more than half a decade! -- and only 50%. That would be good in any other industry, but I'm used to 1000%. Had I known the performance boost would be that small, I wouldn't have wasted my money. A 7-year-old computer is perfectly good today. In fact, by 2016, I doubt that my decade-old Core 2 Quad 6600 will need to be replaced. My wife loves it!

So, now that we're in an age where clock speeds only creep upwards and performance per clock is essentially stagnant, what can be done to entice someone to buy a new computer? Well, Moore's Law still lets Intel double the number of transistors every two years. My DC projects all scale perfectly as the number of cores increase. If I had 4 cores in 2006, new CPUs could have had 8 cores in 2008, 16 cores in 2010, and 32 cores in 2012. Instead, Intel's mainstream chips have stayed at 4 cores, and they've given half the die to a graphics chip that I don't even use!

If Iris is as good as Intel claims, maybe Intel's integrated graphics will eventually be good enough that I won't bother with a discrete graphics card. In the meantime, the stagnation that the OP complained about has left me with no reason to upgrade my now 7-year-old computer, and made me a sucker for doing it. Barring some fundamental change like 10+ GHz CPUs made possible by graphene or diamonds, I highly suspect that my Core i7 3770 is the last CPU I'll ever buy. I can't do anything with it that I couldn't do with my computer from 2006, so if a CPU from 2006 is still competitive, between the decline in PC sales, the lack of competition for the performance crown, and the threats to Moore's Law posed by the laws of physics and Moore's Second Law (aka Rock's Law), will there ever be a desktop CPU that can render an i7 3770 obsolete?

RE: Booo
By retrospooty on 5/2/2013 3:36:01 PM , Rating: 2
That's the thing though... If your new CPU was 10x faster than it is now, it still wouldn't operate much faster at all. Almost nothing you do is bottlenecked at the CPU. Open up your task manager and watch it for a while. Your CPUs are probably averaging 1 to 5% utilization most of the day. Faster CPUs aren't needed unless some software comes out to utilize what we have now.

RE: Booo
By deathwombat on 5/2/2013 4:19:54 PM , Rating: 3
My CPUs are at 100% all the time. (Or at least they were. I'm taking a break from distributed computing projects at the moment.) There are still applications that will never have enough power, like DC projects that simulate protein folding, find prime numbers, crack encrypted messages, or search for extraterrestrial life. Chess engines will never fail to benefit from being able to search more moves per second.

But yes, there is no need for more power to do most of what we do now. As I pointed out above, when more power became available, new technologies came along to take advantage of it. What would we do if we suddenly had 10 or 100 GHz CPUs? I don't know, but someone would find a use for it, and it might forever change the way we use computers. If you build it, they will come.

RE: Booo
By retrospooty on 5/2/2013 5:38:12 PM , Rating: 2
LOL... True, but the thing about distributed computing is you are basically giving out your idle/spare CPU processing power to whatever cause... It works because there's SOOOOOO much free CPU power already. Other than that and a few other niche things that a very small percentage of people do, most CPUs on PCs are less than 5% utilized, way less.

RE: Booo
By Motoman on 5/3/2013 3:57:21 PM , Rating: 2
Well...I learned the hard way about the "free" CPU power you're talking about.

This is a long time ago, mind you...but back in the day I was all about SETI@Home. I had all kinds of PCs running all the time, doing the SETI command-line client. I was in something silly like the top 0.2% of all SETI users worldwide in terms of workunits done (out of something ridiculous like a hundred million users).

Then I had a thought. And for a month, I stopped running all my PCs with SETI all the time. When my electricity bill came for that month, it was about $100 lower than it usually was.

So then I stopped running SETI XD

RE: Booo
By Jeffk464 on 5/2/2013 5:34:12 PM , Rating: 2
so if a CPU from 2006 is still competitive, between the decline in PC sales, the lack of competition for the performance crown, and the threats to Moore's Law posed by the laws of physics and Moore's Second Law (aka Rock's Law), will there ever be a desktop CPU that can render an i7 3770 obsolete?

Pretty much the cpu world has become pretty yawn worthy. All of the current excitement and big improvements are in phone and tablet computing. It still takes a ridiculous amount of money in graphics cards to max out my core i5 Sandy Bridge.

RE: Booo
By ShieTar on 5/3/2013 7:24:45 AM , Rating: 2
Why are you complaining that the 2012 $300 CPU only outperforms your 2007 CPU (the Core 2 Quad wasn't even released in 2006) by 40% in one specific, badly optimized piece of software? I mean, first off, the 3770K outperforms a Q6600 by an easy 100% in most benchmark tasks [link].
Second of all, you should compare the $1k flagship CPUs from both years, which gives about a 200% performance boost in multithreaded applications.

And sure, the Q6600 outperformed a P3 600MHz from the year 2000 by a factor of 25 instead of the factor of 3 you get now, but it also increased the TDP by a factor of 6 to 7.

Oh, and here is a tip: if you do not care about the GPU part, don't buy it. You could have gotten the CPU performance of the 3770K without the GPU part as a Xeon E3-1230 v2, for two thirds of the price tag.

RE: Booo
By ShieTar on 5/3/2013 7:27:28 AM , Rating: 2
Sorry, the 2nd link should have been different from the first one, of course:



Copyright 2016 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki