Haswell CPUs will contain vector processors and a more powerful on-die GPU. The chips are designed to power the next generation of "Ultrabooks".  (Source: ASUSTek)

An Intel corporate blog post seemed to confirm both the presence of vector coprocessor silicon and a 2013 release date for the 22 nm Haswell.  (Source: Intel)
Company looks to new 22 nm architecture to hold off AMD and ARM Holdings

Intel Corp. (INTC) has dropped a few hints about its upcoming 22 nm Haswell architecture, currently under development by the company's secret Oregon team.  In a post on the Intel Software Network blog titled "Haswell New Instruction Descriptions Now Available!", the company reveals that it plans to launch the new CPU in 2013.

Haswell will utilize the same power-saving tri-gate 3D transistor technology that will first drop with Ivy Bridge in early 2012.  Major architectural changes reportedly include a totally redesigned cache, fused multiply-add (FMA3) instruction support, and an on-chip vector coprocessor.
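For a sense of what FMA3 adds, here is a minimal sketch (a hypothetical usage example, not Intel sample code) using the x86 FMA intrinsics that compilers already expose: a fused multiply-add computes a*b + c in a single instruction with a single rounding step, rather than a separate multiply followed by an add.

/* Hedged FMA3 sketch: d = a*b + c on eight floats at once.
   Assumes a compiler with FMA support, e.g. gcc -O2 -mfma fma_demo.c */
#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m256 a = _mm256_set1_ps(2.0f);   /* eight copies of 2.0 */
    __m256 b = _mm256_set1_ps(3.0f);   /* eight copies of 3.0 */
    __m256 c = _mm256_set1_ps(1.0f);   /* eight copies of 1.0 */

    /* One fused instruction: multiply and add, single rounding step. */
    __m256 d = _mm256_fmadd_ps(a, b, c);

    float out[8];
    _mm256_storeu_ps(out, d);
    printf("%f\n", out[0]);            /* prints 7.000000 */
    return 0;
}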

The vector processor, which will work with the on-die GPU, was a major focus of the post.  The company is preparing a set of instructions called Advanced Vector Extensions (AVX), which will speed up vector math.  It writes:

Intel AVX addresses the continued need for vector floating-point performance in mainstream scientific and engineering numerical applications, visual processing, recognition, data-mining/synthesis, gaming, physics, cryptography and other areas of applications. Intel AVX is designed to facilitate efficient implementation by wide spectrum of software architectures of varying degrees of thread parallelism, and data vector lengths.
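As a rough illustration of the vector floating-point work Intel describes, the hedged sketch below processes an array eight single-precision floats per iteration using 256-bit AVX intrinsics (the function name and the assumption that n is a multiple of 8 are ours, for the example only):

/* Illustrative AVX sketch: y[i] = a*x[i] + y[i], eight floats per loop iteration.
   Build with e.g. gcc -O2 -mavx; n is assumed to be a multiple of 8. */
#include <immintrin.h>

void saxpy_avx(float a, const float *x, float *y, int n)
{
    __m256 va = _mm256_set1_ps(a);              /* broadcast the scalar */
    for (int i = 0; i < n; i += 8) {
        __m256 vx = _mm256_loadu_ps(&x[i]);     /* load 8 floats of x */
        __m256 vy = _mm256_loadu_ps(&y[i]);     /* load 8 floats of y */
        vy = _mm256_add_ps(_mm256_mul_ps(va, vx), vy);
        _mm256_storeu_ps(&y[i], vy);            /* store 8 results */
    }
}

On a chip with FMA3, the multiply-and-add pair above would collapse into a single _mm256_fmadd_ps call.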

According to CNET, Intel's marketing chief Tom Kilroy indicates that Intel hopes the new chip's integrated graphics will rival today's discrete graphics.  

Intel has a ways to go to meet that objective -- the on-die GPU in Sandy Bridge marked a significant improvement over past designs (which were traditionally housed in a separate package), but it still fell far short of the GPU found in Advanced Micro Devices, Inc.'s (AMD) Llano Fusion APUs.

Intel has enjoyed a love/hate relationship with graphics makers AMD and NVIDIA Corp. (NVDA).  While it's been forced to allow their GPUs to live on its motherboards and alongside its CPUs, the company has also fantasized about usurping the graphics veterans.  Those plans culminated in the company's Larrabee project, which aimed to offer discrete Intel graphics cards.

Now that a commercial release of Larrabee has been cancelled, Intel has seized upon on-die integrated graphics as its latest answer to try to push NVIDIA and AMD out of the market.  Intel is heavily promoting the concept of ultrabooks -- slender notebooks like Apple, Inc.'s (AAPL) MacBook Air or ASUSTeK Computer Inc.'s (TPE:2357) UX21, which feature low-voltage CPUs and -- often -- no discrete GPU.

Mr. Kilroy reportedly wants ultrabook manufacturers using Haswell to shoot for a target MSRP of $599 USD, which would put them roughly in line with this year's Llano notebooks from AMD and its partners.  That's about $100 USD less than current Sandy Bridge notebooks run.

Intel also faces pressure from a surging ARM Holdings plc (ARMH), which is looking to unveil notebook processors sometime next year.



Comments

The focus
By Jaybus on 6/24/2011 2:33:52 PM , Rating: 2
Having a faster GPU on chip is not the focus. The point is treating it like a coprocessor. Long ago, this was the path floating point processors took. They started out as separate chips on separate sockets. Data had to be shipped into and out of both processors over a bus. Eventually, it was moved on die and integrated into the CPU core, the control unit shipping instructions from the same (cached) instruction queue to either the fp or integer unit at equal cost.

Now we are finally seeing the beginnings of the integration of a vector floating point unit (not counting the very limited SIMD unit). It is a huge difference, and is why the vector unit was focused on. The key differences are that the data doesn't have to be shipped over the PCI-E bus, but over far faster on-die and RAM memory channels. Program code doesn't have to be shipped to a separate memory space for the separate instruction queue of a GPU card/chip, but is inline in the same cached instruction queue. It is a paradigm shift, because it makes developing compilers and software to utilize vector processing much, much simpler.
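To illustrate the commenter's point about simpler tooling: because the vector unit shares the CPU's memory and instruction stream, an ordinary compiler can target it from plain scalar source, with no separate device memory, driver, or kernel launch. A hedged sketch (file name and flags are examples only):

/* Plain scalar C that a compiler can auto-vectorize to AVX,
   e.g. gcc -O3 -mavx vec_add.c (auto-vectorization is enabled at -O3).
   The data stays in ordinary RAM and the CPU caches throughout. */
void vec_add(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = a[i] + b[i];   /* the compiler can emit 256-bit vector adds here */
}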




RE: The focus
By xyzCoder on 6/24/2011 9:46:12 PM , Rating: 2
"The key differences are that the data doesn't have to be shipped over the PCI-E bus, but over far faster on die and RAM memory channels."

You are comparing an integrated solution with a non-integrated solution. Compare Intel's supposedly brilliant 'vector floating point unit' against AMD's latest integrated offerings and they are similar, but AMD/NVIDIA come out ahead in part because their hardware supports frameworks like OpenCL.

And unless code widely gets compiled specifically to use these instructions, they are going to end up as wasted space on the silicon of 99% of customers, even though specific benchmarks may give great results.
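The commenter's concern maps to how vector-aware software is typically shipped: a binary either requires the new instructions outright or detects them at run time and falls back otherwise. A hedged sketch using the GCC/Clang CPU-feature builtins (saxpy_avx and saxpy_scalar are hypothetical implementations):

/* Hedged sketch of runtime dispatch between an AVX path and a scalar fallback.
   __builtin_cpu_init/__builtin_cpu_supports are GCC/Clang builtins;
   saxpy_avx() and saxpy_scalar() are hypothetical implementations. */
void saxpy_avx(float a, const float *x, float *y, int n);
void saxpy_scalar(float a, const float *x, float *y, int n);

void saxpy(float a, const float *x, float *y, int n)
{
    __builtin_cpu_init();                 /* run CPU feature detection */
    if (__builtin_cpu_supports("avx"))
        saxpy_avx(a, x, y, n);            /* the new silicon actually gets used... */
    else
        saxpy_scalar(a, x, y, n);         /* ...everyone else takes the plain path */
}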


"Game reviewers fought each other to write the most glowing coverage possible for the powerhouse Sony, MS systems. Reviewers flipped coins to see who would review the Nintendo Wii. The losers got stuck with the job." -- Andy Marken














botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki