Intel is looking to change that and will be bringing its own discrete products to market at some point. Intel's discrete graphics cards will use the Larrabee architecture, and according to eWeek, cards based on Larrabee won't be available until 2009 or 2010, though eWeek does say that Intel will be sampling Larrabee in 2008.
Intel has begun talking about the Larrabee architecture and, naturally, it feels that Larrabee is the best architecture out there. What makes Intel so enthusiastic about its architecture is that the Larrabee core is based on the Pentium CPU and uses x86 cores. The use of x86 cores means that programmers and game developers can use familiar programming languages -- like C and C++ -- that have been in use for years, rather than having to learn a new programming model like NVIDIA's CUDA.
Intel says that its Larrabee is a many-core processor, and eWeek reports that it will likely contain ten or more individual x86 processor cores inside the silicon package. Discrete graphics cards using the Larrabee architecture will initially be aimed at the gaming market, which means Intel is directly targeting AMD and NVIDIA.
Intel says Larrabee will support both the DirectX and OpenGL APIs, and it is encouraging developers to design new, graphics-intensive applications for the architecture. Larrabee will also bring a new era in parallel computing, with developers able to write applications for it using the C and C++ programming languages.
Intel has combined the programmability of a CPU with the parallel throughput of a GPU. Intel says that Larrabee will also contain vector-processing units to enhance the performance of graphics and video applications. The x86 cores feature short instruction pipelines and can support four execution threads per core, with a register set for each thread to help hide memory latency. The short instruction pipeline also allows faster access to the L1 cache in each core.
Intel says that all the cores on Larrabee will share access to a large L2 cache that is partitioned for each of the cores. This arrangement allows the Larrabee architecture to maintain an efficient in-order pipeline while still giving the processor some of the benefits of an out-of-order design for parallel applications. Communication between all of the Larrabee cores will be handled by what Intel calls a bidirectional ring network.
Larry Seiler from Intel says, "What the graphics and general data
parallel application market needs is an architecture that provides the full
programming abilities of a CPU, the full capabilities of a CPU together with
the parallelism that is inherent in graphics processors. Larrabee provides [that] and it's a practical solution to the
limitations of current graphics processors."
According to News.com, one Intel slide shows that the performance of the Larrabee architecture scales linearly, with four cores offering twice the performance of two cores. News.com also reports that core counts for Larrabee will range from 8 to 48; the exact core count for the Larrabee architecture is unknown at this time.