



Larry Seiler and Stephen Junkins Speak at Intel Larrabee Brief  (Source: News.com)
Intel will begin sampling Larrabee in 2008 with products on market in 2009 or 2010

Today there are three main players producing graphics hardware -- Intel, AMD, and NVIDIA. As the market stands right now, only AMD and NVIDIA manufacture discrete graphics cards, with Intel sticking exclusively to the on-board graphics common on the vast majority of low- and mid-range notebook and desktop computers.

Intel is looking to change that and will bring its own discrete products to market at some point. Intel's discrete graphics cards will use the Larrabee architecture and, according to eWeek, won't be available until 2009 or 2010, though eWeek does say that Intel will begin sampling Larrabee in 2008.

Intel has begun talking about the Larrabee architecture and, naturally, it feels that Larrabee is the best architecture out there. What makes Intel so enthusiastic is that the Larrabee core is based on the Pentium CPU and uses x86 cores. The use of x86 cores means that programmers and game developers can use familiar programming languages -- like C and C++ -- that have been in use for years, rather than having to learn a new programming model such as NVIDIA's CUDA.

Intel says that Larrabee is a many-core processor, and eWeek reports that it will likely contain ten or more individual x86 processor cores inside the silicon package. Discrete graphics cards using the Larrabee architecture will initially be aimed at the gaming market, which means Intel is directly targeting AMD and NVIDIA with Larrabee.

Intel says Larrabee will support both the DirectX and OpenGL APIs, and it is encouraging developers to design new and graphics-intensive applications for the architecture. Larrabee will also usher in a new era of parallel computing, with developers able to write applications for it using the C and C++ programming languages.

Intel has combined the programmability of a CPU with the parallel throughput of a GPU. Intel says that Larrabee will also contain vector-processing units to enhance the performance of graphics and video applications. The x86 cores feature short instruction pipelines and support four execution threads per core, with a separate register set for each thread. The short instruction pipeline also allows faster access to the L1 cache in each core.

Intel says that all of the cores on Larrabee will share access to a large L2 cache that is partitioned among the cores. This arrangement allows Larrabee to maintain an efficient in-order pipeline yet gives the processor some of the benefits of an out-of-order design for parallel applications. Communication between the Larrabee cores is handled by what Intel calls a bidirectional ring network.
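To give a rough sense of what that programming model means in practice, below is a minimal, hypothetical sketch in ordinary C++ of the kind of data-parallel loop Intel says developers could write for Larrabee without learning a new language. The function and variable names are invented for illustration, the four-way split simply echoes the "four execution threads per core" figure above, and nothing in the code is Larrabee-specific; it compiles and runs with any standard C++ compiler.

```cpp
// Hypothetical illustration only: a SAXPY-style loop split across four
// software threads, with an inner loop a vectorizing compiler can map
// onto wide vector units. Plain ISO C++; no Larrabee-specific API is used.
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

static void saxpy_chunk(float a, const float* x, float* y,
                        std::size_t begin, std::size_t end) {
    for (std::size_t i = begin; i < end; ++i)
        y[i] = a * x[i] + y[i];   // one multiply-add per element
}

int main() {
    const std::size_t n = 1 << 20;
    const unsigned kThreads = 4;               // illustrative thread count
    std::vector<float> x(n, 1.0f), y(n, 2.0f);

    std::vector<std::thread> workers;
    for (unsigned t = 0; t < kThreads; ++t) {
        std::size_t begin = n * t / kThreads;
        std::size_t end   = n * (t + 1) / kThreads;
        workers.emplace_back(saxpy_chunk, 3.0f, x.data(), y.data(), begin, end);
    }
    for (auto& w : workers) w.join();

    std::printf("y[0] = %.1f\n", y[0]);        // expect 5.0 (3*1 + 2)
    return 0;
}
```

Whether real Larrabee code would look exactly like this is unknown; the point is simply that it stays in C++ rather than requiring a separate GPU language.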

Larry Seiler from Intel says, "What the graphics and general data parallel application market needs is an architecture that provides the full programming abilities of a CPU, the full capabilities of a CPU together with the parallelism that is inherent in graphics processors. Larrabee provides [that] and it's a practical solution to the limitations of current graphics processors."

According to News.com, one Intel slide shows that the performance of the Larrabee architecture scales linearly, with four cores offering twice the performance of two cores. Core counts for Larrabee will reportedly range from 8 to 48; the exact core count has not been determined at this time.



Comments



By iocedmyself on 8/4/2008 9:31:20 PM , Rating: 4
As I already said in the Larrabee article comment section, whenever Intel has spent years, large sums of money, and extended stretches of time talking to the media while developing a new technology, it falls and falls hard.

The Itanium, dubbed the Itanic, was a joint effort between HP and Intel starting in '94 and planned to launch in '98. It was going to revolutionize the server market and crush anything and everything it came up against. So certain were they of success that they managed to convince Microsoft to code an Itanium OS. But billions of dollars in development and 3 years late, the Itanic had total platform sales in the triple digits.

Timna, which was 4 years in development aimed at the low-power, sub-$600 system segment, was to be the first Intel chip with an IMC... built around Rambus. After being delayed nearly 2 years and scrapping the Rambus design in favor of an SDRAM IMC, the end result had a fatal design flaw and Intel killed the project before it launched.

The Pentium 4 and its tasty NetBurst. Intel was adamant that the NetBurst architecture would scale to 10 GHz... it didn't make it past 3.8 GHz, used Rambus RAM, and at 3 GHz it was outperformed by a 1.8 GHz Pentium 3.

The Core 2 Duo/Quad chips are 32-bit chips. They may pay AMD to use its x86-64 code, but Intel notoriously slams 64-bit applications and OSes since it can't compete as a 64-bit chip. Intel sees gains in 64-bit apps of 5%-9% over 32-bit, whereas AMD sees gains of 18%-25% in 64-bit over 32-bit.

Intel's top-of-the-line $1000-$1600 chips have performance numbers of 40-51.2 gigaflops (10-12.8 GFLOPS per core). An 8-core Larrabee would hit about 102 GFLOPS, putting the 48-core at about 614 GFLOPS... based on the performance of the $1600 3.2 GHz QX9775.
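If you want the back-of-the-envelope math behind those numbers, here it is in plain C++ (the 51.2 GFLOPS QX9775 figure is the one above; Larrabee is a different architecture, so treat this purely as an extrapolation of the argument, not a spec):

```cpp
// Rough extrapolation only: scale the per-core throughput of a quad-core
// QX9775 (~51.2 GFLOPS peak, per the figure above) to hypothetical
// Larrabee core counts. These are not Intel projections.
#include <cstdio>

int main() {
    const double qx9775_gflops = 51.2;                 // quad-core peak
    const double per_core      = qx9775_gflops / 4.0;  // 12.8 GFLOPS/core
    const int core_counts[]    = {8, 16, 32, 48};
    for (int cores : core_counts)
        std::printf("%2d cores -> %6.1f GFLOPS\n", cores, cores * per_core);
    return 0;
}
```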

Intel's 80-core Terascale chip already tells us that in order to hit 1 TFLOPS they had to run 80 cores at 3.2 GHz; to get to 1.8 TFLOPS they had to run them at 5.1 GHz with 250 W+ power consumption.

ATI can surpass 1 TFLOPS with a single 55 nm GPU using around 100 W of power.

By the time Intel is ready to launch, AMD will already have released 40nm/32nm dual- and quad-core chips with an on-die GPU. 64-bit OS use has increased from about 6% to 30% in the past 5 months; 64-bit is finally going mainstream and will certainly be the norm by the time Intel finishes Larrabee, and Intel can't play in that environment.

Intel isn't innovative; its biggest success in the past 10 years is producing the leading 32-bit performance chip, which launched 20 years after the first 32-bit processor. This is Intel marketing hype based on nothing more than the analogy of "if GPUs have 20-40 times the processing power of a CPU, then we'll make a 24-48 core CPU and call it a GPU!"




By Some1ne on 8/5/2008 3:44:15 AM , Rating: 3
quote:
Intel's top-of-the-line $1000-$1600 chips have performance numbers of 40-51.2 gigaflops (10-12.8 GFLOPS per core). An 8-core Larrabee would hit about 102 GFLOPS, putting the 48-core at about 614 GFLOPS... based on the performance of the $1600 3.2 GHz QX9775.

Intel's 80-core Terascale chip already tells us that in order to hit 1 TFLOPS they had to run 80 cores at 3.2 GHz; to get to 1.8 TFLOPS they had to run them at 5.1 GHz with 250 W+ power consumption.


Most of your points are valid; however, those two are not. The Larrabee core architecture is completely different from both the Core 2 and Terascale architectures, so it's inaccurate to make direct comparisons between them.

If you read the article describing the Larrabee architecture, you'll find that each core is capable of sustaining up to 16 floating-point operations per clock cycle and is speculated to be clocked at 2.0 GHz. Given that, Intel could reach a theoretical maximum of about 1 TFLOPS using 32 Larrabee cores; 48 cores would give them about 1.5 TFLOPS.
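If you want to check that arithmetic, here's the same calculation in a few lines of C++ (both the 16 FLOPs/clock and the 2.0 GHz clock are speculated figures, not confirmed Larrabee specifications):

```cpp
// Theoretical peak = cores x FLOPs-per-clock x clock. The 16 FLOPs/clock
// and 2.0 GHz figures are speculation from the Larrabee coverage, not
// confirmed specs.
#include <cstdio>

int main() {
    const double flops_per_clock = 16.0;  // speculated, per core
    const double clock_ghz       = 2.0;   // speculated clock speed
    const int core_counts[]      = {32, 48};
    for (int cores : core_counts) {
        double tflops = cores * flops_per_clock * clock_ghz / 1000.0;
        std::printf("%d cores -> %.2f TFLOPS theoretical peak\n", cores, tflops);
    }
    return 0;
}
```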


By zsdersw on 8/5/2008 6:25:48 AM , Rating: 1
Actually no.. most of his points are not valid. They're the typical screed of an AMD fanboy.


By iocedmyself on 8/5/2008 8:57:59 AM , Rating: 1
Ah yes, my mistake. I thought I had read the claim of 16 FLOPs/clock, but I've looked at 4 or 5 articles on Larrabee and couldn't verify it. But really, it's still not very impressive. 32 cores is certainly better than 48, but it still doesn't compare to one.

"Up to" 16 FLOPs per clock leaves a whole lot of wiggle room. It's like those 1x-16x DVD-R discs: sure, you may have a burner that can write at 20x, but with that type of media you often have to endure speeds of 4x up to a peak of 8x.

Theoretical performance doesn't mean dick; it's the common Intel marketing practice that has oftentimes translated into

"well, this one time while running a specialized application that was highly optimized for our hardware platform we achieved that performance, so it's technically true and looks a lot better than the average performance of about 1/3 the speed"

When they release baseline numbers perhaps it will be more impressive, but even then it still falls far short of any kind of milestone.

Anything can look great on paper, and once you have the outline in front of you it's quite easy to start creating theoretical configs, clocks, and performance numbers. Though I did read that the forthcoming SIGGRAPH paper quotes performance as:

"single-threaded performance of one of Larrabee's cores is roughly half that of a 'Core 2' core, while the overall performance per watt of a Larrabee chip is 20× better than a Core 2 Duo chip"

Which could translate into anything from 2.5-12 GFLOPS per core... which is 80-384 GFLOPS on a 32-core package single-threaded, with power consumption somewhere between 3 and 4 watts per core, which puts a 32-core discrete GPU at 96-128 W TDP.

The fact that it won't be based on the Core 2 architecture isn't a strong selling point when that's really the only decent architecture they've launched this century. Especially considering that Larrabee will in fact have x86-64 extensions, which will either be added as a marketing feature that never gets used, or could create loads of problems since... it's not Intel's code.

It's always possible it could end up being a huge success, though it's about as likely as the bible being revised to include evolution.


By vignyan on 8/6/2008 1:34:28 AM , Rating: 2
First, when I started reading your post, I thought you just liked being pessimistic... but I soon realized you were simply negative! :(

While what you said about Intel's past failures seems true, I want to bring up these points...

Itanium: What did you expect? That Intel is a supreme power that can see the future and tell whether this would be a success? All they did was try to optimize a processor specifically for server use. Of course it's not x86, and hence it's an underdog in a market where Sun and IBM were quite the kings. But it still kicked a** against the existing servers in terms of performance (still not up to Sun and IBM, but that's also expected to go Intel's way!). So hold off on your comments about the Itanium market that you hardly know! :P

Timna, as you mention, was a failure for multiple reasons. The ones you mentioned are just the negative side of it. The bigger reason is that it tied you to changing your processor (a 4-year cycle) to keep up with memory-technology progression. Take AMD for example... how long after Intel supported DDR2 did AMD come up with a DDR2-compatible processor? I think more than 12 months... That's a $40bn hole in Intel's pocket if they had gone with Timna and developed its successors. Again, it's not possible to correctly predict DRAM technology 3-4 years in advance!

The "infamous" Pentium 4: Lets get one thing very very clear... P4 was a humongous success.. It shipped over 500mn units during its course. And the thing you speak of P3 at 1.8G beating p4 at 3.2G, well its only couple of benchmarks... Most of them, P4 was much better as compared to p3.. Well Intel did admit that it was wrong to go behind the clock. FYI, p4 did run at 10G... well with liquid Nitro cooled.. check on youtube for these OC videos... As a OC fan, you should have seen this! (i guess!)

The 64 bits: Do you really think Intel is such a huge company that it can control all the software developers in the WORLD??? My god... 64-bit applications are not new to server platforms, nor is a 64-bit processor -- Sun has had them for a long time. Anyway, it was a marketing gimmick played by AMD and most people fell for it. If my memory isn't rusted, they introduced their 64-bit processor way back in 2003, and they launched when they had no support from any of the software vendors. Back then, too, AMD said this would be a future investment. Five years later, 64-bit OSes are only now coming into the limelight for homes and small offices. So basically AMD tricked everyone... And FYI, a 64-bit OS requires a minimum of 4GB of RAM to perform well. Another FYI, Intel still rocks in most of the 64-bit benchmarks... gosh... a lot more to type... but please do your research. I know I might sound like I'm thrashing you and you'll want to curse me, but you really have to let go and see the fair side of Intel too...

And about the GFLOPS comparison of Larrabee, someone already corrected you. As for the 80-core chip, even that was a very different architecture (yes, Intel has lots of money! :))... It was a concept to prove that vector processing is possible and to test different communication topologies between cores, which will benefit future processors.

And as for your closing statement, well, that's innovation for you... there is a famous saying: "Smart people don't do special things. They do things specially." A 32-bit processor was invented 20 years ago, but no other company could come up with a processor like the C2D!! Doesn't that strike you?? :O... Another FYI, Intel EM64T has been around since AMD64 launched, so almost all mainstream processors from both companies have been capable of handling 64-bit applications.

And don't be so hasty in concluding that Intel just "put together some 32/48 CPUs" and called it a GPU... A lot still depends on the compiler and on software fixed-function models... This is a good idea (probably not for you), but Intel does not have to launch two different platforms like AMD/ATI or NVIDIA -- one for graphics and another for high-performance computing... This could be a one-stop solution for all supercomputing... check out some of its advantages before you shut it down!

Peace man! >:D

