



Larry Seiler and Stephen Junkins Speak at Intel Larrabee Brief  (Source: News.com)
Intel will begin sampling Larrabee in 2008 with products on market in 2009 or 2010

Today there are three main players producing graphics hardware -- Intel, AMD, and NVIDIA. As the market stands right now, only AMD and NVIDIA manufacture discrete graphics cards, with Intel sticking exclusively to the on-board graphics common on the vast majority of low- and mid-range notebook and desktop computers.

Intel is looking to change that and will bring its own discrete products to market at some point. Intel's discrete graphics cards will use the Larrabee architecture and, according to eWeek, won't be available until 2009 or 2010, though eWeek also says Intel will begin sampling Larrabee in 2008.

Intel has begun talking about the Larrabee architecture and naturally, it feels that Larrabee is the best architecture out there. What makes Intel so enthused by its architecture is that the Larrabee core is based on the Pentium CPU and uses x86 cores. The use of x86 cores means that programmers and game developers can use the familiar programming languages -- like C and C++ -- that have been in use for a number of years, rather than having to learn a new programming language like NVIDIA's CUDA.
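To make that point concrete, here is a minimal sketch (an editorial illustration under stated assumptions, not Intel code) of the kind of data-parallel kernel a developer could write in plain C++ for a Larrabee-style part; the function name and parameters are hypothetical.

#include <cstddef>

// Hypothetical per-pixel kernel: scale every pixel's intensity by a gain factor.
// Ordinary C++ -- the same source could be compiled for a host x86 CPU or,
// in principle, for an x86-based graphics part such as Larrabee.
void scale_brightness(float* pixels, std::size_t count, float gain)
{
    for (std::size_t i = 0; i < count; ++i)
        pixels[i] *= gain;   // plain data-parallel work; no GPU-specific language required
}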

Intel describes Larrabee as a many-core processor, and eWeek reports that it will likely contain ten or more individual x86 processor cores inside the silicon package. Discrete graphics cards using the Larrabee architecture will initially be aimed at the gaming market. That means Intel is directly targeting AMD and NVIDIA with Larrabee.

Intel says Larrabee will support both the DirectX and OpenGL APIs, and it is encouraging developers to design new, graphics-intensive applications for the architecture. Larrabee will also usher in a new era of parallel computing, with developers able to write applications for it using the C and C++ programming languages.

Intel has combined the programmability of a CPU with the parallel throughput of a GPU. Intel says that Larrabee will also contain vector-processing units to enhance the performance of graphics and video applications. The x86 cores feature short instruction pipelines and support four execution threads per core, with each thread getting its own register set to help hide memory latency. The short pipeline also allows fast access to each core's L1 cache.
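As a rough illustration of how such a loop might map onto those vector units (again an editorial sketch; the 16-lane width is a figure reported for Larrabee elsewhere and is an assumption here), the kernel above can be restructured in 16-element chunks, where each chunk corresponds conceptually to one vector operation and the four hardware threads per core exist to keep the pipeline busy while another thread waits on memory.

#include <cstddef>

// Sketch: process the data in 16-element chunks so that a 16-lane vector unit
// could, in principle, retire each chunk as a single vector multiply.
// The width of 16 is an assumption taken from reported Larrabee specifications.
constexpr std::size_t kVectorWidth = 16;

void scale_brightness_chunked(float* pixels, std::size_t count, float gain)
{
    std::size_t i = 0;
    for (; i + kVectorWidth <= count; i += kVectorWidth)
    {
        // One conceptual "vector op": 16 multiplies issued together.
        for (std::size_t lane = 0; lane < kVectorWidth; ++lane)
            pixels[i + lane] *= gain;
    }
    for (; i < count; ++i)   // leftover elements that do not fill a full vector
        pixels[i] *= gain;
}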

Intel says that all cores on Larrabee will share access to a large L2 cache, with a partition of that cache assigned to each core. This arrangement lets Larrabee keep an efficient in-order pipeline while still gaining some of the benefits of an out-of-order processor for parallel applications. Communication between the Larrabee cores is handled by what Intel calls a bidirectional ring network.

Larry Seiler from Intel says, "What the graphics and general data parallel application market needs is an architecture that provides the full programming abilities of a CPU, the full capabilities of a CPU together with the parallelism that is inherent in graphics processors. Larrabee provides [that] and it's a practical solution to the limitations of current graphics processors."

According to News.com, one Intel slide shows that the performance of the Larrabee architecture scales linearly, with four cores offering twice the performance of two. Core counts for Larrabee will reportedly range from 8 to 48; the exact count is unknown at this time.



Comments



Die Size
By Spectator on 8/4/2008 1:40:23 PM , Rating: 2
From what I have read today it all looks good, except one detail.

TSMC are about to take the advantage in 43 and 40 production fabs. That is the first time Intel has been beaten to a smaller footprint in recent memory.

As the competition don't need to invest in fabs, they will perhaps have the lead.

On the plus side, Intel is making 2.5k Atoms per wafer at the moment, and that market may bring some additional revenue.
But regardless, it all looks good for Intel. The Atom has been designed to be efficient/profitable, and the Atom design team are said to be working on Larrabee.

If I had to invest my cash at the moment, I'd go with Intel.
For the upcoming CPU that looks like a +20% speed increase just from the integrated memory controller, ignoring fab changes... then wait six months and see what state the video card market is in versus Intel's fab facilities.

Quote: "Strength makes all other values possible." :) :P

AMD/ATI team up to do both. NVIDIA owns/owned most of the GPU market. Intel rocks up to play them both. lol; before long we could have NVIDIA+AMD+ATI as one entity to challenge the Intel supremacy.




RE: Die Size
By voodooboy on 8/4/2008 2:23:17 PM , Rating: 2
quote:
On the plus side, Intel is making 2.5k Atoms per wafer at the moment, and that market may bring some additional revenue. But regardless, it all looks good for Intel. The Atom has been designed to be efficient/profitable, and the Atom design team are said to be working on Larrabee.


Revenue...probably. Profits? Not so much. Low-cost chips such as the Atom are actually eating into margins. That's the reason why Paul Otellini went on record playing down the Atom and why AMD has decided to take a wait-and-watch approach to the whole thing for now, which actually seems like a wise decision. The Atom, for sure, is eating into Celeron/Pentium's share...and that's definitely not good for Intel.


RE: Die Size
By Adonlude on 8/5/2008 4:33:06 PM , Rating: 2
quote:
The Atom, for sure, is eating into Celeron/Pentium's share...and that's definitely not good for Intel.

Actually, the Atom is eating into the entire desktop CPU market, and that hurts everybody, including AMD. It probably hurts Intel a little less since they do get money for Atom processors.


RE: Die Size
By masher2 (blog) on 8/4/2008 2:26:00 PM , Rating: 3
> "TSMC are about to take the advantage in 43 and 40 production fabs. That is the first time intel has been beaten to a smaller footprint in recent memory."

Intel unveiled 32 nm flash chips several months ago. They typically lead with flash, and allow the process to mature slightly before moving their CPUs to it.


RE: Die Size
By Meph3961 on 8/4/2008 4:06:43 PM , Rating: 2
quote:
Intel unveiled 32 nm flash chips several months ago. They typically lead with flash, and allow the process to mature slightly before moving their CPUs to it.


While you are correct in saying that Intel already unveiled a 32nm flash wafer (back in September 2007), Spectator is partially correct in saying that Intel will lose its process lead. AMD is planning on releasing a 40nm GPU in Q1 2009 and will most likely beat Intel's 32nm CPU to market, making it the smallest-process chip out for a little while.


RE: Die Size
By masher2 (blog) on 8/4/2008 4:15:09 PM , Rating: 3
If you're talking about chips in general, then no...Intel will be in volume production of 32nm memory before then. If you're talking CPUs/GPUs, then it's a possibility... though given AMD's track record of meeting announced timetables on past shrinks, it's still possible that Westmere (Intel's 32nm CPU) will arrive before AMD's RV870.


RE: Die Size
By ChipDude on 8/4/2008 5:45:28 PM , Rating: 2
Logic and memory development are very separate at Intel. Leading in one isn't at all related to when and how fast the other migrates.


RE: Die Size
By Khato on 8/4/2008 6:14:58 PM , Rating: 3
quote:
TSMC are about to take the advantage in 43 and 40 production fabs. That is the first time Intel has been beaten to a smaller footprint in recent memory.


40nm is a half-node, and TSMC typically 'beats' Intel for a short period of time thanks to half-nodes. On size, TSMC's 45nm may well be better than Intel's -- their reported SRAM cell sizes are smaller, at least. But that's because TSMC is already using immersion lithography, which markedly increases costs. As well, the Intel process has far better performance characteristics due to its hafnium-based gate dielectric, compared to TSMC's use of nitrided oxides.


RE: Die Size
By vignyan on 8/5/2008 1:29:30 PM , Rating: 2
Hmm, I don't know about others... but the terminology for nm nodes is different between processors and GPUs. At Intel, it seems that the 65nm process has all the analog logic using 65nm libs while the core logic uses 45nm libs. The core libs are different from the analog libs...

The GPU shrink to 55nm was when the logic was between 45nm and 65nm... hence the 55nm node definition by them. Since 40nm is up, they should have libs between 32nm and 48nm! ;)


Countdown...
By zsdersw on 8/4/2008 12:58:47 PM , Rating: 1
.. until the usual parade of responses begins. They will include comments very similar to:

"Intel sucks at graphics.. why would Larrabee be any different?"

"Not gonna be competitive with ATI/Nvidia"

"Drivers are gonna suck"

They're coming.. just you watch.




RE: Countdown...
By ajvitaly on 8/4/2008 1:03:34 PM , Rating: 1
Those comments are coming. But you forgot one other important comment that I will personally be saying from now till release:

It'll be too complicated a platform to squeeze performance out of, a la the PS3. It'll probably be a great product that developers will loathe.


RE: Countdown...
By masher2 (blog) on 8/4/2008 1:17:28 PM , Rating: 2
The Cell (PS3) was SIMD. Larrabee will be MIMD...much more like traditional programming (and the Xbox 360, for that matter).


RE: Countdown...
By FITCamaro on 8/4/2008 1:27:58 PM , Rating: 2
Not only that, but it's not like anything special has to be done to program for it. You can program using DX or OGL for graphics, or write any other kind of graphics (or non-graphics) code in basic C/C++, same as you can for your system processor. You don't really have to learn anything new for this thing other than perhaps multi-threaded coding techniques.


RE: Countdown...
By Some1ne on 8/4/2008 6:05:17 PM , Rating: 5
quote:
"Intel sucks at graphics.. why would Larrabee be any different?"


They currently do. Why would it? How is speculating that Larrabee will be different any better than speculating that it won't be different? At least the people speculating that it won't be different have historical precedent on their side.

quote:
"Not gonna be competitive with ATI/Nvidia"


If you bother to run the numbers, there's justification for that one as well. Each core in the Larrabee architecture can sustain a maximum of 16 FLOPS per clock. The clock rate is expected to be about 2 GHz, and each card is expected to support in the neighborhood of 32 cores. That gives a maximum theoretical throughput of about 1 TFLOPS. Both AMD and Nvidia can already hit that mark with their current high-end cards. By the 2009/2010 timeframe, they will probably have at least doubled that number. So there's more reason to think that it won't be competitive than there is to think that it will be.
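For anyone who wants to check that arithmetic, here is a back-of-the-envelope sketch using the same figures quoted above (16 single-precision FLOPS per clock, roughly 2 GHz, and 8 to 48 cores); all of these are estimates rather than confirmed specifications.

#include <cstdio>

// Theoretical peak throughput = cores x FLOPS-per-clock x clock rate.
// All inputs are the estimates discussed above, not confirmed specs.
int main()
{
    const double flops_per_clock = 16.0;  // estimated per-core vector throughput
    const double clock_ghz       = 2.0;   // estimated clock rate

    for (int cores : {8, 16, 32, 48})
    {
        double peak_gflops = cores * flops_per_clock * clock_ghz;
        std::printf("%2d cores: ~%.0f GFLOPS (%.2f TFLOPS) theoretical peak\n",
                    cores, peak_gflops, peak_gflops / 1000.0);
    }
    return 0;
}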

quote:
"Drivers are gonna suck"


They very well might, as they usually do when most new products are released. Do you have any evidence to suggest that they won't suck?

In other words, by implying that such comments are invalid/false, you've done nothing different than the people who assert that they are true. Speculation cuts both ways.


RE: Countdown...
By zsdersw on 8/4/2008 9:15:17 PM , Rating: 2
quote:
They currently do. Why would it? How is speculating that Larrabee will be different any better than speculating that it won't be different? At least the people speculating that it won't be different have historical precedent on their side.


Historical precedent is useful until it isn't. I'm not speculating on Larrabee, I'm criticizing the typical speculation that is likely to occur.

quote:
So there's more reason to think that it won't be competitive than there is to think that it will be.


"Competitive" is less a measure of whose got the most TFLOPS than it is who has the most TFLOPS [i]at the least cost[/i].

quote:
Do you have any evidence to suggest that they won't suck?


The people who currently write Intel's IGP drivers are not the people working on Larrabee's drivers.


Hmmm
By voodooboy on 8/4/2008 2:15:31 PM , Rating: 2
Maybe it's just me...but after recently concluding my reading on the Cell processor for class, to me Larrabee looks like what the Cell should have been. In fact, unlike what's being portrayed by Intel, it seems more like an evolution (and a well thought out/designed one at that!) of the IBM/Toshiba/Sony Cell B.E. more than anything else. Just that Intel had the x86 IP to build on... which Sony unfortunately didn't have (didn't want to?), and hence the Cell looked like a unique but ugly alien while Larrabee looks more like a regular but potent bombshell.

Where they ARE going to hand the Cell its butt on a platter is when it comes to library/SDK support.




RE: Hmmm
By Master Kenobi (blog) on 8/4/2008 3:18:48 PM , Rating: 2
Well, I never did get the idea behind Cell. It has one real general purpose core, but the rest are a bunch of Floating Point units only. If the calculation isn't FP, then the rest of the Cell is worthless. Seems lousy, since you need to load balance to push what you can to the FP units and route anything that can't be structured that way through the general processing unit.

Intel's design seems to be all general-purpose units without any specialization involved. This leads to interesting possibilities, since you can reprogram the Larrabee processing cores to handle any type of calculation. Far superior to the Cell B.E. in every way.


RE: Hmmm
By FITCamaro on 8/4/2008 3:46:54 PM , Rating: 2
Agreed. I laugh at people who consider the Cell to be "multi-core". If the Cell is multi-core, then a single Intel "Core" core is multi-core, since it has special hardware for performing specific tasks as well.


RE: Hmmm
By voodooboy on 8/4/2008 4:20:52 PM , Rating: 2
Yes, that's very true. But you're missing one point: the Cell was primarily built for a specific purpose, powering the PlayStation 3 and crunching whatever may be thrown at it in that role. The fact that IBM wanted to use the Cell/B.E. as a CPU/co-processor in certain application domains is secondary. The Cell/B.E. actually does its job quite well (orders of magnitude faster than GP processors for what it's designed to do), reason enough for it to be used in what's currently the world's fastest supercomputer, Roadrunner.

Larrabee, on the other hand, goes in the opposite direction. Intel has taken the P54C architecture (a general-purpose processor to begin with), beefed up its vector computation units, and provided other elements so as to better handle data-parallel applications.

Also, although this is a very loose classification, one was designed to be a CPU (the Cell) and one a GPU (Larrabee).

The idea behind my previous post was not to draw a one-to-one correlation between the Cell/B.E. and Larrabee; it was just to show how Larrabee as a whole (not just the x86 units) is an architecture with a similar blueprint to the Cell: multiple processing units connected via a high-speed ring bus (the EIB in the Cell's case) which manipulate data present in their own local memory (the LS in the Cell's case).


RE: Hmmm
By KernD on 8/4/2008 7:00:36 PM , Rating: 2
I don't think we should see the Cell as a processor developed by IBM for Sony, but as an IBM processor that Sony thought would be good for their "super-computer" console.

The whole design of the Cell is clearly aimed at accelerating math processing for the computers that IBM makes. It adds plenty of math power, one core to keep them busy, and a memory controller. It's pretty much like a small chunk of a supercomputer, all on a chip.


RE: Hmmm
By zpdixon on 8/5/2008 12:12:52 AM , Rating: 2
quote:

Well, I never did get the idea behind Cell. It has one real general purpose core, but the rest are a bunch of Floating Point units only.

Wrong. SPUs have full support for integer and logical operations. How do I know? I wrote a crypt() implementation for the Cell in SPU assembly...

I am currently writing code for AMD GPUs (in AMD CAL IL) and I can't wait to see Larrabee. It seems like chip designers are finally understanding why having as many cores as possible, even if limited in capabilities, is important for some workloads.


Programming language...
By DanoruX on 8/4/2008 1:28:50 PM , Rating: 5
quote:
Intel has begun talking about the Larrabee architecture and naturally, it feels that Larrabee is the best architecture out there. What makes Intel so enthused by its architecture is that the Larrabee core is based on the Pentium CPU and uses x86 cores. The use of x86 cores means that programmers and game developers can use the familiar programming languages -- like C and C++ -- that have been in use for a number of years, rather than having to learn a new programming language like NVIDIA's CUDA.


1. CUDA is based on C.
2. Computer graphics (today) is mostly done in HLSL and GLSL, which are both also based on C.
3. The above languages are really easy to learn.

?




RE: Programming language...
By pauldovi on 8/4/2008 1:43:10 PM , Rating: 1
Tell that to all those Java nut jobs. I love C, but most new programmers will tell you it is too hard to learn / too low level.

C > Java


RE: Programming language...
By Spectator on 8/4/08, Rating: -1
RE: Programming language...
By Master Kenobi (blog) on 8/4/08, Rating: 0
RE: Programming language...
By HsiKai on 8/4/08, Rating: 0
RE: Programming language...
By Some1ne on 8/5/2008 4:20:53 PM , Rating: 2
quote:
C/C++/ C# for life.


Oh come on now. C# is basically just Microsoft's interpretation of Java. If you like C#, then you really have no grounds for complaining about Java. All the features that people typically complain about for making Java "bloated and unruly" are present in C#, including "strong type checking, array bounds checking, detection of attempts to use uninitialized variables, source code portability, and automatic garbage collection". Hell, C# programs even execute on a VM, just like Java apps. None of that comes free, and just because the language has a 'C' in its name doesn't mean that it somehow magically outperforms other languages that provide the same features.

I suspect you would have loved Java if they had just named it "CJava" instead.


RE: Programming language...
By DanoruX on 8/4/2008 5:58:52 PM , Rating: 1
Indeed. There is little more satisfying than writing fast, efficient C++ code. Java's limitations and inefficiencies make me puke; it's a shame that's most of what college kids go through these days.


RE: Programming language...
By Some1ne on 8/4/2008 6:10:04 PM , Rating: 3
You got your inequality backwards. Very backwards.


By iocedmyself on 8/4/2008 9:31:20 PM , Rating: 4
As I already said in the Larrabee article comment section, whenever Intel has spent years, large sums of money, and extended periods of time talking to the media while developing a new technology, they fall and fall hard.

The Itanium, dubbed the Itanic, was a joint effort between HP and Intel starting in '94, planned to launch in '98. It was going to revolutionize the server market and crush anything and everything it came up against. So certain were they of success that they managed to convince Microsoft to code an Itanium OS. But billions of dollars in development and three years late, the Itanic had total platform sales in the triple digits.

Timna, which was four years in development for the low-power, sub-$600 system segment, was to be the first Intel chip with an IMC... built around Rambus. After being delayed nearly two years and scrapping the Rambus design in favor of an SDRAM IMC, the end result had a fatal design flaw and they killed the project before it launched.

The Pentium 4 and its tasty NetBurst. Intel was adamant that the NetBurst architecture would scale to 10 GHz... it didn't make it past 3.8 GHz, used Rambus RAM, and at 3 GHz it was outperformed by a 1.8 GHz Pentium 3.

The Core 2/Quad chips are 32-bit chips. They may pay AMD to use their x86-64 code, but Intel notoriously slams 64-bit applications and OSes since they can't compete as a 64-bit chip. Intel sees gains in 64-bit apps of 5%-9% over 32-bit, whereas AMD sees gains of 18%-25% in 64-bit over 32-bit.

Intel's top-of-the-line $1000-$1600 chips have performance numbers of 40-51.2 gigaflops (10-12.75 gigaflops/core). An 8-core Larrabee would hit 102 Gflop/s, putting the 48-core at 612 Gflop/s... based on the performance of the $1600 3.2 GHz QX9775.

Intel's 80-core Terascale chip already tells us that in order to hit 1 teraflop/s they had to run 80 cores at 3.2 GHz; to go to 1.8 Tflop/s, they had to run them at 5.1 GHz with 250W+ power consumption.

ATI can surpass 1 teraflop with one 55nm core using around 100W of power.

By the time Intel is ready to launch, AMD will already have released 40nm/32nm dual/quad cores with an on-die GPU. 64-bit OS use has increased to 30% from about 6% in the past 5 months; 64-bit is finally going mainstream and will certainly be the norm by the time Intel finishes Larrabee, and they can't play in that environment.

Intel isn't innovation; their biggest success in the past 10 years is producing the leading 32-bit performance chip, which launched 20 years after the first 32-bit processor. This is Intel marketing hype based on nothing more than the analogy of "if GPUs have 20-40 times the processing power of a CPU, then we'll make a 24-48 core CPU and call it a GPU!"




By Some1ne on 8/5/2008 3:44:15 AM , Rating: 3
quote:
Intel's top-of-the-line $1000-$1600 chips have performance numbers of 40-51.2 gigaflops (10-12.75 gigaflops/core). An 8-core Larrabee would hit 102 Gflop/s, putting the 48-core at 612 Gflop/s... based on the performance of the $1600 3.2 GHz QX9775.

Intel's 80-core Terascale chip already tells us that in order to hit 1 teraflop/s they had to run 80 cores at 3.2 GHz; to go to 1.8 Tflop/s, they had to run them at 5.1 GHz with 250W+ power consumption.


Most of your points are valid, however those two are not. The Larrabee core architecture is completely different from both the Core 2 and the Terascale chip architectures, so it's inaccurate to make a direct comparison between the two.

If you read the article describing the Larrabee architecture, you'll find that each core is capable of sustaining up to 16 FLOPS per clock cycle, and is speculated to be clocked at 2.0 GHz. Given that, Intel can reach a theoretical maximum of 1 TFLOPS using 32 Larrabee cores. 48 cores would give them about 1.5 TFLOPS.


By zsdersw on 8/5/2008 6:25:48 AM , Rating: 1
Actually no.. most of his points are not valid. They're the typical screed of an AMD fanboy.


By iocedmyself on 8/5/2008 8:57:59 AM , Rating: 1
Ah yes, my mistake. I thought I had read the claim of 16 flops/clock but have looked at 4 or 5 articles on Larrabee and couldn't verify it. But really, it's still not very impressive. 32 cores is certainly better than 48, but it still doesn't compare to one.

Up to 16 flops/clock leaves a whole lot of wiggle room; it's like those 1x-16x DVD-R discs: sure, you may have a burner that can write at 20x, but with that type of media you often have to endure speeds of 4x up to a peak of 8x.

Theoretical performance doesn't mean dick; it's the common Intel marketing practice that has oftentimes translated into

"well, this one time while running a specialized application that was highly optimized for our hardware platform we achieved that performance, so it's technically true and looks a lot better than the average performance of about 1/3 the speed"

When they release the baseline perhaps it will be more impressive, but even then it still falls far short of any kind of milestone.

Anything can look great on paper, and once you have the outline in front of you it's quite easy to start creating theoretical configs, clocks and performance numbers. Though I did read that the forthcoming SIGGRAPH paper quotes performance as:

"single-threaded performance of one of Larrabee's cores is roughly half that of a 'Core 2' core, while the overall performance per watt of a Larrabee chip is 20× better than a Core 2 Duo chip"

Which could translate into anything from 2.5-12 Gflop/s per core... which is 80-384 GFlop/s on a 32-core package single-threaded, with power consumption somewhere between 3 and 4 watts per core, which puts the 32-core discrete GPU at 96-128W TDP.

The fact that it won't be based off the Core 2 architecture isn't a strong selling point when that's really the only decent architecture they've launched this century. Especially considering that Larrabee will in fact have x86-64 extensions, which will either be added as a marketing feature that never gets used, or could likely create loads of problems since... it's not Intel's code.

It's always possible it could end up being a huge success, though it's about as likely as the Bible being revised to include evolution.


By vignyan on 8/6/2008 1:34:28 AM , Rating: 2
First, when I started reading your post, I thought you just liked being pessimistic... but I soon realized you were simply negative! :(

While what you said about Intel's past failures seems true, I want to bring up these points...

Itanium 64: What did you expect? That Intel is a supreme power that can see the future and tell that this will be a success? All they did was try to optimize a processor specifically for the server market. Of course it's not going to be x86, and hence it's an underdog in a market where Sun and IBM were quite the kings. But still, it did kick all a**es among existing servers in terms of performance (still not up to Sun and IBM, but that's also expected to go the Intel way!). So hold on to your comments about the Itanium market that you hardly know! :P

Timna, as you mention, was a failure for multiple reasons. The ones you mentioned are the negative side of it. The bigger reason is that it showed you would have to change your processor (a 4-year cycle) to keep up with memory tech progression. Take AMD for example... how long after Intel supported DDR2 did AMD come up with a DDR2-compatible processor? Wait... I think more than 12 months. That's a $40bn hole in Intel's pocket if they had gone with Timna and developed its successors. Again, it's not possible to correctly predict DRAM technology 3-4 years in advance!

The "infamous" Pentium 4: Let's get one thing very, very clear... the P4 was a humongous success. It shipped over 500 million units during its run. And the thing you mention about a P3 at 1.8 GHz beating a P4 at 3.2 GHz, well, that's only a couple of benchmarks... in most of them the P4 was much better than the P3. Well, Intel did admit it was wrong to chase clock speed. FYI, the P4 did run at 10 GHz... well, liquid-nitrogen cooled... check YouTube for those OC videos. As an OC fan, you should have seen this! (I guess!)

The 64 bits: You really are that person who actually thinks Intel is such a huge company that it can control all the software developers in the WORLD??? My god... 64-bit applications are not new to server platforms, nor is a 64-bit processor; Sun has had them since forever... Anyway, it was a marketing gimmick played by AMD and most people fell for it. If my memory isn't rusted, they introduced their 64-bit processor way back in 2003, and they launched with no support from any of the software vendors. Back then, too, AMD said this would be a future investment. Five years on, and 64-bit OSes are only now coming into the limelight in homes and small offices. So basically AMD tricked everyone... And FYI, a 64-bit OS requires a minimum of 4GB of RAM to perform well. Another FYI, Intel still rocks in most of the 64-bit benchmarks... gosh, a lot more to type... but please do your research. I know I might sound like I'm thrashing you and you might want to curse me and all... but you really have to let go and see the fair side of Intel too...

And about the Gflop comparison for Larrabee, someone already corrected you. And the 80-core chip was a very different architecture (yes, Intel has lots of money! :))... It was a concept to prove that vector processing is possible and to test different communication topologies between cores... which will benefit the processors of the future.

And as for your closing statement, well, that's innovation for you... there is a very famous saying: "smart people don't do special things. They do things specially." A 32-bit processor was invented 20 years ago, but no other company could come up with a processor like the C2D!! Doesn't that strike you?? :O... Another FYI, Intel EM64T has been around since AMD64 launched... so almost all mainstream processors from both were capable of handling 64-bit applications.

And don't be so hasty in concluding that they just "put together some 32/48 CPUs" and called it a GPU... A lot still depends on the compiler and on software fixed-function models... This is a good idea (not for you, probably), but Intel does not have to launch two different platforms like AMD/ATI or NVIDIA do (one for graphics and another for high-performance computing)... This will tend to be a one-stop solution for all supercomputing... check out some of the advantages of this before you shoot it down!

Peace man! >:D


George Clinton and Snoop Dog are Engineers?
By Golgatha on 8/4/2008 1:00:06 PM , Rating: 5
"From the information we have, these vector units could exectue atomic 16-wide ops for a single thread of a running program and can handle register swizzling across all 16 exectution units."

Did George Clinton and Snoop Dog get into engineering?

Snoop Dogg: I gots my 16 mo'bounce to the ounce swizzle fo shizzle my nizzle.

George Clinton: Now just add in my atomic dog, Dogg, and we'll go multi-platinum!

Snoop Dogg: Atomic Doggie style swizzling execution units FTW mutha-$%^er!!!

George Clinton: bow wow wow yippie yo yippie yay all the way to the bank Dogg.




RE: George Clinton and Snoop Dog are Engineers?
By xphile on 8/4/2008 7:52:30 PM , Rating: 3
And in the Get Smart sequel...

Agent 86: Larrabee, for God sake get out of that motherboard...

Pheeew watch that one fly over people's heads.


By Brian23 on 8/4/2008 10:21:44 PM , Rating: 2
AWESOME!

I've always thought about Get Smart when I heard the name Larrabee. I'm glad someone else does too.


Why cannot this be a competitor to Fusion?
By subhajit on 8/4/2008 2:12:54 PM , Rating: 2
If I understand correctly, it uses multiple x86-based cores in parallel. So shouldn't it also run an OS like Windows? They could easily reserve a few of the cores for general-purpose use only.




By Master Kenobi (blog) on 8/4/2008 3:10:42 PM , Rating: 2
No. The cores and associated links are arranged to process work the way a graphics processor would. It would be lousy at regular Windows-style processing, a la out-of-order execution. It would however make for an interesting console chip, a la the Xbox 360.


By FITCamaro on 8/4/2008 3:43:52 PM , Rating: 2
Xbox 720 ;)

Would be quite interesting indeed.


By tehfire on 8/4/2008 3:48:29 PM , Rating: 2
Theoretically, you could run any x86 program, such as Windows (not sure if it's x86-64 compatible, however...). As others have mentioned, though, Windows may not run very well, as it is designed neither to be highly threaded nor to run on an in-order chip. Usually one can hide in-order execution by parallelizing the workload, but OSes don't really run too many things in a parallel fashion.


Yay Intel
By FuzionMonkey on 8/4/2008 12:53:26 PM , Rating: 2
I really hope Larrabee turns out to be competitive with nVIDIA and ATI.

Hopefully it will bring GPU prices down.




RE: Yay Intel
By FITCamaro on 8/4/2008 1:29:36 PM , Rating: 2
Honestly I don't think they can really get much cheaper than they already are.


RE: Yay Intel
By tehfire on 8/4/2008 3:44:59 PM , Rating: 2
Once upon a time we were all outraged that the ATi 9800PRO cost $299. Prices can always be lower.

Then again, 3dfx used to crank out ridiculously priced solutions, so it's not exactly a new phenomenon.


What would you expect
By pauldovi on 8/4/08, Rating: 0
RE: What would you expect
By zsdersw on 8/4/2008 1:52:16 PM , Rating: 2
quote:
I don't think it is going to work out too well


.. says an expert, of course.. with a degree from Holy Toledo U.


RE: What would you expect
By UNCjigga on 8/4/2008 2:15:00 PM , Rating: 3
Intel is probably hoping to build on what they learned from the Atom project. They've already proven that they can make smaller x86 cores that are still quite powerful.


dx+ogl
By dome1234 on 8/4/2008 1:08:53 PM , Rating: 2
quote:
Intel says Larrabee will support both the DirectX and OpenGL APIs, and it is encouraging developers to design new, graphics-intensive applications for the architecture.


I read somewhere that there'd be a software renderer layer that maps DX/OpenGL instructions onto the GPU. How comparable real performance will be to NVIDIA/AMD's is very much up in the air.




RE: dx+ogl
By Penti on 8/6/2008 12:16:10 AM , Rating: 2
It's called the driver. Both AMD/ATI and NVIDIA need drivers for talking to the hardware, which doesn't understand DirectX/Direct3D. Hehe.

Seriously, the API is the API; it's the driver that talks to the hardware, and the hardware performs the tasks. It's no different than any other GPU. Graphics apps aren't compiled for the GPU. As long as all the work is done in hardware it will be fine. It's not the talking that would cause any performance drop; it's when the hardware can't do the stuff DX/OGL asks for.


By aretche on 8/7/2008 12:54:32 AM , Rating: 2
Intel has had 45nm since late 2007. TSMC will not have their 40nm tape-outs until March 2009 (I work at one of the big fabless companies on the latest process). That means commercialization in August 2009 at the earliest, roughly 1.5 years behind Intel. Intel will release 32nm in late 2009 as well.




"Well, we didn't have anyone in line that got shot waiting for our system." -- Nintendo of America Vice President Perrin Kaplan

Related Articles













botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki