
Intel says parallel software is more important for many-core CPUs like "Larrabee"

Multi-core processors have been in the consumer market for several years now. However, despite having access to CPUs with two, three, four, or more cores, relatively few applications can actually take advantage of them. Intel is hoping to change that and is urging software developers to think parallel.

James Reinders, Intel's director and chief evangelist for software development products, talked about thinking parallel in a keynote speech he delivered recently at the SD West conference. Reinders said, "One of the phrases I've used in some talks is, it's time for us as software developers to really figure out how to think parallel." He also warned that developers who don't think parallel will see their career options limited.

Reinders gave attendees eight rules for thinking parallel, drawn from a paper he published in 2007, reports Computerworld. The eight rules are: think parallel; program using abstraction; program tasks, not threads; design with the option of turning off concurrency; avoid locks when possible; use tools and libraries designed to help with concurrency; use scalable memory; and design to scale through increased workloads.

He says that after half a decade of shipping multi-core CPUs, Intel is still struggling with how to put the available cores to use. The chipmaker is under increasing pressure from NVIDIA, which is leveraging a network of developers to write parallel applications for its family of GPUs. NVIDIA and Intel are embroiled in a battle over whether the GPU or the CPU will be the heart of future computer systems.

Programming for processors with 16 or 32 cores takes a different approach, according to Reinders. He said, "It's very important to make sure, if at all possible, that your program can run in a single thread with concurrency off. You shouldn't design your program so it has to have parallelism. It makes it much more difficult to debug."

In the speech, Reinders also discussed Intel Parallel Studio, a tool kit for developing parallel applications in C/C++ that is currently in beta. Reinders added, "The idea here [with] this project was to add parallelism support to [Microsoft's] Visual Studio in a big way."

Intel says it plans to offer the parallel development kit to Linux programmers this year or early next year. The many-core CPU Reinders refers to is the Larrabee processor; Intel provided some details on Larrabee in August of 2008.

One of the key features of Larrabee is that it will be the heart of a line of discrete graphics cards, a market Intel has not previously participated in. Larrabee is said to contain ten or more cores inside the discrete package. If Larrabee arrives in the form Intel described last year, it will compete directly against NVIDIA and ATI in the discrete graphics market.

NVIDIA is also rumored to be eyeing an entry into the x86 market. Larrabee will be programmable in C/C++, just as NVIDIA's GPUs are via the firm's CUDA architecture.

Comments

RE: What am I missing here?
By ncage on 3/11/2009 7:38:19 PM , Rating: 3
You're confusing async calls and parallelizing code, which are totally different things. You're talking about giving UI control back to the user rather than locking the UI up while they are doing something...say for example a long database operation. This is done with an async callback delegate so that the blocking statement (the database call in this case) will notify you when it's finished. You're NOT increasing the efficiency or speed of whatever you're trying to do. If you executed the same code synchronously (on only one thread) then the UI would lock up, but it wouldn't execute any faster.

Parallelizing code is another animal in itself. Say for example you return 100 records from the database and you need to do the same operation on each of the 100 records, and none of the operations are dependent on any of the others: then you could create, say, 4 threads and process the records in 1/4 the time (this is a perfect-case scenario and nothing is ever perfect). Parallelizing code is HARD. It's not even close to easy. In the above case it was easy because there were no dependencies and no row among the 100 depended on another row. In real life this usually doesn't happen. There are dependencies. There are locks...whether it be file locks, global variables, database locks...etc etc. This is why you get bugs that will only happen once in 1,000,000,000 runs through the code and are almost impossible to find.

Threads are EASY to create in .Net, but parallelizing code safely and correctly is nowhere near easy. .Net is not thread safe and it never will be unless you want to go back in time and recreate the apartment threading VB6 had. Microsoft is doing research to try to make some parallelizing easier (PLINQ (Parallel Language Integrated Query)), but currently it's hard, and like another poster commented...there is a lot of code that is serial in nature and can't be parallelized. Only small sections will lend themselves to being parallelized. If Intel wants people to use all the cores then they will have to produce better compilers/tools to help developers with this.

RE: What am I missing here?
By Dribble on 3/12/2009 5:51:09 AM , Rating: 2
Not strictly true - parallelizing so your UI runs in one thread and various other bits run in other threads is pretty easy, as long as you have some libraries that let you do the communication and access data in a thread-safe and efficient way.

I find the biggest limitation is memory - a lot of operations effectively require you to go up and down through memory looking for stuff and then producing a list of results. It's not particularly CPU intensive, but it does require a lot of memory access (e.g. rendering, once you've done the cull, basically involves looking at your data set and passing a bunch of triangles to the GPU - that is memory intensive more than CPU intensive).

Hence it's not worth trying to multi-thread many of those operations, as CPU processing speed is not the limitation. Equally, moving work to the GPU won't help - it'll just make things slower, since you have to push all the data into GPU memory before you can do the operation.

