



Microsoft's experimental Barrelfish operating system aims to bring improved multicore performance to Microsoft's operating systems.  (Source: Network World)

"Barrelfish hackers and hangers-on, Zurich, August 2009 "  (Source: Microsoft/ETH Zurich)
Microsoft tests out multi-core improvements that will eventually be rolled into Windows

Microsoft has long cooked up new and experimental operating systems whose features eventually get rolled into its central Windows offerings.  Most recently it has been dabbling with Singularity, an experimental OS designed for increased reliability, with its kernel, device drivers, and applications written in managed Sing# code (an extension of C#).  Another test OS is Midori (not to be confused with the web browser), an OS that sandboxes applications for security and is designed for running concurrent applications, a feature geared toward cloud computing schemes.

Other recent efforts include its Windows Azure OS, a cloud computing OS currently offered for free to developers.

Now Microsoft has unveiled another new OS prototype, codenamed "Barrelfish".  Barrelfish is an OS optimized to run on multi-core machines: it uses message passing and a database-like system to move information between cores.  Operating systems typically use shared-memory schemes instead, which become inefficient when resource demands are high.
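
For readers unfamiliar with the distinction, here is a rough user-space analogy in C with POSIX threads. It is only a sketch of the message-passing idea, not Barrelfish's actual inter-core mechanism: one thread owns its data outright, and other threads request changes through a channel rather than locking a shared structure.

    /* Sketch only: a "server" thread owns the data, other threads ask for
     * changes by sending messages, so the data itself is never shared or
     * locked by the clients. A user-space analogy, not Barrelfish's design. */
    #include <pthread.h>
    #include <stdio.h>

    #define QSIZE 16

    struct msg { int delta; };

    struct channel {
        struct msg buf[QSIZE];
        int head, tail, count;
        pthread_mutex_t lock;
        pthread_cond_t not_empty, not_full;
    } ch = { .lock = PTHREAD_MUTEX_INITIALIZER,
             .not_empty = PTHREAD_COND_INITIALIZER,
             .not_full = PTHREAD_COND_INITIALIZER };

    void channel_send(struct msg m) {
        pthread_mutex_lock(&ch.lock);
        while (ch.count == QSIZE)
            pthread_cond_wait(&ch.not_full, &ch.lock);
        ch.buf[ch.tail] = m;
        ch.tail = (ch.tail + 1) % QSIZE;
        ch.count++;
        pthread_cond_signal(&ch.not_empty);  /* message arrival wakes the server */
        pthread_mutex_unlock(&ch.lock);
    }

    struct msg channel_recv(void) {
        pthread_mutex_lock(&ch.lock);
        while (ch.count == 0)
            pthread_cond_wait(&ch.not_empty, &ch.lock);
        struct msg m = ch.buf[ch.head];
        ch.head = (ch.head + 1) % QSIZE;
        ch.count--;
        pthread_cond_signal(&ch.not_full);
        pthread_mutex_unlock(&ch.lock);
        return m;
    }

    void *server(void *arg) {
        long total = 0;                 /* owned by this thread alone */
        for (int i = 0; i < 40; i++)
            total += channel_recv().delta;
        printf("server total: %ld\n", total);
        return NULL;
    }

    void *client(void *arg) {
        for (int i = 0; i < 10; i++)
            channel_send((struct msg){ .delta = 1 });
        return NULL;
    }

    int main(void) {
        pthread_t s, c[4];
        pthread_create(&s, NULL, server, NULL);
        for (int i = 0; i < 4; i++) pthread_create(&c[i], NULL, client, NULL);
        for (int i = 0; i < 4; i++) pthread_join(c[i], NULL);
        pthread_join(s, NULL);
        return 0;
    }

The channel here is itself built on a lock, which a real inter-core implementation would avoid; the point of the pattern is that only the channel is shared, so it can be specialized for whatever interconnect sits between the cores.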

The new OS was jointly created by ETH Zurich, a Swiss technical university, and Microsoft Research, located in Cambridge, UK. 

Interestingly, it uses some open source third-party BSD libraries, which are "covered by various BSD-like open source licenses."  This has led to speculation that the new OS may be free and open source, not terms you would typically associate with Microsoft.

According to developers who have attended conferences on the new OS, it reportedly brings some of the Midori/Singularity sandboxing protections onboard.  Additionally, applications reportedly have an alternate route for accessing information from devices like graphics or sound cards: a great deal of device information is stored in a central database that can be queried.

Writes developer "AudriUSA", "... instead of fully isolating program from device via driver, Barrelfish has a kind of database where lots of low level information about the hardware can be found. The kernel is single threaded and non preemptive. Scheduling is coupled with the message passing, an arrival of the message simply activates the waiting thread. It also uses a little bit of the microkernel concepts, running drivers in protected space, like L4 and in general pushing a lot into application domains."

As Intel and AMD expand their 4-, 6-, and 8-core lineups and approach even higher core counts, using those resources efficiently will be a crucial operating system responsibility.  It will be exciting to see what kind of improvements Microsoft can accomplish with Barrelfish, as successful ideas may well be rolled into successors to Windows 7.



Comments



OS vs Applications?
By sparkuss on 9/28/2009 10:24:25 AM , Rating: 3
Okay, total noob on this, but does this allow the OS to do anything that an un-threaded application can't or won't do?

Or is this allowing the OS to run applications faster even if the application isn't coded for multi-core?

I keep reading how few applications use multi-core now and that we are wasting processing power; does this answer it from the OS side?




RE: OS vs Applications?
By namechamps on 9/28/2009 10:28:13 AM , Rating: 5
A single threaded application can never by itself benefit from multiple cores.

However, even if you run nothing but single threaded apps, you are likely running two or more of them, right?

That is where an OS optimized for multi-core can shine.

So in a situation where you are running a single application, say a game, with no other apps running and no other user services in the background, this wouldn't do much.

However, a situation where you have multiple apps running at the same time can be improved. Even Windows 7 is rather weak when it comes to efficiently using 2+ processors and 3+ threads.


RE: OS vs Applications?
By MrPoletski on 9/28/2009 11:05:09 AM , Rating: 3
Any Windows process can be assigned to a particular core, and there are many running regardless of what you are doing. Most don't use much processor time at all, but some do.

You could have a single threaded game, but it makes calls to graphics drivers, sound drivers, perhaps a physics runtime and a disk I/O handler. All those things can run on a different core from the main game. However, this might not translate into a tangible gain from adding extra cores, because much of the work is still done within the game engine; pipelining operations between all these subsystems had not been thought of when the game was coded.

To make what I just said make sense: a 'multicore ready' app is no good if each individual thread has to wait for the previous one to finish before it can start. Yeah, it'll use many cores, but it won't use them concurrently.
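
To make that concrete, here is a minimal C sketch of the anti-pattern; the stage names are hypothetical. Spawning threads but joining each one immediately serializes the work, while launching them together actually uses the cores concurrently.

    /* Sketch of "multithreaded" code that serializes itself. Each stage
     * runs on its own thread, but the caller immediately waits for it,
     * so only one core is ever busy at a time. */
    #include <pthread.h>

    void *render(void *a)    { /* ... draw frame ... */ return NULL; }
    void *mix_audio(void *a) { /* ... fill sound buffer ... */ return NULL; }

    void game_frame_serialized(void) {
        pthread_t t;
        pthread_create(&t, NULL, render, NULL);
        pthread_join(t, NULL);          /* wait: nothing else runs meanwhile */
        pthread_create(&t, NULL, mix_audio, NULL);
        pthread_join(t, NULL);          /* again: one core busy, others idle */
    }

    void game_frame_concurrent(void) {
        pthread_t r, a;
        pthread_create(&r, NULL, render, NULL);
        pthread_create(&a, NULL, mix_audio, NULL);  /* both stages in flight */
        pthread_join(r, NULL);
        pthread_join(a, NULL);          /* join only after both are running */
    }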


RE: OS vs Applications?
By omnicronx on 9/28/2009 11:45:33 AM , Rating: 2
quote:
To make what I just said make sense: a 'multicore ready' app is no good if each individual thread has to wait for the previous one to finish before it can start. Yeah, it'll use many cores, but it won't use them concurrently
Asynchronous programming and multithreaded apps kind of go hand in hand these days. Anyone coding the way you mention is not a good programmer. (IMO it's not even a multicore 'ready' app in the first place, as opening another thread and having the first thread do nothing makes absolutely no sense.)


RE: OS vs Applications?
By Murst on 9/29/2009 11:14:04 AM , Rating: 2
quote:
as opening another thread to have the first thread do nothing makes absolutely no sense

In even a pretty simple multi-threaded app, you generally need to create separate threads for security and thread safety. In many of these instances, it is very common and necessary to have the calling thread wait until the child thread has completed.


RE: OS vs Applications?
By erple2 on 9/29/2009 2:45:23 PM , Rating: 2
quote:
You could have a single threaded game, but it makes calls to graphics drivers, sound drivers, perhaps a physics runtime and a disk I/O handler.


A multithreaded application will send those requests off to the graphics driver, sound driver, physics "driver", and I/O handler and move on to the next thing while a thread handles them. A non-multithreaded application won't do that: it'll stall until the graphics driver returns status, the sound driver returns status, the physics runtime returns status/data, and the I/O system returns status/data.
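
A small C sketch of that "send the request off and keep going" shape; the asset loader and frame loop are invented for illustration, with sleep() standing in for slow device I/O.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    atomic_bool asset_ready = false;

    void *load_asset(void *arg) {
        sleep(1);                        /* stand-in for slow disk I/O */
        atomic_store(&asset_ready, true);
        return NULL;
    }

    int main(void) {
        pthread_t loader;
        pthread_create(&loader, NULL, load_asset, NULL);

        while (!atomic_load(&asset_ready)) {
            /* game logic keeps running instead of stalling on the disk */
            printf("simulating a frame...\n");
            usleep(100 * 1000);
        }
        pthread_join(loader, NULL);
        printf("asset loaded, switching scenes\n");
        return 0;
    }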


Questionable
By rs1 on 9/28/2009 3:06:26 PM , Rating: 1
quote:
it uses message passing and a database-like system to move information between cores. Operating systems typically use shared-memory schemes instead, which become inefficient when resource demands are high.


Message passing and a *database* are more efficient than a shared memory block? I think that's very unlikely.




RE: Questionable
By wetwareinterface on 9/28/2009 6:34:58 PM , Rating: 4
quote:
Message passing and a *database* are more efficient than a shared memory block? I think that's very unlikely.


Having one lump of memory that gets allocated out on a first-come, first-served basis is not efficient. Say you run app 1, app 2, then app 3. Kill app 2 and now you have a chunk of memory sitting unallocated between app 1's space and app 3's space. Run app 4, and if it's larger than that gap, it ends up partly in the leftover space and partly elsewhere.

How is this inefficient, you ask, when it's just memory registers? It works fine, and the performance overhead is trivial, when you are talking about a non-threaded kernel, non-threaded apps, and only a few apps and processes. But picture 100 services and 100 processes, 10 web browser windows each with its own memory space, six other apps open, not to mention the loaded drivers and their address spaces, the clipboard with data in it, etc.

Managing all of that becomes a mess on one core. Now imagine the extra issues and overhead of maintaining all that address space when each core is trying to access separate memory spaces for the several app threads running on it, and you have several cores.

A simple database of which core currently has access to what memory area, and what data is allocated to what memory space, makes it possible for a kernel to become far more threaded itself. A more threaded kernel under heavy multiprocessing loads means more efficient use of memory space and cores.

In simple terms, a database-like memory manager means a smaller, more nimble kernel that doesn't have to keep track of everything internally and can therefore be more freely threaded, as can other heavily threaded apps; core usage can be more evenly distributed because of it, making the whole thing more efficient.


RE: Questionable
By Shining Arcanine on 9/29/2009 12:13:01 AM , Rating: 2
I thought that the article said that the kernel was single threaded.


RE: Questionable
By wetwareinterface on 9/29/2009 7:33:02 PM , Rating: 3
Yes, the kernel is single threaded in this instance. In the Windows 7 successor this gets applied to, it will be multithreaded.


RE: Questionable
By SublimeSimplicity on 9/29/2009 2:38:27 PM , Rating: 2
Memory isn't allocated linearly like that. A 1 MB buffer is made up of hundreds of blocks of memory spread throughout the physical address space on the memory chips. The MMU stitches these little blocks together so that they look linear to the CPU (and programmer). Memory fragmentation like you describe hasn't been an issue in computers for 15+ years.

Now, on a 32-bit OS, fragmentation of virtual address space has started to become an issue, but 64-bit OSs allow so much virtual address space that this is no longer a problem.


RE: Questionable
By wetwareinterface on 9/29/2009 7:45:36 PM , Rating: 2
But memory buffers are linearly allocated if at all possible. Kernels don't track where an address is physically, but logically they do. Memory addresses in ascending order are blocked together wherever possible, because it's easier to protect memory allocation that way. You protect your memory in kernel space and do not allow anything but the kernel to modify it. You watch for rogue access of protected memory from outside the kernel and shut it down by locking memory addresses. You also have your kernel grant user-space apps permission to memory, and those sit in unprotected memory. How do you do that if the memory addresses are willy-nilly, logically speaking?

Memory fragmentation isn't the issue; the kernel having to be massive and bloated just to keep track of memory is the issue. If you instead have a central store of data about what's what, the kernel only has to track kernel memory and watch the database for rogue patterns; apps in user space can manage their own accesses, leading to a smaller, lighter, faster kernel that can be more easily threaded.

When an app is sitting idle it is using memory all the same, and when an app sits on a separate core from the kernel, the kernel has to be far bigger to manage memory, unless you have a separate means to monitor memory usage so the kernel only has to worry about its own kernel space.


RE: Questionable
By SublimeSimplicity on 9/29/2009 2:14:08 PM , Rating: 2
The problem being solved here is contention over resources. Even if you have thousands of threads or processes running independently of each other, doing seemingly discrete operations, at some point they run into contention over a resource. Maybe that's the graphics buffer, the network ring buffer, HDD DMA buffers, whatever. At the end of the day all these things need to access a limited number of HW resources; otherwise, what are they accomplishing?

When they do hit those contentions, they need to line up in a nice single-file line, and your many cores become useless.

By going to a transaction based model with a single threaded kernel determining the sequence of the transactions, you eliminate these synchronization points and allow the threads to continue to work independently and concurrently.

So you trade off memory bandwidth efficiency (write / read / write to create and process the transactions) for core efficiency (more concurrent operations). The more cores you have and threads that can use them concurrently, the more appealing this trade off becomes.

This is the same reason that SQL databases eventually overtook many apps sharing a flat file to store and retrieve data.
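
The "single-file line" is easy to reproduce. In this sketch, hw_register is a hypothetical stand-in for any contended hardware resource: every thread must take the same lock, so adding cores adds waiting rather than throughput. The message-channel sketch near the top of the article shows the queue half of the trade-off described above, with a single consumer sequencing the requests.

    #include <pthread.h>
    #include <stdio.h>

    pthread_mutex_t hw_lock = PTHREAD_MUTEX_INITIALIZER;
    long hw_register;               /* stand-in for one contended device buffer */

    void *worker(void *arg) {
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&hw_lock);   /* the single-file line forms here */
            hw_register++;
            pthread_mutex_unlock(&hw_lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
        /* correct total, but the increments were fully serialized */
        printf("hw_register = %ld\n", hw_register);
        return 0;
    }

In the transaction model, the workers would instead append a record to a per-core queue and a single dispatcher thread would apply the records in order, trading extra memory traffic for the removal of this lock.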


Nit Pick
By Qur371 on 9/29/2009 8:51:34 AM , Rating: 2
It's Cambridge UK, not Cambridge Mass (Different country, different continent :-) )




RE: Nit Pick
By Lifted on 9/29/2009 5:18:52 PM , Rating: 2
No, it's actually Cambridge, Massachusetts.


RE: Nit Pick
By Behlal on 9/29/2009 7:59:22 PM , Rating: 2
Microsoft has research centers in both Cambridge, UK and in Cambridge, Massachusetts. However, the person you responded to was correct, this research was performed in Cambridge, UK. The main web page has direct links for this.


RE: Nit Pick
By afkrotch on 9/30/2009 5:49:23 AM , Rating: 2
Microsoft Research Cambridge, is located in Cambridge, UK.
Microsoft Research New England, is located in Cambridge, Massachusetts.


why not span across multiple machines..
By sapi3n on 9/28/2009 11:26:00 AM , Rating: 2
Take this idea further, to a supercomputer-type setup with multiple machines grouped into one desktop. The render farm should by now be obsolete.




By ussfletcher on 9/28/2009 12:28:24 PM , Rating: 1
Cloud computing?


By noxipoo on 9/28/2009 3:47:52 PM , Rating: 2
it would cut into Windows HPC profits.


Not a novel concept
By gstrickler on 9/28/2009 2:41:05 PM , Rating: 5
Mach explored this idea over 20 years ago. Mach had two key features: a "microkernel", and IPC message passing to communicate between threads and processes.

Short history: the message passing introduced too much latency to be usable in the microkernel envisioned by Mach. However, later hybrid kernels kept the IPC but used a small monolithic kernel and isolated many of the higher-level functions into separate modules. It's not the Mach microkernel, but it's not a monolithic kernel either; it's a hybrid that gives the advantages of using IPC with better performance. Many of those changes were rolled back into the BSD source. This is the basis of Mac OS X.

http://en.wikipedia.org/wiki/Mach_(kernel)

"The Mach virtual memory management system was also adopted by the BSD developers at CSRG, and appears in modern BSD-derived UNIX systems, such as FreeBSD. Neither Mac OS X nor FreeBSD maintain the microkernel structure pioneered in Mach, although Mac OS X continues to offer microkernel Inter-Process Communication and control primitives for use directly by applications."

"The lead developer on the Mach project, Richard Rashid, has been working at Microsoft since 1991 in various top-level positions revolving around the Microsoft Research division. Another of the original Mach developers, Avie Tevanian, was formerly head of software at NeXT, then Chief Software Technology Officer at Apple Computer until March 2006."




By thomasxstewart on 9/29/2009 12:12:08 AM , Rating: 2
Here's an interesting link on the minimums to build an 80-core CPU from Intel; it's an old article from AnandTech.

http://www.anandtech.com/cpuchipsets/showdoc.aspx?...

Drashek




Editing?
By gfxBill on 9/30/2009 8:58:41 PM , Rating: 2
Not normally one to complain, and I realise this is a blog, but the lack of proofing on this article is just atrocious. Almost reads like it was translated in places.




Come on Mick
By dragunover on 9/28/2009 3:48:40 PM , Rating: 1
Haven't you thought of revising your articles before submitting nonsense like this?




Shooting Fish in a Barrel
By Spookster on 9/28/2009 5:42:06 PM , Rating: 1
And now we can say finding security holes is like shooting fish in a barrel.




A little late...
By amanojaku on 9/28/09, Rating: -1
RE: A little late...
By Digimonkey on 9/28/2009 10:12:05 AM , Rating: 3
Windows Vista and 7 already take advantage of more than one core. They just don't do it in the most efficient way, which is what Microsoft is trying to improve on with this new OS.


RE: A little late...
By amanojaku on 9/28/09, Rating: -1
RE: A little late...
By Flunk on 9/28/2009 10:53:33 AM , Rating: 2
It's a lot easier to multithread server loads. Even before multicore processors, one thread per connection was already the standard. The lack of scaling across multiple CPUs is due more to third-party developers than to Windows' multicore support. The current method for multithreading in Windows is fine for current systems, but once we have systems with more than 24 cores (or half that with hyperthreading) it starts to become untenable for all but embarrassingly parallel operations (like media encoding). Also, the articles you linked to are not relevant to this discussion, and outdated.
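
For reference, the "one thread per connection" shape mentioned above looks roughly like this in C: a minimal echo server, with an arbitrary port number and error handling trimmed for brevity. It is simple and fine for dozens of clients, which is why it became the standard, but each connection costs a whole thread.

    #include <pthread.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    void *handle_client(void *arg) {
        int fd = (int)(long)arg;
        char buf[512];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            write(fd, buf, n);            /* echo back whatever arrives */
        close(fd);
        return NULL;
    }

    int main(void) {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(9000),
                                    .sin_addr.s_addr = htonl(INADDR_ANY) };
        bind(srv, (struct sockaddr *)&addr, sizeof addr);
        listen(srv, 64);
        for (;;) {
            int client = accept(srv, NULL, NULL);
            pthread_t t;                  /* one whole thread per connection */
            pthread_create(&t, NULL, handle_client, (void *)(long)client);
            pthread_detach(t);            /* no join; thread dies with client */
        }
    }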


RE: A little late...
By amanojaku on 9/28/09, Rating: -1
RE: A little late...
By omnicronx on 9/28/2009 12:50:24 PM , Rating: 2
Buddy, Vista/7/2008 do not all share one kernel: Server 2008 shares its codebase/kernel with Vista, and Server 2008 R2 will share its codebase/kernel with 7. The fact remains that the server and workstation releases are based on the same kernel in both cases, so I don't know what you are smoking to think that multithreaded apps are handled vastly differently, aside from the obvious optimizations for background apps in the server editions and foreground apps in the desktop editions. Now, I'm not saying there are no other optimizations, but I really doubt there are vast differences at the kernel level.

Also, please stop reading Wikipedia; Server 08 R2 and Windows 7 share more than the 'base code'.

As for your XP statements: they are irrelevant unless you can find a link between these 'inherent inefficiencies' and the handling of multiple threads in Vista/7. You cannot just say that because a multithreaded app runs better on XP than Vista, the way multiple threads are handled is the culprit (this was also the case with many single-threaded apps). In fact, in the few benchmarks you will see on the subject, Vista actually has lower latency in terms of inter-core efficiency, which is the main reason I call foul on your remarks.


RE: A little late...
By nichow on 9/28/2009 12:50:59 PM , Rating: 3
Actually, the methodology used in the article you reference isn't very good. Some of the more obvious problems:

1. The author never specifies how the machines are staged (at least that I could find). He only indicates that they were dual- and quad-core machines. For a valid comparison, each OS tested would need to be installed on the same or identical machines to compare apples to apples. Even matching CPUs in two different machines is not sufficient, because the rest of the machine config can change perf test results dramatically.

2. He only mentions tracking the wall-clock time for test completion and then draws conclusions without supporting evidence. He would need to at least capture CPU, disk, and RAM metrics to determine why the test results differed. That would allow a more conclusive determination of the bottleneck: it may be RAM or storage, not CPU core scaling. Drawing a conclusion without appropriate data isn't very accurate.

3. The selected workload is suspect. Using the Visual Studio developer version of SQL Server is a bad workload: it is designed only for development and testing purposes, not as a high-performance SQL server that will utilize multiple cores effectively. A real SQL server should be used. As for the message store or media center workloads, they would have to be analyzed to determine whether their behavior tests CPU scaling, but without collecting any of the metrics I mention above, that determination would be difficult.

I don't know how valid the article's conclusions are, but without more detailed metrics, not even the author can know.


RE: A little late...
By lotharamious on 9/28/2009 10:56:02 AM , Rating: 3
You state these things on a completely unfounded basis.
quote:
The Windows Server line uses multiple CPUs and cores more efficiently than the Windows desktop line, so Barrelfish is more of a desktop improvement than a server improvement.

What are you talking about? Windows Server and the consumer Windows desktop have used the exact same kernel since Vista/Server 2008. It's the uses of Windows Server and the Windows desktop that are completely different. Server workloads are generally parallelizable, since most data access is independent. The consumer desktop, on the other hand, generally runs many single-threaded applications that cannot be parallelized, which leaves multi-core processors a bit underutilized.

Barrelfish is simply a different paradigm for an operating system, similar to how other experimental Microsoft operating systems (like Midori) are different from the core Windows OS. Microsoft is trying to expand all of Computer Science by developing new ways of thinking about computers and how they interact with electronics and people. Ask any computer scientist or computer engineer: this is incredibly exciting stuff from a theoretical standpoint.


RE: A little late...
By inighthawki on 9/28/09, Rating: 0
RE: A little late...
By MrPoletski on 9/28/2009 11:11:54 AM , Rating: 2
Hmm, I feel slightly rude for pointing out that the OS is responsible for scheduling all of the tasks you have running and deciding which processor is 'free enough' to handle a new, difficult-to-quantify workload that might rack it up to 100% usage while running at half speed, or leave it with regular bouts of no-ops because another process it depends on is running on an overworked processor.
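
For what it's worth, applications can also override the scheduler's placement decisions. A Linux-specific sketch using sched_setaffinity to pin the calling thread to one core (Windows offers SetThreadAffinityMask for the same purpose):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                   /* allow core 0 only */
        if (sched_setaffinity(0, sizeof set, &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("now restricted to core 0\n");
        return 0;
    }

Pinning is usually a workaround rather than a fix; the scheduler sees the whole system, and hand-placed threads fight it rather than help it.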


RE: A little late...
By inighthawki on 9/28/2009 2:57:35 PM , Rating: 1
Yes, I understand that, but while the OS doesn't do a perfect job, that still doesn't change the fact that these OSes ARE multi-core compatible, and more cores WILL show you a vast improvement when multitasking. This article is simply explaining that they're improving the efficiency of multi-core CPUs, not saying that multi-core doesn't work today.


RE: A little late...
By mattclary on 9/28/2009 11:33:21 AM , Rating: 1
In 10 years (maybe less), you will be seeing a lot more than 4 cores.


RE: A little late...
By FITCamaro on 9/28/2009 11:45:34 AM , Rating: 2
In 2 years you'll be seeing more than 4 cores. 8-core CPUs are on the horizon. By this time next year they'll probably be talking about 12 or 16 cores.


RE: A little late...
By ElderTech on 9/28/2009 12:39:22 PM , Rating: 3
Quote from amanojaku:
"7 is a LOT faster but it's not available yet, so performance tests are premature at this point."

There are several performance tests of Win7 out there, particularly comparing it to both Vista and XP. Unfortunately, across a variety of results, it's evident it's NOT faster than XP in many if not most applications, although it is measurably faster than Vista. Here's one recent example of a comparison:

http://www.testfreaks.com/blog/information/windows...

From this and other comparisons, it seems Win7 is close to the performance of XP, and does offer a variety of advantages, including excellent 64-bit and SSD support, as well as improved security. The latter, of course, assumes you have applied the Microsoft "Fix it" tool to the RC version to disable the potentially exploitable SMB2 code. This isn't an issue in the final release versions, according to MS.


RE: A little late...
By StevoLincolnite on 9/28/2009 1:02:30 PM , Rating: 3
Most people are aware of the benchmarks circulating the Internet.

However the "Feel" of the operating system seems to be much quicker than that of XP, like... Start-up time, or Clicking on the Start Menu, or Opening "My Computer". - They all feel much snappier.

Then you have the drivers side of things... Games haven't yet started supporting Direct X 11, and drivers from companies like ATI/nVidia/Intel/Creative/Via etc' - Still aren't fully mature, so performance in some aspects can be a bit of a hit or miss, give it a few months after release and look at benchmarks of Windows 7 then.

It might not be as fast as XP in some benchmarks, but that would be like comparing Windows 98 to Windows XP, where Windows 98 was clearly superior in performance/compatibility for a very long time.


RE: A little late...
By omnicronx on 9/28/2009 1:15:09 PM , Rating: 2
quote:
Then you have the driver side of things.
Not only video drivers but system drivers too. The benchmarks he posted used Nvidia hardware with drivers released within days of the RC (i.e., early May). Comparing barely post-RC drivers with XP drivers that have had eight years of refinement is hardly a comparison, regardless of whether the 7 build was RTM or not. I'm not saying the benchmarks will change, as it has become apparent that XP does perform better in certain situations; what I am saying is that people may want to wait until we see full release drivers before coming to conclusions. 680i hardware is not exactly new either, so it's not going to be a main focus for Nvidia in the first place. Optimizations will be made for newer hardware before (if ever) they trickle down to the old.


RE: A little late...
By ElderTech on 9/28/2009 2:17:17 PM , Rating: 2
Granted, the driver issue is valid, as there is likely much more support coming in this area. However, even with improved drivers, there are definite advantages to XP over Win7, depending on the specifics of your applications and usage. To that effect, here's another recent comparison of mobile computing platforms from AnandTech, utilizing two different processors in otherwise identical laptops:

http://www.anandtech.com/mobile/showdoc.aspx?i=364...

Although it's obvious the battery results favor Win7, there are a number of other results that clearly favor XP, particularly in the detailed PCMark05 breakdown results, and the OS Benchmarks.

That said, Win7 has specific advantages that are relevant to my usage, including 64-bit and SSD support, particularly with the upcoming TRIM feature. But for others, particularly more casual users like students who need mobility with quick and easy OS access, XP is still the de facto choice, and will probably remain so for some time. That's why many colleges and universities, even the more tech-savvy ones, provide official support only for XP, and not even Vista.

An example is Carnegie Mellon University, where campus-wide computer tech support conforms to the above. That's a school with a recently completed computer science building almost entirely funded by the Gates family. That may speak to the difficulties with the Vista implementation; hopefully with Win7 it will be a different story.


RE: A little late...
By omnicronx on 9/28/2009 3:35:43 PM , Rating: 2
I don't mean to bash PCMark, but unless I see Vantage results I'm not going to take these results seriously. Manufacturers are not idiots; they know how to get the most out of the mainstream benchmark apps, and with PCMark05 they've had a heck of a long time to do so.

Truth be told, I don't trust it one bit: there were tests a year or two back that changed the CPUID of a VIA Nano to Intel/AMD and found discrepancies in the results where there should not have been any.

Now, I am not saying these results are untrue; I am just saying I will not trust any result until 7 is released and release drivers are used.


"There is a single light of science, and to brighten it anywhere is to brighten it everywhere." -- Isaac Asimov














botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki