



Microsoft's Barrelfish operating system is an experimental OS looking to bring improved multicore performance to Microsoft's OSes  (Source: Network World)

"Barrelfish hackers and hangers-on, Zurich, August 2009 "  (Source: Microsoft/ETH Zurich)
Microsoft tests out multi-core improvements that will eventually be rolled into Windows

Microsoft has long cooked up new and experimental operating systems whose features eventually get rolled into its central Windows offerings.  Most recently it has been dabbling with Singularity, an experimental OS designed for increased reliability thanks to its kernel, device drivers, and applications being written in managed Sing# (an extension of C#) code.  Another test OS is Midori (not to be confused with the web browser), which sandboxes applications for security and is designed for running concurrent applications, a feature geared toward cloud computing.

Other recent efforts include its Windows Azure OS, a cloud computing OS currently offered for free to developers.

Now Microsoft has unveiled another new OS prototype, codenamed "Barrelfish".  Barrelfish is an OS optimized to run on multi-core machines.  Namely, Barrelfish uses message passing and a database-like system to pass information between cores.  OSes typically use shared-memory schemes, which become very inefficient when resource demands are high.
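The contrast between the two schemes can be sketched in a few lines. This is a toy illustration only, not Barrelfish code: instead of many cores locking one shared counter, a single owner "core" holds the state privately and others send it messages over a queue.

```python
import threading
import queue

# Message-passing sketch: the server thread below stands in for a core that
# owns its state; other code sends it requests instead of locking shared memory.

def core_server(inbox: queue.Queue, replies: queue.Queue) -> None:
    """A 'core' that owns a private counter and serves increment requests."""
    counter = 0
    while True:
        msg = inbox.get()
        if msg == "stop":
            replies.put(counter)   # report the final value and exit
            return
        counter += msg             # only this thread ever touches `counter`

inbox: queue.Queue = queue.Queue()
replies: queue.Queue = queue.Queue()
server = threading.Thread(target=core_server, args=(inbox, replies))
server.start()

for _ in range(1000):
    inbox.put(1)                   # send a message rather than taking a lock
inbox.put("stop")
server.join()
result = replies.get()
print(result)                      # 1000
```

Because no two threads ever touch `counter` at once, no lock is needed around it; the queue serializes access, which is the essence of the message-passing approach.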

The new OS was jointly created by ETH Zurich, a technology university in Switzerland, and Microsoft Research's lab in Cambridge, England.

Interestingly, it uses some open source BSD third-party libraries, which are "covered by various BSD-like open source licenses."  This has led to speculation that the new OS may be free and open source, terms you would not typically associate with Microsoft.

According to developers who have attended conferences on the new OS, it reportedly brings some of the Midori/Singularity sandboxing protections onboard.  Additionally, applications reportedly have an alternate route for accessing information from devices like graphics or sound cards.  A great deal of device information is reportedly stored in a central database that can be queried.

Writes developer "AudriUSA", "... instead of fully isolating program from device via driver, Barrelfish has a kind of database where lots of low level information about the hardware can be found. The kernel is single threaded and non preemptive. Scheduling is coupled with the message passing, an arrival of the message simply activates the waiting thread. It also uses a little bit of the microkernel concepts, running drivers in protected space, like L4 and in general pushing a lot into application domains."
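The queryable hardware database the quote describes can be sketched as a tiny key/value fact store. Everything below is invented for illustration (the fact entries, field names, and `query` helper are not Barrelfish's actual interface; Barrelfish's real "system knowledge base" is a Prolog-style constraint store):

```python
# Hypothetical hardware facts a kernel might publish for apps and drivers
# to query, instead of each driver probing the hardware itself.
facts = [
    {"type": "core", "id": 0, "arch": "x86_64", "numa_node": 0},
    {"type": "core", "id": 1, "arch": "x86_64", "numa_node": 1},
    {"type": "device", "name": "nic0", "class": "network", "irq": 11},
    {"type": "device", "name": "audio0", "class": "audio", "irq": 5},
]

def query(criteria):
    """Return every fact matching all key/value pairs in `criteria`."""
    return [f for f in facts if all(f.get(k) == v for k, v in criteria.items())]

# A driver or app consults the database rather than the device directly:
nics = query({"type": "device", "class": "network"})
print(nics[0]["irq"])                    # 11
print(len(query({"type": "core"})))      # 2
```

The point of the indirection is that low-level hardware knowledge lives in one queryable place, so the kernel does not have to mediate every lookup.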

As Intel and AMD expand their 4-, 6-, and 8-core lineups and approach even higher core counts, using these resources efficiently will be a crucial operating system responsibility.  It will be exciting to see what kind of improvements Microsoft can accomplish with Barrelfish, as these improvements will surely be rolled into successors to Windows 7.



Comments



Questionable
By rs1 on 9/28/2009 3:06:26 PM , Rating: 1
quote:
Namely, Barrelfish uses message passing and a database-like system to pass information between cores. OSes typically use shared-memory schemes, which become very inefficient when resource demands are high.


Message passing and a *database* are more efficient than a shared memory block? I think that's very unlikely.




RE: Questionable
By wetwareinterface on 9/28/2009 6:34:58 PM , Rating: 4
quote:
Message passing and a *database* are more efficient than a shared memory block? I think that's very unlikely.


having one lump of memory that gets allocated out on a first come, first served basis is not efficient. say you run app 1, app 2, then app 3. kill app 2 and now you have a chunk of memory sitting unallocated between app 1's space and app 3's space. run app 4 and if it's larger than that hole, it has to sit partly in the leftover space and take extra space elsewhere.

how is this inefficient, you ask, when it's just memory registers? it works fine and the performance overhead is trivial when you are talking about a non-threaded kernel, non-threaded apps, and a few apps and processes. but when you have 100 services and 100 processes, 10 web browser windows each with its own memory space, and 6 other apps open, not to mention the drivers loaded and their memory address spaces and the clipboard with data in it, etc...

having to manage all that becomes a mess on one core. now imagine the extra issues and overhead of maintaining all that address space when each core is trying to access separate memory spaces for the several app threads running on those cores, and you have several cores.

a simple database of which core currently has access to what memory area, and what data is allocated to what memory space, makes it possible for a kernel to become far more threaded itself. a more threaded kernel under large multiprocessing loads means more efficient use of memory space and core usage.

in simple terms, a database-like memory manager means a smaller, more nimble kernel that doesn't have to keep track of everything internally and can therefore be more freely threaded, as can other heavily threaded apps, and core usage can be more evenly distributed because of it, making it more efficient.
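The app 1/2/3/4 scenario above is a classic fragmentation story, and a toy first-fit allocator shows it concretely. The sizes and the 100-unit address space are made up for illustration, and this sketch deliberately cannot split an allocation across holes:

```python
# Toy first-fit allocator over a 100-unit address space.
free_list = [(0, 100)]     # list of (start, length) holes
allocs = {}                # name -> (start, size)

def alloc(name, size):
    """First-fit: take the first hole big enough, or fail with None."""
    for i, (start, length) in enumerate(free_list):
        if length >= size:
            allocs[name] = (start, size)
            if length == size:
                free_list.pop(i)
            else:
                free_list[i] = (start + size, length - size)
            return start
    return None            # no single hole is big enough

def free(name):
    """Return a block to the free list (no coalescing, to keep it short)."""
    start, size = allocs.pop(name)
    free_list.append((start, size))
    free_list.sort()

alloc("app1", 30)          # occupies [0, 30)
alloc("app2", 20)          # occupies [30, 50)
alloc("app3", 30)          # occupies [50, 80)
free("app2")               # leaves a 20-unit hole between app1 and app3
addr = alloc("app4", 40)
print(addr)                # None: 40 units are free in total, but not contiguous
```

40 units are free, yet the 40-unit request fails because the free space is split across two holes, which is exactly the inefficiency the post describes.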


RE: Questionable
By Shining Arcanine on 9/29/2009 12:13:01 AM , Rating: 2
I thought the article said that the kernel was single threaded.


RE: Questionable
By wetwareinterface on 9/29/2009 7:33:02 PM , Rating: 3
yes, the kernel is single threaded in this instance. in the windows 7 successor this gets applied to, it will be multithreaded.


RE: Questionable
By SublimeSimplicity on 9/29/2009 2:38:27 PM , Rating: 2
Memory isn't allocated linearly like that. A 1 MB buffer is made up of hundreds of blocks of memory spread throughout the physical address space on the memory chips. The MMU stitches these little blocks together so that they look linear to the CPU (and the programmer). Memory fragmentation like you describe hasn't been an issue in computers for 15+ years.

Now, on a 32-bit OS, fragmentation of the virtual address space has started to become an issue, but 64-bit OSes allow so much virtual address space that this is no longer a problem.
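The MMU stitching described above can be sketched with a toy page table. The frame numbers below are invented; real page tables are multi-level hardware structures, not Python dicts:

```python
# Toy page table: a buffer that is contiguous in *virtual* memory maps to
# scattered *physical* frames.
PAGE = 4096
page_table = {0: 97, 1: 12, 2: 230, 3: 5}   # virtual page -> physical frame

def translate(vaddr):
    """Translate a virtual address to a physical one via the page table."""
    vpage, offset = divmod(vaddr, PAGE)
    return page_table[vpage] * PAGE + offset

# Adjacent bytes within one page stay adjacent physically...
print(translate(101) - translate(100))        # 1
# ...but crossing a page boundary jumps to an unrelated physical frame:
print(translate(PAGE) - translate(PAGE - 1))  # nowhere near 1
```

This is why a "linear" 1 MB buffer need not occupy any contiguous run of physical memory, and why killing an app does not punch an unusable hole in physical RAM.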


RE: Questionable
By wetwareinterface on 9/29/2009 7:45:36 PM , Rating: 2
but memory buffers are linearly allocated if at all possible. kernels don't track where the address is physically, but logically they do. memory addresses in ascending order are blocked together if at all possible because it's easier to protect memory allocation that way. you protect your memory in kernel space and do not allow anything but the kernel to modify it. you watch out for rogue access of protected memory from outside the kernel and shut that down by locking memory addresses. you also have your kernel grant permission to memory for user-space apps, and those sit in unprotected memory. how do you do that if the memory addresses are willy-nilly, logically speaking?

memory fragmentation isn't the issue; the kernel having to be massive and bloated just to keep track of memory is the issue. if you instead have a central store of data about what's what, the kernel only has to track kernel memory and watch the database for rogue patterns, and apps in user space can manage their own accesses, leading to a smaller, lighter, faster kernel that can be more easily threaded.

when an app is sitting idle it is using memory all the same, and when an app sits on a separate core from the kernel, the kernel has to be far bigger to manage its memory, unless you have a separate means of monitoring memory usage so the kernel only has to worry about its own kernel space.


RE: Questionable
By SublimeSimplicity on 9/29/2009 2:14:08 PM , Rating: 2
The problem being solved here is contention over resources. Even if you have thousands of threads or processes running independently of each other, doing seemingly discrete operations, at some point they run into contention over a resource. Maybe that's the graphics buffer, the network ring buffer, HDD DMA buffers, whatever. At the end of the day, all these things need to access a limited number of hardware resources; otherwise, what are they accomplishing?

When they do hit those contention points they need to line up in a nice single-file line, and your many cores become useless.

By going to a transaction based model with a single threaded kernel determining the sequence of the transactions, you eliminate these synchronization points and allow the threads to continue to work independently and concurrently.

So you trade memory bandwidth efficiency (write / read / write to create and process the transactions) for core efficiency (more concurrent operations). The more cores you have, and the more threads that can use them concurrently, the more appealing this trade-off becomes.

This is the same reason that SQL databases eventually overtook many apps sharing a flat file to store and retrieve data.
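The transaction model described above can be sketched as a single dispatcher thread draining a queue of operations. This is a toy stand-in for the idea, not Barrelfish's implementation: workers never touch the contended resource directly, so they need no locks; the one-at-a-time dispatcher plays the role of the single-threaded kernel ordering access.

```python
import threading
import queue

resource = {"writes": 0}          # the contended resource (e.g. a HW buffer)
tx_queue: queue.Queue = queue.Queue()

def dispatcher():
    """Apply queued transactions strictly in sequence; None means shut down."""
    while True:
        tx = tx_queue.get()
        if tx is None:
            return
        tx(resource)              # one at a time, so no locking is needed

def worker(n):
    """Submit n transactions instead of touching `resource` directly."""
    for _ in range(n):
        tx_queue.put(lambda r: r.update(writes=r["writes"] + 1))

d = threading.Thread(target=dispatcher)
d.start()
workers = [threading.Thread(target=worker, args=(100,)) for _ in range(8)]
for w in workers:
    w.start()
for w in workers:
    w.join()
tx_queue.put(None)                # all 800 transactions are queued; stop
d.join()
print(resource["writes"])         # 800
```

The eight workers run concurrently and never block on each other; the serialization cost is paid once, at the queue, rather than at every shared data structure.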

