



I don't have permission to delete a shortcut? -- Image courtesy Paul Thurrott's SuperSite
Is Vista everything that Microsoft initially promised?

Paul Thurrott has posted yet another look at Windows Vista. Ever since the first alpha and beta releases of Longhorn/Vista hit the web, Paul has been giving us regular updates on the progress of the operating system. Paul's articles are for the most part positive, with a little hint of negativity thrown in where appropriate.

Paul's latest article, though, lays everything on the line when it comes to Vista. Now that Vista is supposedly feature complete and many things will stay as they are when the final product ships, the promises Microsoft made regarding features in the operating system, usability issues, and application blunders are all fair game. Here, Paul rants about missing features that Microsoft promised:

There are so many more examples. But these two, WinFS and virtual folders, are the most dramatic and obvious. Someday, it might be interesting--or depressing, at least--to create a list of features Microsoft promised for Windows Vista, but reneged on. Here are a few tantalizing examples: A real Sidebar that would house system-wide notifications, negating the need for the horribly-abused tray notification area. 10-foot UIs for Sidebar, Windows Calendar, Windows Mail, and other components, that would let users access these features with a remote control like Media Center. True support for RAW image files, including image editing. The list just goes on and on.

I must say, I've tried and tried to give Vista more than a second glance. I've tried every beta release that Microsoft has issued, but every time I find myself less productive and utterly frustrated using the operating system compared to Windows XP. Fortunately, it looks like Microsoft has a few more months to get some of these issues under control.



Comments



By logan77 on 4/23/2006 9:37:09 AM, Rating: 1
Dave Cutler has nothing to do with BSD, sorry to disappoint you

Care to share a link to where I claimed that?


You obviously mixed MS with Apple here

No, I did not. The difference is that Apple took an (almost?) entire kernel and built a system upon it; Microsoft just took a part and incorporated it into their kernel (AFAIK it was NT 3.1). Links:

http://support.microsoft.com/default.aspx?scid=htt...

Go to the copyright part: "Acknowledgements...This product includes software developed by the University of California, Berkeley and its contributors....Portions of this product are based in part on the work of the Regents of the University of California, Berkeley and its contributors. Because Microsoft has included the Regents of the University of California, Berkeley, software in this product, Microsoft is required to include the following text that accompanied such software..."

And these are the release notes for Windows XP!

http://www.ussg.iu.edu/hypermail/linux/kernel/9906...

http://www.zdnet.com.au/news/security/0,2000061744...


Exactly! This is what I'm talking about! The old kernel might not be able to scale much, and now we're going to have dual- and quad-core CPUs everywhere. Who knows how many cores in 5-10 years? Maybe 8 cores?


So you don't know whether Vista scales well or badly, and you still want a rewrite of the code. If it performs well, then why rewrite? Since we don't know, aren't such statements a little too early - don't you think?


I wouldn't be very surprised with 8 cores... this is why it's time to write a new kernel which would work on multiple cores only.

Here we go again... why can't it work on both single and multiple cores? Ahhh, because you like only the brand new shiny stuff that in no way brings back memories of old "museum crap". Man! All you need is a gaming console...


Being an old classic is not the same as being the best thing for the job.

True. But I didn't say otherwise.


You're obviously a Unix guy, I can smell it.

I work on Linux, yes. You are not very on topic with that, are you?

Back to technical merits, shall we?

I don't like IPSec being baked on, I don't like security and encryption being patched modules on patched kernels with filesystem patches like icing on a cake. I hate it, really really.

Discuss that with your psychoanalyst; this is not the best place for it. A patch in itself is only a modification of a piece of software, nothing more, nothing less. It can be huge and critical or just a tiny cosmetic change. Once you patch the code there is no distinction from the situation where the code had looked like that from the beginning. Stop being so anal-retentive about it. The sole disadvantage of patch-based software development is the management of the patches, not their technical merit, and the example of the Linux kernel shows that this problem can easily be overcome.
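To illustrate with a trivial, made-up example (the function and the "patch" below are hypothetical, just a sketch): a patch is nothing but a textual change, and once it is applied the compiler sees exactly the same source it would have seen if the code had been written that way from day one.

    /* Before the patch: a naive byte-sum checksum. */
    #include <stddef.h>

    int checksum(const unsigned char *buf, size_t len)
    {
        int sum = 0;
        for (size_t i = 0; i < len; i++)
            sum += buf[i];
        return sum;
    }

    /* After a "robustness patch" adds a NULL guard.  The result is
     * byte-for-byte what a from-scratch author would have written;
     * nothing marks it as "patched". */
    int checksum_patched(const unsigned char *buf, size_t len)
    {
        int sum = 0;
        if (buf == NULL)
            return 0;          /* the patched-in check */
        for (size_t i = 0; i < len; i++)
            sum += buf[i];
        return sum;
    }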


Yes, WPA supplicant and hostapd USE hardware drivers for Wi-Fi chipsets, but would you include 802.11i support in the DRIVER or in the KERNEL???

What about a driver in the kernel?


And I thought you'd understand why a patch for a feature is different from a system designed from scratch around that feature.

And you would design a whole system around the "no-execute bit"? Congratulations! The no-execute bit is a low-level kernel thing; nothing above it has to support it in any way whatsoever. It's the job of the operating system to support it, and once that's done there is nothing more to do about it.
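A minimal user-space sketch (my own example, assuming Linux-style APIs, nothing from the post above): applications already state whether memory should be executable through existing interfaces such as mmap(), so a hardware no-execute bit is enforced by the kernel underneath without any change to application code.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Ask for a page that is readable and writable but NOT executable.
         * On NX-capable hardware the kernel marks the page no-execute in
         * the page tables; on older CPUs it does the best it can.  Either
         * way this program is identical - the NX bit needs no support
         * "above" the kernel. */
        void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        strcpy(page, "data, not code");
        printf("%s\n", (char *)page);
        munmap(page, 4096);
        return 0;
    }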


Still denying that the kernel can and should support built-in, standard, high-grade security/encryption instead of a bunch of userland patches on top of a crowd of various browsers?
Keep denying, no problem. You just need some educational reading of books by, say, Frank Soltis and Dave Cutler to understand what I'm trying to tell you.


Yeah, put this stuff into the kernel - great idea! Then you have no choice (it was made for you by the kernel devs), and when the whole industry moves on to another, more secure "standard", what would this whole pile of legacy shit inside the kernel do? Sit and laugh in your face... How would you like it then, huh?


Jesus smokin' Christ... I'm so tired of typing, but I'll just write it all up and store it in my essays folder to reuse later, coz there are crowds of guys like you on the forums. About the SSE stuff - you are a very unimaginative person, in addition to being illiterate in OS architectures.

If the technical merits were sufficient, you wouldn't resort to personal attacks, would you?


I can imagine a lot of stuff for SSE right away. Let's start:

I just can't wait !...

1) WinFS, indexing, searches, while working on encrypted file system - SSE might be useful to speed numerics up

Aaaaahhh - talking about WinFS, which doesn't even exist? It's short for Windows -->Future<-- Storage :) . So there is your problem - you didn't get this promised candy... Indexing is strictly an integer-type operation, and the SSE floating-point registers are... well, floating-point registers, designed specifically to speed up multimedia encoding/decoding. Searches? How? I think you should really provide some links here.
"Speed up numerics"? What numerics (in the kernel)? If you have an app, then it can easily take advantage of those SSE instructions (actually it's the compiler's job to do so) - nothing kernel-specific. It shows your absolute, total, utter lack of knowledge here. You just want something new regardless of technical merits and real benefits.


2) encryption for pervasive IPSec/VPNs and everything else the kernel needs encrypted


And you would like to make this encryption happen in floating-point registers? Heeehh....


4) 3D audio processing, other hi-def audio algorithms

Yes - this would definitely make server admins endlessly cheer for this "advanced" "feature". Seriously though - you want (again) a console, not an operating system.

6) standard physics support, Havok-style (gonna be a part of DirectX eventually, I think)

So you are a gamer :) ... As if there were such a thing as "standard physics in games" :) Jesus, man (child?) - physics support for games in the kernel would make this a new, advanced, unrivalled operating system? With DirectX as the standard? I almost fell off my chair :) You made my day.


And kernel support for 802.11i AGAIN, and kernel support for IPSec (which should be the default IP stack) AGAIN, and using new hardware features like SSE for encryption in the kernel AGAIN, and smart load balancing between cores when multitasking with on-the-fly compression/encryption AGAIN - so please stop pretending to be dumb again.

At least you don't have to.


how about beating your unimaginative brain _again_?

Just because I don't smoke what you do doesn't make my brain unimaginative.


Imagine a new network-transparent WinFS optimized for distributed storage across 10Gbit links. Yeah, gigabit is going to step aside soon, so if MS is to write a new kernel from scratch it makes sense to tune it for 10Gbit links and go from there.

What is it that you want exactly (assuming that you know what you want)? 1GbE (10?) support in the kernel, or a "transparent network"? And what the hell does "transparent network" mean anyway, and what uses would it have? I can imagine some applications (like speeding up some disk operations, assuming Ethernet transfer >> disk throughput), but certainly no realistically beneficial killer feature, and certainly nothing that would warrant writing a new kernel. Man - you _are_ hopeless.


You've never heard about www.gpgpu.org, so I won't waste time explaining it all here. Go and read some research literature before giving me your stupid answers, PLEEAASEE!!!

WTF are you blathering about?!? I have heard of this project. It's up to apps to use it - what the hell does the kernel have to do with it? Do you know why GPUs are not commonly used for general computation? Because vendors won't open up their specs. Your "stupid" example in no way supports your earlier post.


You just can't understand that patching things forever doesn't work for commercial OS vendors

It works, _if_ the basic design was decent enough. Have you heard of Solaris or QNX? No? Because there are no games for them (& DirectX 9999)?


I want every bit of a kernel designed with SMP in mind. EVERYTHING that could be parallelized MUST BE parallelized WITHOUT ANY COMPROMISE!

Which parts _exactly_ should be improved that way? Where are the studies on the feasibility of such things, and... why on earth hasn't Microsoft hired you already? You present such deep insight into the subject that it just blows my mind.


In what way does keeping support for legacy stuff make a platform suddenly new and exciting?

I didn't say anything about it being exciting. It's just a real-life necessity.


I'm not against support for Win32 in a new OS, it's easy to do by running Win32 inside its own virtual machine, but why even think about using old architecture inside the kernel itself? Stupid nonsense of course.

You know that virtualization "costs" - the currency being computer resources? Stupid nonsense - as you said.


But DO NOT USE legacy hardware in kernel!

It's just support for the FAT16 file system - what's wrong with that? In what way will getting rid of it improve the operating system?


It takes development resources and time away from the really important task of implementing things no one has implemented before, and this is BAD, so screw floppies, PATA and other artefacts like these.

No, it doesn't. Nobody is working on support for floppies - there is no development on it, just maintenance. And why is PATA an artefact? Does your HDD have sustained read transfers anywhere near 100 MB/s (133?)? Seriously - this is a small part of the code, and the stupid shit you are writing here in no way supports your thesis that there is a need for a new kernel.


Actually, a 3D HAL is already there and is called DX10. Congrats on keeping your eyes closed. And please note that this standard appeared WITHOUT MS manufacturing any 3D hardware. Sorry to disappoint you here again, buddy.

Sorry for not being that much into gaming. So you propose inserting the (gaming) "standard" DirectX into the kernel (again)? If anything, I would push for broader adoption of OpenGL - which, BTW, _is_ an industry-acclaimed, platform-independent, OPEN ->standard<-.

Didn't want to distract you from your gaming ... "buddy".


By logan77 on 4/30/2006 6:59:26 AM, Rating: 2
Yeah, you claim NT is BSD derived just because a piece of BSD IP stack is there. Alright, so Linux has a small piece of NTFS code in its kernel, so then I say Linux is NT derived :P


Nice try :) but I just meant that at least some code (the IP stack) had been taken from the BSDs - I don't know how much or what the impact was, but the fact remains. Of course, under the terms of the BSD license one can just take a piece of code and put it in one's own project as long as one acknowledges it somewhere. I read once that they (MS) hired some BSD folks, but I cannot find any proof of it as of now. Not talking about Dave Cutler :)


Wrong, if you consider design as well. One thing is a hurried patch on top of a mess, and the other is a neat design from scratch. Functionality may be the same, agreed. But the code might be so different you won't believe your eyes. I say a patched mess and a non-patched nice design are very different even when the functionality is 100% the same.

Well, you may have a point here - although there is nothing inherent in patches that makes them inferior from a technical point of view, in reality it's more difficult to drive a whole project toward a different architectural design with them. But still, it does not imply that 'more patches = more clutter'. It largely depends on the initial design, and a non-modular design definitely hampers producing neat and efficient code. Linux is modular, although it's a monolithic kernel (as opposed to a microkernel).


It's because patching sometimes doesn't work and even Linus & Co have to break stuff, annihilate it and start from scratch once in a while. Well, there they go, your patches and fixes.

But they don't start _everything_ from the beginning - maybe some stuff has to be removed once in a while, but nothing as radical as you claim. The overall kernel design stays untouched.


About 10Gbit networks - how about a "click a GUI button and turn your 2 PCs linked by a 10Gbit link into an SMP box" feature?

One word: latency. Main memory is accessed in roughly a hundred nanoseconds; a round trip over even a 10Gbit Ethernet link takes on the order of tens of microseconds. That is a couple of orders of magnitude too slow to pretend two boxes share memory.


And how about "add more boxes and get SMP with shared mem and as many CPUs as you have in your PCs together"?

And add more latency? :) I mean, really - if you are talking about distributed computing, i.e. you have a specific task that is easily parallelized, then solutions are already there. They may not be perfect or easy to deploy from a casual user's perspective, but they don't need to be. Distributed computing is a specific area, not so appealing for everyday uses (say, editing a Word document, converting to PDF, Photoshop editing). Of course there may be some benefits (like applying filters in Photoshop/GIMP to a large collection of files, DVD rips, etc.), but probably nothing _kernel_specific_. It lives in the application layer of abstraction, not the kernel itself, although an OS like Windows could probably ship this functionality. A problem remains with authentication on those other computers (you would have to have an account on them). If, for example, the second computer is your sister's, she wouldn't be very pleased to see her tasks crawl because you started ripping your porn :) using her computer, now would she? But multiple concurrent logins on one machine would be very welcome (not sure it isn't already available). If you have many computers then you probably have a specific task to perform (render farms), and solutions are already available.
For gaming this stuff will probably never be applicable (response time is too slow).


Oooh cooome ooon man, why are you lying now about "manufacturers not opening their specs" when THEIR SPECS ARE MS STANDARDS SINCE DX10??? So this part of your answer is officially BS :)

The DirectX API is not meant for general computation. It is good for transforming triangles, but inverse discrete Fourier transforms are not done that easily on it, right? What is crucial are the 'specs', i.e. the low-level instructions (not some DX operations the manufacturers "kindly" permit you to use).


Hahaha. Now say that to Steve Jobs and Bill Gates, whose OSes both include virtual machines, for DOS and Mac OS 9. That's funny, real funny :)

DOS doesn't have to run fast, since everything written for it was done with slow computers in mind, and now we have much faster rigs. I don't know the details of Mac OS 9 emulation, so I won't comment on that one.


It has nothing to do with gaming; don't pretend to be stupid again :) It's just a STANDARD GPU INTERFACE, nothing more, nothing less. If games use it, why can't other things, like that cluster stuff, use it?

Maybe because there is no need for it? What task exactly would you perform on GPUs over the network? Gaming won't do, because of latency. Rendering is OK - but it is ALREADY being done by apps, so...? And why this goddamn DX?!? If you want a real "standard" then pick OpenGL - it's platform-independent and more suitable for "serious" tasks (CAD, rendering, etc.). DX (which incorporates not only a graphics API but also sound-specific stuff) was introduced as a solution for the gaming industry ONLY. It should be fast, but not necessarily accurate. I would gladly see it ditched in favour of OpenGL instead of trying to lock everyone down to their proprietary formats.

What would be of great value is: a powerful command-line tool (a POSIX-compliant shell?), a new file system without the constant need for defragmentation (ext3? - nah, that would be too good to be true), easy support for other file systems (JFS, XFS, ReiserFS), more flexibility during installation (e.g. more MBR options, more awareness of other systems), replacing the registry with something of more elegant design, enforcing and better handling of multiple accounts (now almost everyone on the desktop is using root accounts), a more modular design (separating text mode from the graphical interface? those modules you've mentioned...).


By Pirks on 5/4/2006 1:16:48 PM, Rating: 1
quote:
I read once that they (MS) hired some BSD folks, but I cannot find any proof of it as of now. Not talking about Dave Cutler :)


Never heard of BSD folks there, but I wouldn't be surprised if they use some even these days, say, for Vista, since it has uniform IPv6 for everything with IPv4 as a fallback. BSD code sure would be of use here, so there is a place for them.

However, the design of the original hybrid NT kernel itself was done under the lead of Dave Cutler, who was one of the designers of OpenVMS and got many of his ideas from there, so some people even called NT 3.1 a "microVMS" or a personal miniVMS, something like that. It's all on Wikipedia and all over the net - kinda classic MS history stuff, who did what and why.

quote:
in reality it's more difficult to drive a whole project toward a different architectural design with them


This is exactly what I meant. A reality check shows that it's often the case that an original design can be patched to death, but the patching has to end sooner or later. So, basically, I watched MS dump DOS and Win9x instead of patching them to death, because their design was too ugly, and I also saw Jobs kill the even uglier classic Mac OS after 20 years of patching it. This, however, is never going to happen with Linux or FreeBSD or any other open source OS. It's only a feature of large commercial OSes, and only the successful ones (say, OS/2 could benefit from rewriting its Workplace Shell and Presentation Manager, because these were quite crappy, and its kernel is a mess of 386-specific assembly that would benefit from being redone properly instead of patched, but it won't happen because OS/2 does not satisfy the "successful" part of the definition).

So here you see the roots of my logic. I watched MS promise heaven on earth and then deliver a refreshed XP instead. OK, I may be wrong and it's not just a refresh, but you know, after hearing all this WinFS stuff, which makes me remember Soltis and his OS/400 alien beauty and such... I feel that Vista is not that alien beauty. It's going to be a great and large upgrade, but maybe it's not the revolution yet, in the same sense that NT and Mac OS X were.

Hence my thoughts about "stop lagging behind this stupid toy Mac OS X and do something major, like blow Macs away with a totally new OS in 10 years". I think so because I saw how good the radical transition was for both MS and Apple, how far they both jumped by ditching the overpatched DOS and classic Mac OS. And I wanted to see a repetition of the same story. I wanted to see MS stepping up and telling competitors "you'll get your medicine, just wait until we finish the design of our new kernel". Hell, they could even take this L4 thing, which is quite interesting, and organize a uniform architecture on top of it. Cut out auxiliary servers and fit the OS into a smartphone, or add a bunch of those servers and fit THE SAME OS into an enterprise mainframe. I call this cool, but not Vista. Vista is a nice update, but it is essentially a huge patch for XP, and this new scalable OS probably requires going from a hybrid kernel to an L4-like kernel. And that is not a patchy patch, it's a design from scratch.

quote:
But they don't start _everything_ from the beginning - maybe some stuff has to be removed once in a while, but nothing as radical as you claim. The overall kernel design stays untouched.


Like I said, this is not because patching is better than dumping the old design and switching to a clean new one; it's simply because in the open source world patching and evolution are the only way to move forward. Linus can't afford to dump the monolithic design and switch to L4 internally; that's too big a deal for him. Open source can NOT shed its skin and come out like a butterfly from a cocoon. The only way for them to grow is to drift where the wind blows. Is there an Itanium coming along? Here's your Itanium patch. Is there AMD64? Here's the patch. Mainframe? Patch. Anything else? Patch. Well, how about replacing the monolithic kernel with L4 and a set of independent servers? Nope, it's not a patch, so it won't work; sorry, bye-bye.

It's all OK while the old design can keep up, so you are right here. Now tell me, what would they do if they were writing an open source PC-DOS in 1980? Would they evolve into an enterprise Unix or NT 3.1 ten years later ONLY BY PATCHING DOS? Whoa! Sounds funny, doesn't it?

quote:
If, for example, the second computer is your sister's, she wouldn't be very pleased to see her tasks crawl because you started ripping your porn :)


LOL :)) Well, distributed ripping of porn is just one of the things that might be useful in the future. Porn does not require a cluster, but how about a cluster being used to monitor things around a house? Imagine this scenario: I have a nice house, too nice to be ignored by local thieves, so I buy a set of wireless hidden cams, put them everywhere on the perimeter and plug the video feeds into my cluster. The cluster has those 10Gbit or 100Gbit links, and all my 5 or so home PCs are busy watching out for burglars all through the night. Neat! But only if MS cares about making such a plug'n'play cluster a part of their OS, or this new kernel. I don't wanna spend my life setting up a Beowulf cluster; too complex, and I'm a bit lazy sometimes.

You are right that distributed computing is not very useful for today's tasks, but think: where would they have used all these wireless toys 20 years ago? There was absolutely no market for this stuff, but things evolved and voila - radio waves everywhere! What makes you think that advances in computer vision over 20 years won't make my scenario possible? How about throwing all my household computing resources at ripping a nice 150GByte Violet Ray DVD? It takes 6 hours on one 8-core PC, or only 1 hour if I click a button on my desktop and tell my Windows, "please get all the cores in the house involved, except for my sister's".

Hence my crazy fantasies about clusters and embedding that in a new kernel (or in a server above a microkernel, which may be a better idea). I just project the situation in 1980 onto the situation nowadays and extrapolate it 20 years further.

It's hard to say whether the NT kernel in its present form is suitable for this kind of future task. Maybe it is, but because of the feature creep maybe it is not. If it takes the vendor 6 years to patch its OS (the XP->Vista transition), it may be a sign that too much bloat and too many patches are in there. Maybe NT got so fat and messy because Cutler's clean design has turned into who knows what in Vista? Look, this is what happened with DOS! They had a clean design for the Intel 8088, then they went up to the 80386 by building the whole freakin' WinMe mansion on top of the original DOS mud hut.

Now look at NT. They had a clean design for the 1991 era of PCs, essentially for the same 80386. Where are we 15 years later? We are looking at OS X, which is so different from NT because it was designed from scratch about 10 years later, and it has different everything, especially the 3D-accelerated, PDF-based windowing system that Vista copied as well.

So why is Vista so late? Is it because the nice clean kernel design was downplayed by moronic managers who couldn't understand what to add and what to change to ship the product on time? Or was it because NT's clean design had been patched into some amorphous blob where adding things is really hard? They have very talented, not at all moronic managers, and they tried to quickly patch XP into Vista. They got this 6-year delay as a result. This is why I'm asking now: "is it time to avoid another 8-year delay after this one, and then a 15-year delay after that?" Because I'm afraid Vista got delayed for architectural reasons, not because BG was in charge, as some say.

I'm not sure patching WinMe into a WinMe2003 would have been a good idea; it would have taken a long time and produced a nightmare, so they dumped it. The same story goes for Vista - I'm not sure taking this old design forward will somehow prevent significant delays in the future. Patching an old mess like WinMe is the same as patching an old mess called XP and hoping that you can ship a nice lean Vista in 2 years. Know what? It doesn't work. 6 years, PLUS many of the cool features they promised were cut. No WinFS, nothing. Compared with Windows itself it's progress; compared with OS X it's a failure, BECAUSE... right, you got it! Because OS X was designed much later than NT, hence it has a better design, in the sense that it's better suited to modern hardware (come ON, who in their sane mind would fit PDF into a windowing system at Microsoft in 1989?? Cutler would shoot anyone proposing that in the head with a big nasty railgun, I tell ya! See how Jobs beat them now?), and it will age more slowly than Windows just because of that. Very simple logic, easy to grasp, right?

quote:
The DirectX API is not meant for general computation. It is good for transforming triangles, but inverse discrete Fourier transforms are not done that easily on it, right?


Right, but there is still gpgpu.org out there, which means what? I think it means that as GPUs evolve and become more and more flexible, people will find more and more ways to tap their computational power. So why wouldn't MS add GPGPU specs to their DX 11 or 12? Why not, IF there is demand? You are right, there's not a lot of demand now; there's only gpgpu.org, and there's Havok, which uses the GPU as a helper engine - not much, yeah. Do you know what the future holds? I don't, and this is why I propose crazy ideas - so MS doesn't get beaten by competitors in the future :-)

quote:
DOS doesn't have to run fast, since everything written for it was done with slow computers in mind, and now we have much faster rigs. I don't know the details of Mac OS 9 emulation, so I won't comment on that one.


Virtual machines are everywhere; old OSes and their apps are always emulated on newer hardware using virtual machines, and this is why PCs are slowly starting to pick up traits of IBM's VM operating system. This is a global trend, and I just extrapolated it into the future. Hence my words about this new kernel being IBM VM-like, which is sooo far from the current NT kernel... you can NOT patch the NT kernel into an IBM VM competitor. Try to prove the opposite and see how your patchy sandcastle crumbles :)

quote:
Maybe because there is no need for it? What task exactly would you perform on GPUs over the network? Gaming won't do, because of latency. Rendering is OK - but it is ALREADY being done by apps, so...? And why this goddamn DX?!? If you want a real "standard" then pick OpenGL - it's platform-independent and more suitable for "serious" tasks


As for the OpenGL issue - MS just doesn't want to lose time waiting for manufacturer A to add a necessary feature B on hardware C; they want things done fast and uniformly, a click-and-install experience, and most importantly they can afford it. Trust me, Apple would have dumped OpenGL long ago if they were as big.

What tasks is a uniform DX good for? In its current form probably not much - only stuff from gpgpu and Havok, which is certainly not for home users. Still, why not add extensions in DX 11 or 12 that would run background computations on idle shader units? Say you have 48 unified shaders in your GPU, right now you're not running Quake 6 and 24 units are idling, but you want some DVD decoding, a video recode, a nonlinear video edit/transition, whatever - bingo, you've got DX waiting for orders.

Your problem is that you see DX as a childish gaming thing, which is a big mistake. Look at Apple Aperture - they use the GPU in a Photoshop-like environment; look at NVIDIA - they work with video in their GPU hardware. It is not about gaming only. It's about using your monster GPU minicomputer for everything that requires number crunching, EVERYTHING! Uniform DX10, the constant growth of programmability in shaders, and the amount of local video RAM - all this points NOT only in the direction of Quake 6, as you're trying to tell me. The general trend (GENERAL!) is to make the GPU an excellent, flexible renderer WHILE using it for whatever is possible when you're not playing. The number of software titles that use GPUs for things other than gaming is slowly INCREASING, and it's only a matter of time until MS decides that DX is good not only for video extensions but could also benefit users with, for example, computer vision and speech recognition extensions - WHICH USE THE IDLE GPU FOR COMPUTATION! See what I mean?

quote:
a powerful command-line tool (a POSIX-compliant shell?), a new file system without the constant need for defragmentation (ext3? - nah, that would be too good to be true), easy support for other file systems (JFS, XFS, ReiserFS), more flexibility during installation (e.g. more MBR options, more awareness of other systems), replacing the registry with something of more elegant design, enforcing and better handling of multiple accounts (now almost everyone on the desktop is using root accounts), a more modular design (separating text mode from the graphical interface? those modules you've mentioned...)


For a cool text shell, check out Monad; it's promised to be a wonder. A non-fragmenting file system is great, but where did you get numbers supporting your claim that NTFS is a more fragmentation-prone FS than ext3? Is there a link to some research about it, or do you just believe in ext3? ;-) The registry is good enough already; you have to propose justified changes. What do you want changed there and why? The only thing that could be useful is some automatic background backup of the registry, but external utils exist for that, I think. Account separation is enforced in Vista - I'd say enforced too much :) It should decrease the number of root users on Windows, I think. As for text/GUI separation - cool idea, but MS won't ever care about it; they always made the GUI the default, and Apple did the same even earlier. People who want a small, fast, text-mode kernel with a cool shell have a crowd of open source OSes to choose from. MS won't go there, just as Linux won't dump the text shell and switch to an X11 GUI only.


"It seems as though my state-funded math degree has failed me. Let the lashings commence." -- DailyTech Editor-in-Chief Kristopher Kubicki
