



[Image: "I don't have permission to delete a shortcut?" -- courtesy Paul Thurrott's SuperSite]
Is Vista everything that Microsoft initially promised?

Paul Thurrott has posted yet another look at Windows Vista. Ever since the first alpha and beta releases of Longhorn/Vista hit the web, Paul has been giving us regular updates on the progress of the operating system. His articles are for the most part positive, with a hint of negativity thrown in where appropriate.

Paul's latest article, though, lays everything on the line when it comes to Vista. Now that Vista is supposedly feature complete and many things will stay as they are when the final product ships, the promises Microsoft made about features in the operating system, usability issues and application blunders are now fair game. Here, Paul rants about missing features that Microsoft promised:

There are so many more examples. But these two, WinFS and virtual folders, are the most dramatic and obvious. Someday, it might be interesting--or depressing, at least--to create a list of features Microsoft promised for Windows Vista, but reneged on. Here are a few tantalizing examples: A real Sidebar that would house system-wide notifications, negating the need for the horribly-abused tray notification area. 10-foot UIs for Sidebar, Windows Calendar, Windows Mail, and other components, that would let users access these features with a remote control like Media Center. True support for RAW image files including image editing. The list just goes on and on.

I must say, I've tried and tried to give Vista more than a second glance. I've tried every beta release that Microsoft has issued, but every time I find myself being less productive and utterly frustrated using the operating system compared to Windows XP.  Fortunately, it looks like Microsoft has a few more months to get some of these issues under control.



Comments



By Pirks on 4/27/2006 12:55:58 AM , Rating: -1
quote:
Dave Cutler has nothing to do with BSD, sorry to disappoint you.
Care to share a link to where I claimed it?


Cutler is not a BSD guy at all. Cutler is a VMS guy, and he's the NT kernel architect. Now apply some logic and discover why the NT kernel was not BSD derived.

quote:
Microsoft just took a part and incorporated it into their kernel


Yeah, you claim NT is BSD derived just because a piece of BSD IP stack is there. Alright, so Linux has a small piece of NTFS code in its kernel, so then I say Linux is NT derived :P

quote:
After you patch the code there is no distinction from the situation where you would make the code look like this from the beginning.


Wrong, if you consider design as well. One thing is a hurried patch on top of a mess, and the other is a neat design from scratch. Functionality may be the same, agreed. But the code might be so different you won't believe your eyes. I say a patched mess and a non-patched clean design are very different even when functionality is 100% the same.

It's like writing a bubble sort and then spending time patching it with hand-tuned assembly, when all you had to do was write a quicksort in the first place. Got my point?
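
To make the analogy concrete, here's a minimal C sketch (my own illustration, not code from any actual project): no amount of hand-tuning the first function changes its O(n^2) shape, while the second one is simply a different design that doesn't need the tuning.

#include <stdio.h>

/* The "patched mess": O(n^2) no matter how much you hand-tune these loops. */
static void bubble_sort(int *a, int n) {
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - 1 - i; j++)
            if (a[j] > a[j + 1]) { int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t; }
}

/* The "clean design from scratch": O(n log n) on average, no tuning required. */
static void quick_sort(int *a, int lo, int hi) {
    if (lo >= hi) return;
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; j++)
        if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
    int t = a[i]; a[i] = a[hi]; a[hi] = t;
    quick_sort(a, lo, i - 1);
    quick_sort(a, i + 1, hi);
}

int main(void) {
    int x[] = {5, 1, 4, 2, 8}, y[] = {5, 1, 4, 2, 8};
    bubble_sort(x, 5);
    quick_sort(y, 0, 4);
    for (int i = 0; i < 5; i++) printf("%d %d\n", x[i], y[i]);
    return 0;
}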

quote:
And the example of the Linux kernel shows that this problem can be easily overcome.


Haha. Should I remind you of the periodic wars among kernel devs about whether to patch this or patch that, or redo the pile of stinky patched mess from the beginning? I recall something about the udev tree, and, er.. well, there were those compatibility issues with the module ABI AFAIR (I forgot most of the details but you get the idea, I'm sure), where Linux basically said something like "screw binary stuff, we'll rewrite anything from scratch anytime we want, so swallow it". It's because patching sometimes doesn't work and even Linus & Co have to break stuff, annihilate it and start from scratch once in a while. Well, there they go, your patches and fixes.

quote:
Yes, WPA supplicant and hostapd USE hardware drivers for Wi-Fi chipsets, but would you include 802.11i support in the DRIVER or in the KERNEL???

What about a driver in the kernel?


Since IEEE protocols and hardware drivers have little in common, your point is quite mystic. You mean protocols should be included in drivers, or what? I said that including high-quality security daemons in the kernel is good. OK, since running everything in ring 0 is bad, let's include that in a layered kernel at ring 4, no big deal here. Just INCLUDE the DAMN protocols, and they'd better be up to date, and they should be SEPARATE from the drivers.

Now you might argue those security things are not a reason to rewrite a kernel. I agree. It's just the pathetic wireless thing in XP that makes me claim radical things like "screw that and do it from scratch". I'm not on the Windows team, of course, so I might be very very wrong here.

quote:
And you would design the whole system around a "no-execution bit"?


What's wrong with that, if it's coupled with everything else? It only means that NO PART OF THE KERNEL is EVER EVER allowed to execute from the stack, NO MATTER WHAT. So if you wanted to do some neato trick in the kernel and get +1% in performance by executing something from the stack - sorry, no lunch today, go redesign your algorithm to avoid ANY stack execution. And this is enforced on everyone by keeping the no-exec bit on always, EXCEPT for SOME USER apps, ONLY when the USER SPECIFIED SO. Got it?
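
For anyone who hasn't played with this, here's a minimal sketch of what the no-execute bit enforces, assuming x86-64 Linux and the POSIX mmap/mprotect calls (Windows exposes the same idea through DEP and VirtualProtect; this is only my illustration, not how any particular kernel implements it):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* mov eax, 42 ; ret -- a tiny function encoded as raw x86-64 bytes */
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    /* The page is mapped read/write only: with NX on, jumping here would fault. */
    unsigned char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) return 1;
    memcpy(page, code, sizeof code);

    /* The explicit opt-in: only now does the hardware allow execution. */
    if (mprotect(page, 4096, PROT_READ | PROT_EXEC) != 0) return 1;

    int (*fn)(void) = (int (*)(void))page;
    printf("returned %d\n", fn());   /* prints 42 */

    munmap(page, 4096);
    return 0;
}

Calling fn() before the mprotect line is exactly what NX turns into a segfault instead of a code path - which is the whole point of keeping the bit on by default.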

quote:
and when the whole industry moves forward to another, more secure "standard", this whole pile of legacy shit within the kernel would do what? - sit and laugh in your face... How would you like it then, huh?


Excellent point! Here we go with loadable modules and stuff. Make these things modules, maybe even in ring 4 or so; maybe there's no need to go to the highest privileges possible. Provide for upgradability from the start, that's it. Be it modules or whatever.

quote:
4) 3D audio processing, other hi-def audio algorithms

Yes - definitely this would make server admins endlessly cheer for this "advanced" "feature".


Sure it will. If the kernel is monolithic and utterly unconfigurable, it's the one-size-fits-all variety. This should be avoided like the plague.

But when you go that route you trade dumb user friendliness for elegance, which is bad for sales. So my whole imaginative cool new kernel might turn out a sales dud, because 99% of users will drown in 1000 kernel options and settings.

Realistically, I think we can find a golden mean here: how about including a short list of pre-configured kernels or kernel modules, like "3D stuff", "audio stuff", "numerics", blah blah. So when a seasoned Unix admin comes around, he just turns off all the GUI crap and gets his lovely text shell.

quote:
I can imagine some applications (like speeding up some disk operations, assuming that ethernet transfer >> disk throughput), but certainly nothing realistically beneficial, no killer feature. And certainly nothing that would warrant writing a new kernel for.


About 10GBit nets - how about "click a GUI button and turn your 2 PCs linked with 10GB link into an SMP box" feature?

And how about "add more boxes and get SMP with shared mem and as many CPUs as you have in your PCs together"?

Now, did I beat your unimaginative brain yet, or not? ;)

quote:
You know why GPUs are not commonly used in general computations? Because vendors won't open up their specs. Your "stupid" example in no way supports your former post.


Oooh cooome ooon man, why are you lying now about "manufacturers not opening their specs" when THEIR SPECS ARE MS STANDARDS SINCE DX10??? So this part of your answer is officially BS :) Now, about GPGPU again. Yeah, the kernel seems to have nothing to do with the GPU. I mean the kernel as a low-level process scheduling/VMM thing. Yeah, it's hard to think of anything you'd do in the kernel with a nice GPU. HOWEVER, IF you think in terms of that 10Gbit SMP feature I was smoking up above, wouldn't you try to offer some extended computational resources to your apps, now that you've gone this way and started to build a cluster... a small cluster, maybe 3-4 PCs. Still, if you think about it - you might have support in the kernel for some BLAS/LAPACK.. OK, let's move it into an outside lib that's just tightly coupled with this shared cluster kernel, but runs in ring 4, so this now makes you happy, right? Oh, and it's a configurable module! So no BS about the Unix guy not being able to get his shell. This thing has a shell and stuff, but you can add those extra modules.

Now, you have some 10Gbit net hardware, some GPUs, some multicores; you join it all and then you can throw your computing resources at whatever is necessary. Notice that the KERNEL is HEAVILY INVOLVED in all this stuff, 'cause of all the networking and coordination, AND NETWORK TRANSPARENT shared virtual memory (this is an answer to your "wtf is transparent networking for?"), so that's why I talk so much about the kernel - NOT meaning EVERYTHING is running in ring 0 and welded together like a huge unbreakable rock asteroid; it's all modular.

I think you can do some pretty neat things with such a setup, don't you agree? Even if it's smoked high and sounds crazy, it's NOT IMPOSSIBLE for technical reasons. It's just another five or so years of work for mighty Microsoft. Well, maybe 10 years. Is it worth it? Dunno. Probably anyone at MS would say, just like you, that I need medical help :) But they said this about many bright people - take medieval times, when just for saying crazy things like "the Earth is round!" you'd be burned alive. So don't treat me harshly; even if I smoke, I still MIGHT have one or two DNA molecules of Copernicus or Giordano Bruno in me :))) Heheh.

quote:
Where are the studies about the feasibility of such things and ... why on earth hasn't Microsoft hired you already? You present such deep insight into the subject that it just blows my mind.


You already answered your own question. They don't hire me because 1) my stuff is the craziest thing one hears about OS kernels and 2) I don't have a 10-year Yale or Stanford education, a Ph.D., or Cutler's worth of military realtime cruise-missile OS design. Heheh :)

quote:
In what way does keeping support for legacy stuff make the platform suddenly new and exciting? I didn't say anything about it being exciting. It's just a real-life necessity.


In 5-10 years, when the OS is finally out of the labs, it's not a necessity anymore, so screw legacy again :)

Notice that if you upgrade Windows NT the way MS does now, this IS a necessity. But it's NOT when you design a new OS for, say, 2015 or so.

quote:
You know that virtualising "costs" - the currency being computer resources? Stupid nonsense - as you said.


Hahaha. Now say that to Steve Jobs and Bill Gates, whose OSes both include virtual machines inside, for DOS and MacOS 9. That's funny, real funny :)

By the way, my imaginative kernel can have this VM for older Win32 stuff, but it should be absolutely separate, as in "I can avoid installing it, and if I skip it there's not a bit of old technology on my machine". Translation: all the old compatibility layers are SEPARATE, so that one day MS can shed this layer easily. No dependencies on old stuff should be tolerated. I feel like you understand me here.

quote:
It takes development resources and time away from the really important task of implementing things no one has implemented before, and this is BAD, so screw floppies, PATA and other artefacts like these.

No it doesn't. Nobody is working on support for floppies - there is no development on it - just support.


Like I said if you wanna put legacy stuff - put it ONLY in that compatibility layer so that you can shed it ALL any time you want. It's a great idea, very elegant. I love it.

quote:
And please note that this standard appeared WITHOUT MS manufacturing any 3D hardware. Sorry to disappoint you here again, buddy.

Sorry for not being that much into gaming.


It has nothing to do with gaming, don't pretend to be stupid again :) It's just a STANDARD GPU INTERFACE, nothing more, nothing less. If games use it - why can't other things, like that cluster stuff, use it?

OK, in conclusion of our discussion, I'd say you made a couple of valid points about encryption being integer math - my mistake, so SSE won't make it in that area. Actually, floating point is indeed hard to insert into the kernel anywhere, so... you're kind of right. "Kind of" means there could be some side issues with SSE when you make that cluster SSE-aware, as in devising proper network compression/streaming protocols for floating point streams or something... hard to say really. I'm not a networking kind of guy; I just heard Matlab has some FP-specific compression going on in its disk routines, so I'm going from there.

And nothing personal - when I say you're stupid it's just words, I know you're not :) Can't always talk politely, sorry.


By logan77 on 4/30/2006 6:59:26 AM , Rating: 2
Yeah, you claim NT is BSD derived just because a piece of BSD IP stack is there. Alright, so Linux has a small piece of NTFS code in its kernel, so then I say Linux is NT derived :P


Nice try :) but I just meant that at least some code (the IP stack) had been taken from the BSDs - I don't know how much or what the impact was, but the fact remains. Of course, under the terms of the BSD license one can just take a piece of code and put it in one's own project as long as one admits it somewhere. I read once that they (MS) hired some BSD folks, but I cannot find any proof of it as of now. Not talking about Dave Cutler :)


Wrong, if you consider design as well. One thing is a hurried patch on top of a mess, and the other is a neat design from scratch. Functionality may be the same, agreed. But the code might be so different you won't believe your eyes. I say a patched mess and a non-patched clean design are very different even when functionality is 100% the same.

Well - you may have a point here - although there is nothing inherent in patches that makes them impaired from a technical point of view - in reality it's more difficult to drive a whole project with them toward a different architectural design. But still - it does not imply that 'more patches = more clutter'. It's largely dependent on the initial design. And a non-modular design definitely hampers producing neat and efficient code. Linux is modular, although it's a monolithic kernel (as opposed to a microkernel).


It's because patching sometimes doesn't work and even Linus & Co have to break stuff, annihilate it and start from scratch once in a while. Well, there they go, your patches and fixes.

But they don't start _everything_ from the beginning - maybe some stuff has to be removed once in a while, but nothing as radical as you claim. The overall kernel design stays untouched.


About 10GBit nets - how about "click a GUI button and turn your 2 PCs linked with 10GB link into an SMP box" feature?

One word: latency.


And how about "add more boxes and get SMP with shared mem and as many CPUs as you have in your PCs together"?

And add more latency? :) I mean, really - if you are talking about distributed computing, i.e. you have a specific task that is easily parallelised, then solutions are there. They may not be perfect or easy to deploy from a casual user's perspective, but nor should they be. Distributed computing is a specific area, not so appealing for everyday uses (say editing a Word document, converting to PDF, Photoshop editing). Of course there may be some benefits (like applying some filters in Photoshop/GIMP to a large collection of files, DVD rips etc.), but probably nothing _kernel_specific_. It's in the application layer of abstraction, not the kernel itself, although an OS like MS Windows probably could ship this functionality. A problem remains with authentication on those other computers (you would have to have an account on them). If, for example, the second computer were your sister's, she wouldn't be very pleased to see her tasks crawling because you started ripping your porn :) using her computer, now would she? But multiple concurrent logins on one machine would be very welcome (not sure if that's not already available). If you have many computers, then you probably have a specific task to perform (render farms) and solutions are already available.
For gaming this stuff will probably never be applicable (too slow response time).
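
To put one rough number on that latency point - a back-of-envelope sketch with ballpark 2006-era figures I'm assuming, not measurements - local RAM answers in about a hundred nanoseconds, while a round trip over even a fast 10GbE hop costs tens of microseconds, so a naive "remote shared memory" access is a couple of orders of magnitude slower:

#include <stdio.h>

int main(void) {
    /* Ballpark assumptions, not measurements. */
    double local_dram_ns     = 100.0;     /* typical local memory access        */
    double nic_round_trip_ns = 20000.0;   /* ~20 us round trip over a 10GbE hop */

    printf("a remote 'memory' access is roughly %.0fx slower than local RAM\n",
           nic_round_trip_ns / local_dram_ns);   /* ~200x */
    return 0;
}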


Oooh cooome ooon man, why are you lying now about "manufacturers not opening their specs" when THEIR SPECS ARE MS STANDARDS SINCE DX10??? So this part of your answer is officially BS :)

The DirectX API is not meant to perform general computations. It is good for transforming triangles, but inverse discrete Fourier transform computations are not done that easily on it, right? What is crucial is the 'specs', i.e. the low-level instructions (not some DX operations the manufacturers "kindly" permit you to use).
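
For a concrete taste of the kind of "general computation" being talked about, here is a naive discrete Fourier transform in plain C - my own toy example, compiled with -lm on a POSIX system (M_PI is POSIX, not strict ISO C). The nested sums and trigonometric accumulations don't map naturally onto the triangle-oriented vertex/pixel pipeline that 2006-era DirectX exposes, which is exactly the point about needing real low-level specs:

#include <math.h>
#include <stdio.h>

#define N 8

int main(void) {
    double in[N] = {1, 0, 0, 0, 0, 0, 0, 0};   /* an impulse, just for illustration */
    double re[N], im[N];

    /* X[k] = sum over n of x[n] * e^(-2*pi*i*k*n/N) */
    for (int k = 0; k < N; k++) {
        re[k] = im[k] = 0.0;
        for (int n = 0; n < N; n++) {
            double angle = -2.0 * M_PI * k * n / N;
            re[k] += in[n] * cos(angle);
            im[k] += in[n] * sin(angle);
        }
        printf("X[%d] = %6.3f %+6.3fi\n", k, re[k], im[k]);
    }
    return 0;
}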


Hahaha. Now say that to Steve Jobs and Bill Gates, whose OSes both include virtual machines inside, for DOS and MacOS 9. That's funny, real funny :)

DOS doesn't have to run fast, since everything written for it was done with slow computers in mind, and now we have much faster rigs. I don't know the details of MacOS 9 emulation, so I won't comment on that one.


It has nothing to do with gaming, don't pretend to be stupid again :) It's just a STANDARD GPU INTERFACE, nothing more, nothing less. If games use it - why can't other things, like that cluster stuff, use it?

Maybe because there is no need for it? What task exactly would you perform on GPUs over a network? Gaming won't do, because of latency. Rendering is OK - but it is ALREADY being done by apps, so ... ? And why this goddamn DX ?!? If you would like a real "standard" then pick OpenGL - it's platform independent and more suitable for "serious" tasks (CAD, rendering etc.). DX (which incorporates not only a graphics API, but also sound-specific stuff) was introduced as a solution for the gaming industry ONLY. It should be fast, but not necessarily accurate. I would gladly see it being ditched in favour of OpenGL instead of trying to lock everyone into their proprietary formats.

What would be of great value is a powerful command line tool (a POSIX-compliant shell?), a new file system without the constant need for defragmentation (ext3? - neee - that would be too good to be true), easy support for other file systems (JFS, XFS, ReiserFS), more flexibility during installation (e.g. more MBR options, more awareness of other systems), changing the registry into something of more elegant design, enforcing and better handling of multiple accounts (now almost everyone on the desktop is using root accounts), a more modular design (separating text mode from the graphical interface?, those modules you've mentioned...).


By Pirks on 5/4/2006 1:16:48 PM , Rating: 1
quote:
I read once that they (MS) hired some BSD folks, but I cannot find any proof of it as of now. Not talking about Dave Cutler :)


Never heard of BSD folks there, but I wouldn't be surprised if they use some even these days, say, for Vista, since it has uniform IPv6 for everything with IPv4 as a fallback. BSD sure should be of use there, so there is a place for them.

However, the design of the original hybrid NT 3.1 kernel itself was done under the lead of Dave Cutler, who was one of the designers of VMS and got many of his ideas from there, so some people even called NT 3.1 a "microVMS" or a personal miniVMS, something like that. It's all in Wikipedia and all over the net, kinda classic MS history stuff - who did what and why.

quote:
in reality it's more difficult to drive a whole project with them toward a different architectural design


This is exactly what I meant. Reality check: it's often the case that an original design can be patched to death, but this patching has to end sooner or later. So, basically, I watched MS dump DOS and Win9x instead of patching them to death, because their design was too ugly, and I also saw Jobs killing the even uglier classic Mac OS after 20 years of patching it. This, however, is never going to happen with Linux or FreeBSD or any other open source OS. It's only a feature of large commercial OSes, and only the successful ones (say, OS/2 could benefit from rewriting its Workplace Shell and Presentation Manager, because these were quite crappy, and its kernel is a mess of 386-specific assembly and would benefit from redoing it properly instead of patching, but it won't happen because OS/2 does not satisfy the "successful" part of the definition).

So, here you see the roots of my logic. I watched MS promising heaven on earth and then delivering a refreshed XP instead. OK, I may be wrong and it's not just a refresh, but you know, after hearing all this WinFS stuff, which makes me remember Soltis and his OS/400 alien beauty and such... I feel that Vista is not that alien beauty. It's gonna be a great and large upgrade, but maybe it's not the revolution yet, in the same sense that NT and Mac OS X were.

Hence my thoughts about "stop lagging behind this stupid toy Mac OS X and do something major, like blow Macs away with a totally new OS in 10 years". I think so because I saw how good the radical transition was both for MS and for Macs, how far they both jumped by ditching overpatched DOS and classic Mac OS. And I wanted to see a repetition of the same story. I wanted to see MS stepping up and telling competitors "you'll get your medicine, just wait until we finish the design of our new kernel". Hell, they could even take this L4 thing, which is quite interesting, and organize a uniform architecture on top of it. Cut out auxiliary servers and fit the OS into a smartphone, or add a bunch of those servers and fit THE SAME OS into an enterprise mainframe. I call this cool, but not Vista. Vista is a nice update, but it is essentially a huge patch for XP, and this new scalable OS probably requires going from a hybrid kernel to an L4-like kernel. And that is not a patchy patch, it's a design from scratch.

quote:
But they don't start _everything_ from the beginning - maybe some stuff has to be removed once in a while, but nothing as radical as you claim. The overall kernel design stays untouched.


Like I said, this is not because patching is better than dumping the old design and switching to a clean new one; it's simply because in the open source world patching and evolution are the only way to move forward. Linus can't afford to dump the monolithic design and switch to L4 internally, that's too big a deal for him. Open source can NOT shed its skin and come out like a butterfly from a cocoon. The only way for them to grow is to drift where the wind blows. Is there an Itanium coming along? Here's your Itanium patch. Is there AMD64? Here's the patch. Mainframe? Patch. Anything else? Patch. Well, how about replacing the monolithic kernel with L4 and a set of independent servers? Nope, that's not a patch, so it won't work, sorry, bye-bye.

It's all OK while the old design can keep up, so you are right here. Now tell me, what would they do if they were writing an open source PC-DOS in 1980? Would they evolve into an enterprise Unix or NT 3.1 ten years later ONLY BY PATCHING DOS? Whoa! Sounds funny, doesn't it?

quote:
If, for example, the second computer were your sister's, she wouldn't be very pleased to see her tasks crawling because you started ripping your porn :)


LOL :)) Well, distributed ripping of porn is just one of the things that might be useful in the future. Porn does not require a cluster, but how about a cluster being used to monitor things around a house? Imagine this scenario - I have a nice house, too nice to be ignored by the local thieves, so I buy a set of hidden wireless cams, set them up everywhere on the perimeter and plug the video feeds into my cluster. The cluster has those 10Gbit or 100Gbit links, and all my 5 or so home PCs are busy watching out for burglars all through the night. Neat! But only if MS cares about making such a plug'n'play cluster a part of their OS, or of this new kernel. I don't wanna spend my life setting up a Beowulf cluster; too complex, and I'm a bit lazy sometimes.

You are right that distributed computing is not very useful for today's tasks, but think: where would they have used all the wireless toys 20 years ago? There was absolutely no market for this stuff, but now things have evolved and voila - radio waves everywhere! What makes you think that advances in computer vision over the next 20 years won't make my scenario possible? How about throwing all my house's computing resources at ripping a nice 150GByte Violet Ray DVD? It takes 6 hours on one 8-core PC, or only 1 hour if I click a button on my desktop and tell my Windows "please get all the cores in the house involved, except for my sister's".

Hence my crazy fantasies about clusters and embedding that in a new kernel (or in a server above a microkernel, which may be a better idea). I just project the situation of 1980 onto the situation nowadays and extrapolate it 20 years further.

It's hard to say whether the NT kernel in its present form is suitable for these kinds of future tasks. Maybe it is, but because of the feature creep maybe it is not. If it takes 6 years for the vendor to patch its OS (the XP->Vista transition), it may be a sign that too much bloat and too many patches are in there. Maybe NT got so fat and messy because Cutler's clean design has turned into who knows what in Vista?

Look, this is what happened with DOS! They got a clean design for the Intel 8088, then they went up to the 80386 by building the whole freakin' WinMe mansion on top of the original DOS mud hut. Now look at NT. They got a clean design for the 1991 era of PCs, essentially for the same 80386. Where are we 15 years later? We are looking at OS X, which is so different from NT because it was designed from scratch about 10 years later than NT, and it has different everything, especially the 3D-accelerated PDF-based windowing system which Vista copied as well.

So why is Vista so late? Is it because the nice clean kernel design was downplayed by moronic managers who couldn't understand what to add and what to change to ship the product on time? Or is it because the clean design of NT has been patched into some amorphous blob where adding things is really hard? They've got very talented, not at all moronic managers, and they tried to quickly patch XP into Vista. They got this 6-year delay as a result. This is why I'm asking now: "is it time to avoid another 8-year delay after this one, and then a 15-year delay after that?" Because I'm afraid Vista got delayed for architectural reasons, not because BG was in charge, as some say.

I'm not sure patching WinMe into a WinMe2003 would have been a good idea; it would take a long time and produce a nightmare, so they dumped it. The same story for Vista - I'm not sure taking this old design forward will somehow prevent significant delays in the future. Patching an old mess like WinMe is the same as patching an old mess called XP and hoping that you can ship a nice lean Vista in 2 years. Know what? Doesn't work. 6 years PLUS cutting off many cool features they promised. No WinFS, nothing. Compared with Windows itself it's progress; compared with OS X it's a failure, BECAUSE... right, you got it! Because OS X was designed much later than NT, hence it has a better design, in the sense that it's better suited to modern hardware (come OOON, who in their sane mind would fit PDF into a windowing system at Microsoft in 1989?? Cutler would shoot anyone proposing that in the head with a big nasty railgun, I tell ya! see how Jobs beat them now?), and it will age slower than Windows just because of that. Very simple logic, easy to grasp, right?

quote:
DirectX API is not meant to perform general computations. It is good for transforming triangles, but reverse discrete fourier transform computations are not made that easily on it, right ?


Right, but there is still gpgpu.org out there, which means what? I think it means that as GPUs evolve and become more and more flexible, people will find more and more ways to tap their computational power. So why wouldn't MS add GPGPU specs to their DX 11 or 12? Why not, IF there is demand? You are right, there's not a lot of demand now; there's only gpgpu.org, there's Havok which uses the GPU as a helper engine, not much, yeah. Do you know what the future holds? I don't, and this is why I propose crazy ideas - to not let MS be beaten by competitors in the future :-)

quote:
DOS doesn't have to run fast, since everything written for it was done with slow computers in mind, and now we have much faster rigs. I don't know the details of MacOS 9 emulation, so I won't comment on that one.


Virtual machines are everywhere; old OSes and their apps are always emulated on newer hardware using virtual machines, and this is why PCs are slowly starting to pick up traits of IBM's VM OS. This is a global trend, and I just extrapolated it into the future. Hence my words about this new kernel being IBM VM-like, which is sooo far from the current NT kernel... you can NOT patch the NT kernel into an IBM VM competitor. Try to prove the opposite and see how your patchy sandcastle crumbles :)

quote:
Maybe because there is no need for it? What task exactly would you perform on GPUs over a network? Gaming won't do, because of latency. Rendering is OK - but it is ALREADY being done by apps, so ... ? And why this goddamn DX ?!? If you would like a real "standard" then pick OpenGL - it's platform independent and more suitable for "serious" tasks


As for the OpenGL issue - MS just doesn't want to lose time waiting for manufacturer A to add necessary feature B on hardware C; they want things made fast and uniform, a click-and-install experience, and most importantly, they can afford it. Trust me, Apple would have dumped OpenGL long ago if they were as big.

What task is uniform DX good for? In its current form probably not much, only stuff from gpgpu.org and Havok, which is not for home users for sure. Still, why not add extensions in DX 11 or 12 which would run some background computations on idle shader units? You have 48 unified shaders in your GPU; say right now you're not running Quake 6 and 24 units are idling, but you want some DVD decode, video recode, a nonlinear video edit/transition, whatever - bingo, you've got that DX waiting for orders.

Your problem is that you see DX as a child's gaming thing, which is a big mistake. Look at Apple Aperture: they use the GPU in a Photoshop-like environment. Look at nVidia: they work with video in their GPU hardware. It is not about gaming only. It's about using your monster GPU minicomputer for everything that requires number crunching, EVERYTHING! Uniform DX10, the constant growth of programmability in shaders, and the amount of local video RAM - all this points NOT only in the direction of Quake 6, as you're trying to tell me. The general trend (GENERAL!) is to make the GPU an excellent flexible renderer WHILE using it for whatever else possible while you're not playing. And the number of software titles that use GPUs for things other than gaming is slowly INCREASING, and it's a matter of time until MS decides that DX is good not only for video extensions, but could also benefit users with computer vision and speech recognition extensions, for example. WHICH USE THE IDLE GPU FOR COMPUTATION! See what I mean?

quote:
is a powerful command line tool (a POSIX-compliant shell?), a new file system without the constant need for defragmentation (ext3? - neee - that would be too good to be true), easy support for other file systems (JFS, XFS, ReiserFS), more flexibility during installation (e.g. more MBR options, more awareness of other systems), changing the registry into something of more elegant design, enforcing and better handling of multiple accounts (now almost everyone on the desktop is using root accounts), a more modular design (separating text mode from the graphical interface?, those modules you've mentioned...)


For a cool text shell - check out Monad, promised to be a wonder. A non-fragmenting file system is great, but where did you get the numbers supporting your claim that NTFS is a more fragmentation-prone FS than ext3? Is there a link to some research about it, or do you just believe in ext3? ;-) The registry is good enough already; you have to propose justified changes. What do you want changed there, and why? The only thing that could be useful is some automatic background backup for the registry, but external utils exist for that, I think. Account separation is enforced in Vista, and I'd say enforced too much :) It should decrease the number of root users on Windows, I think. As for text-GUI separation - cool idea, but MS won't ever care about it; they always made the GUI the default, and Apple did the same even earlier. People who want a small fast text kernel with a cool shell have a crowd of open source OSes to choose from. MS won't go there, just as Linux won't go for dumping the text shell and switching to an X11 GUI only.

