
I don't have permission to delete a shortcut? -- Image courtesy Paul Thurrott's SuperSite
Is Vista everything that Microsoft initially promised?

Paul Thurrott has posted yet another look at Windows Vista. Ever since the first alpha and beta releases of Longhorn/Vista hit the web, Paul has been giving us regular updates on the progress of the operating system. Paul's articles are for the most part positive, with a little hint of negativity thrown in where appropriate.

Paul's latest article, though, lays it all on the line when it comes to Vista. Now that Vista is supposedly feature-complete and much of it will stay as-is when the final product ships, the promises Microsoft made about features in the operating system, along with usability issues and application blunders, are now fair game. Here, Paul rants about missing features that Microsoft promised:

There are so many more examples. But these two, WinFS and virtual folders, are the most dramatic and obvious. Someday, it might be interesting--or depressing, at least--to create a list of features Microsoft promised for Windows Vista, but reneged on. Here are a few tantalizing examples: A real Sidebar that would house system-wide notifications, negating the need for the horribly-abused tray notification area. 10-foot UIs for Sidebar, Windows Calendar, Windows Mail, and other components, that would let users access these features with a remote control like Media Center. True support for RAW image files including image editing. The list just goes on and on.

I must say, I've tried and tried to give Vista more than a second glance. I've tried every beta release that Microsoft has issued, but every time I find myself being less productive and utterly frustrated using the operating system compared to Windows XP.  Fortunately, it looks like Microsoft has a few more months to get some of these issues under control.



By xxtyderxx on 4/20/2006 7:54:19 PM , Rating: 1
I think it will be a lot more organized and all, but the interface is very different from what many people are used to in Windows. When Vista was first going to be released, I was immediately going to buy it. But then it got pushed back, and now features have been removed... I am rethinking Windows and Microsoft altogether. I'm actually thinking of moving to Mac - that's where I think all the good features are, and a great interface to get used to. (I also think Macs are very organized as well.)

RE: Well
By Zelvek on 4/20/2006 8:01:03 PM , Rating: 3
I would too, but the support for games isn't there, and that is one of the biggest uses of a computer for me. Though with Boot Camp the idea looks more appealing.

RE: Well
By xxtyderxx on 4/20/2006 8:03:56 PM , Rating: 3
I do agree. I am not a gamer but a developer, and a Mac would be good for people like me. But if I needed to, I could use Windows with Boot Camp. I also heard Boot Camp will be built into Leopard, the new Mac OS coming probably before Vista.

RE: Well
By egrefen on 4/20/2006 8:04:15 PM , Rating: 2
I was just thinking that. Is there any real reason why you would need Vista for games? I mean, if they run under XP, technically you'd be just fine and dandy running them on an Intel iMac with Boot Camp and XP, no?

RE: Well
By Ard on 4/20/2006 8:33:02 PM , Rating: 2
DX10. Now, granted, the majority, if not all, DX10 games will have a DX9/XP path, but I think it's eventually going to get to a point where you're going to need DX10, and thus Vista, to truly appreciate a game and get proper performance. And of course, if you want to play Halo 2 on your PC, Vista is required.

RE: Well
By Exodus220 on 4/20/06, Rating: 0
RE: Well
By Plasmoid on 4/20/2006 9:06:12 PM , Rating: 2
I can see it now... loads of kiddies with Halo 2 but no vista to play it on.

Really, who is going to be interested in a dated game like Halo 2 on an all-new, expensive (if you want a decent edition) operating system, when UT2007 or Crysis will run on XP?

I really think Microsoft is going to have to rethink the no-DX10-for-XP idea when you consider how long ME lingered on in the face of the vastly superior XP (and 98, but that's another argument). I don't think Vista offers anything for the non-enthusiast, and outside such ploys as DX10 and Halo 2 I think there is no compelling reason for anyone else.

RE: Well
By RDGadz on 4/20/2006 10:51:41 PM , Rating: 2
DirectX is one of the major reasons I am running Windows and not some version of Linux... and I hardly play the latest games on the computer, or the 360 for that matter.

I want support for the most stuff, to have the most options, and the easiest time installing drivers. Right now, drivers for both 32- and 64-bit aren't the least bit stable on Vista.

When things work the way they should on Vista, I am blown away by its speed and usability.

Just like everyone doubted windows XP, Vista is the one to have doubts now, but we will all NEED it in the future.

As for now, I will be okay with the BSOD (wow, when is the last time I saw that?), lack of network functionality, and failure to play Warcraft on Microsoft's latest and greatest: Windows Vista.

RE: Well
By poohbear on 4/21/2006 8:51:10 AM , Rating: 2
"Just like everyone doubted windows XP, Vista is the one to have doubts now, but we will all NEED it in the future."

Huh? Who doubted XP? Everyone was raving about XP's stability and features compared to Win98/95. In fact, the main reason I upgraded was all the good reviews. If Vista is indeed worth upgrading to, I'll do it, but not until there's a general consensus that it's stable and more convenient.

RE: Well
By matthewfoley on 5/19/2006 10:01:10 AM , Rating: 2
No one would doubt the stability improvements of XP over 98. The debate of XP vs 2000 Pro had some merit at the time of XP's release.

RE: Well
By Burning Bridges on 4/21/2006 6:38:52 PM , Rating: 2
There are a few games out there that use the open OpenGL alternative to DirectX, Doom 3 and Quake 4 being among them. With OpenGL the games can be easily ported to both Linux and Mac OS; all that really needs to happen is for more developers to go down that route, as it would effectively remove one of Vista's major selling points for PC gamers (the need to have the latest DirectX).

By OCedHrt on 4/20/2006 9:29:10 PM , Rating: 2
Wow, talk about being agitated. Plus, it's quite easy to tell which window is in focus: it's the one where the X is colored. DUH. I think Paul's been abducted by aliens and this isn't really him.

RE: Wow
By Thrawn on 4/20/2006 10:10:56 PM , Rating: 2
The point was that it is a much smaller difference. As a test I showed it to my mom and she wasn't able to figure out the difference. And she fits the average computer user mold better than anyone I know.

RE: Wow
By iamright on 4/20/2006 11:43:20 PM , Rating: 2
The point is to not have to look for the red X. The point is to make the top window obviously on top. I can tell easily enough, but what about the old people who can hardly use Windows XP, which is not even as bad?

RE: Wow
By Bonrock on 4/21/2006 1:32:55 AM , Rating: 2
I agree with Paul that the window focus thing is a problem. You shouldn't have to look that closely to figure out which one is in focus. MacOS has exactly this problem too, but Windows XP doesn't. Why regress in Vista?

Save us
By Scabies on 4/21/2006 11:11:36 AM , Rating: 2
Hey, as long as it works and has a fully functional XP emulation suite, I'll be fine. I say emulation, because that's essentially what has to be done to enjoy the classics these days. Final Fantasy VII, old OLD school titles like Sam and Max and Wing Commander II, and my nostalgic library never worked on XP without massive driver/OS modifications and stuff like DOSBox. Will we see the same thing elbowing out all of the current XP titles - Half-Life 2, BF2, Unreal whatever-they-have-these-days? Will they be plagued with the "hey, you have a new OS, tough luck, this OS only allows the cool new games we're gonna make you buy now. Oh, and Solitaire 64-bit edition" disease?

RE: Save us
By suryad on 4/22/2006 5:06:53 PM , Rating: 2
God I love those games you mentioned.

By logan77 on 4/21/2006 5:29:31 AM , Rating: 1
So you are basically totally unhappy with the current Vista situation, yet feel confident that a total ground-up rewrite of the code by the very same company will magically change this state...

They should DESIGN NEW KERNEL FROM SCRATCH! And then lay some nice convenient GUI and other subsystems on top. JUST LIKE THEY DID WITH NT WHEN THEY SAW DOS DAYS ARE OVER. It's that simple.

AFAIR they took some parts of the BSD kernel and hired folks from the BSD team when they were "designing" NT - "it's that simple". Writing a new kernel is anything but simple, no matter how much money you put into it. It takes time, patience, vision etc. If it were _that_ simple, as your post suggests, why hasn't Microsoft done it already for Vista?

Reason 2: there's currently a HUGE transition happening in PC world. Multicore CPUs, EFI, pervasive wireless, Internet, e-commerce, all the new hardware, no-execution bit, SSE2/3, Intel VT and AMD Pacifica, gigabit and 3D accelerators in every mobo ..... NEW WORLD requires a NEW KERNEL (plus everything else). Simple, eh?

And where are the figures that show how Vista scales as the number of CPUs increases? From what I know, the Linux kernel _easily_ scales up to 32 CPUs (with regard to tasks that require interoperability between CPUs), and probably well beyond that, so there is a kernel that may do just what you ask for :) .
What about the Internet? I thought that's about the TCP/IP stack, which is what - 25 years old? (hint - not everything that is old is worth s#%@).

>"pervasive wireless" - isn't that a job for drivers, regardless of the kernel?

>"no-execution bit" - and I thought that Service Pack 2 already added support for this...

>"e-commerce" - it's the web browser's role to make that happen, along with conforming to standards

>"SSE2/SSE3" - what do these have to do with the kernel? Do you want specific optimisations for these instruction sets to be made in the kernel? They are multimedia-specific (hint - in the kernel there will be no improvement); it's up to the apps to utilize them.

>"Intel VT and AMD Pacifica" - and your point is ... ? They will make it into Vista...

>"all the new hardware" - drivers _again_

>"gigabit" - what about it ?

>"3D accelerators in every mobo" - you will have your eye-candy in Vista, so what's your point? Aren't modern accelerators fully supported on WinXP already (by their respective manufacturers)?

>"NEW WORLD requires a NEW KERNEL" - no it does not! - it's Microsoft that needs one :) (joke)

MS itself jumped from 9x to NT and thrived on that too. So this WILL WORK again and again. There is NO MAJOR DANGER IN WRITING A NEW OS FROM SCRATCH.

WinXP was built on the basis of Win2000, Win2000 on WinNT, and WinNT used BSD code... so when exactly did someone write an entire kernel from scratch that made it to the mainstream? The Linux kernel, for example, has been in constant development for the last 15 years and is still a work in progress... the point is: this can be done, but it takes time, and a lot of it for that matter.

This new OS should <DREAMS ON>work on multicore CPUs ONLY, have built-in VMM like Xen or something using VT support from both Intel and AMD, be 64-bit ONLY, have NO SUPPORT FOR FLOPPIES (!!!), COM, LPT, PATA and museum crap like that

So what... do you want every task to be at least 2 threads? Xen is not ready yet, nor is it Microsoft's baby. In what way does not ditching 32-bit support suddenly make the platform dated? Don't like floppies? - don't use them... Drivers for COM, LPT and PATA don't make the kernel that much bigger, do they? (and definitely not slower!)

support 3D accelerators from the start (HAL layer for 3D hardware anyone?)

If Microsoft starts to manufacture graphics cards, then yes. Otherwise - no... It's the manufacturers who provide drivers for their stuff.

By logan77 on 4/23/2006 9:37:09 AM , Rating: 1
Dave Cutler has nothing to do with BSD, sorry to disappoint you

Care to share a link to where I claimed it ?

You obviously mixed MS with Apple here

No I did not. The difference being that Apple took the (almost?) entire kernel to build a system upon. Microsoft just took a part and incorporated it into their kernel (AFAIK it was NT 3.1). Links:

go to the copyright part : "Acknowledgements...This product includes software developed by the University of California, Berkeley and its contributors....Portions of this product are based in part on the work of the Regents of the University of California, Berkeley and its contributors. Because Microsoft has included the Regents of the University of California, Berkeley, software in this product, Microsoft is required to include the following text that accompanied such software..."

And these are the release notes for Windows XP !!!,2000061744...

Exactly! This is what I'm talking about! The old kernel might not be able to scale a lot and now we gonna have dual and quad core CPUs everywhere, who knows how many cores in 5-10 years? Maybe 8

So you don't know whether Vista scales well or badly, and you still want a rewrite of the code. If it performs well, then why rewrite? Since we don't know, such statements are a little too early - don't you think so?

I wouldn't be very surprised with 8 cores... this is why it's time to write a new kernel which would work on multiple cores only.

Here we go again... why can't it work on both single and multiple cores? Ahhh, because you only like the brand new shiny stuff that in no way brings back memories of old "museum crap". Man! - all you need is a gaming console...

Being old classic is not equal being the best thing for the job.

True. But I didn't say otherwise.

You're obviously a Unix guy, I can smell it.

I work on Linux, yes. You're not being very on-topic with that, are you?

Back to technical merits, shall we?:

I don't like IPSec being baked on, I don't like security and encryption being patched modules on patched kernels with filesystem patches like icing on a cake. I hate it, really really.

Discuss it with your psychoanalyst; this is not the best place for it. A patch in itself is only a modification of a piece of software, nothing more, nothing less. It can be huge/critical or just a tiny cosmetic change. After you patch the code there is no distinction from the situation where the code had looked like that from the beginning. Stop being so anal-retentive about it. The sole disadvantage of patch-based software development is the management of the patches, not technical merit. And the example of the Linux kernel shows that this problem can be easily overcome.

Yes, WPA supplicant and hostapd USE hardware drivers for Wi-Fi chipsets, but would you include 802.11i support in the DRIVER or in the KERNEL???

What about a driver in the kernel?

and I thought you'd understand why patch for a feature is different from the system designed from scratch around that feature.

And you would design a whole system around the "no-execution bit"? Congratulations!! The "no-execution bit" stuff is a low-level kernel thing; nothing above it has to support it in any way whatsoever. It's the job of the operating system to support it, and once that's done there is nothing more to do about it.

Still denying kernel can't and shouldn't support built in standard high-grade security/encryption instead of a bunch of userland patches on top of a crowd of various browsers?
Keep denying, no problem. You just need some educational reading of books by, say, Frank Soltis and Dave Cutler to understand what I'm trying to tell you.

Yeah! Put this stuff into the kernel - great idea!!! Then you have no choice (it was made for you by the kernel devs), and when the whole industry moves on to another, more secure "standard", what would this whole pile of legacy shit within the kernel do? Sit and laugh in your face... How would you like it then, huh?

Jesus smokin Christ... I'm so tired of typing, but I'll just write it all up and store it in my essays folder to reuse later, coz there are crowds of guys like you on the forums. About the SSE stuff - you are a very unimaginative person, in addition to being illiterate in OS architectures.

If technical merits would be sufficient you wouldn't resort to personal attacks, would you ?

I can imagine a lot of stuff for SSE right away. Let's start:

I just can't wait !...

1) WinFS, indexing, searches, while working on encrypted file system - SSE might be useful to speed numerics up

Aaaaahhh - talking about WinFS, which doesn't even exist? It's short for Windows -->Future<-- Storage :) . So there is your problem - you didn't get this promised candy... Indexing is strictly an integer-type operation, and the SSE floating-point registers are... well, floating-point registers designed specifically to speed up multimedia encoding/decoding. Searches? How???? I think you should really provide some links here.
>"Speed up numerics"? What numerics (in the kernel)??? If you have an app then you can easily take advantage of those SSE instructions (actually it's the compiler's job to do it); nothing kernel-specific. It shows your absolute, total, utter lack of knowledge here. You just want something new regardless of technical merits or real benefits.

2) encryption of
pervasive IPSec/VPNs and everything else kernel needs encrypted

And you would like to make this encryption happen on floating-point registers? Heeehh....

4) 3D audio processing, other hi-def audio algorithms

Yes - this "advanced" "feature" would definitely make server admins cheer endlessly. Seriously though - you want (again) a console, not an operating system.

6) standard physics support, Havok-style (gonna be a part of DirectX eventually, I think)

So you are a gamer :) ... Like there is such a thing as "standard physics in games" :) Jeez, man (child?) - physics support for games in the kernel would make this new, advanced, unrivalled operating system? With the DirectX "standard"? I almost fell off my chair :) You made my day.

and kernel support for 802.11i AGAIN, and kernel support for IPSec (which should be default IP stack) AGAIN and using new hardware features like SSE for encryption in kernel AGAIN and smart load balancing between cores when doing multitasking with on the fly compression/encryption AGAIN, so please stop pretending to be dumb again.

At least you don't have to.

how about beating your unimaginative brain _again_?

Just because I don't smoke what you do doesn't make my brain unimaginative.

Imagine a new network-transparent WinFS which is optimized for distributed storage across 10gbit links. Yeah, gigabit is gonna step aside soon, so if MS is to write a new kernel from scratch it makes sense to tune it to 10gbit links and go from there.

What is it that you want exactly (assuming that you know what you want)? 10GbE support in the kernel, or a "transparent network"? And what the hell does a transparent network mean anyway, and what uses would it have? I can imagine some applications (like speeding up some disk operations, assuming ethernet transfer >> disk throughput), but certainly nothing that is realistically beneficial or a killer feature. And certainly nothing that would warrant writing a new kernel. Man - you _are_ hopeless.

You never heard about it, so I won't waste time explaining it all here. Go and read some research literature before giving me your stupid answers, PLEEAASEE!!!

WTF are you blathering about??!?!? I heard of this project. It's up to apps to use it - what the hell does the kernel have to do with it? You know why GPUs are not commonly used for general computations? Because vendors won't open up their specs. Your "stupid" example in no way supports your former post.

You just can't understand that patching things forever doesn't work for commercial OS vendors

It works, _if_ the basic design was decent enough. Did you hear about Solaris, QNX? No? Because there are no games for them (& DirectX 9999)?

I want every bit of a kernel designed with SMP in mind. EVERYTHING that could be parallelized MUST BE parallelized WITHOUT ANY COMPROMISE!

Which parts _exactly_ should be improved that way? Where are the studies on the feasibility of such things, and... why on earth hasn't Microsoft hired you already? You present such deep insight into the subject that it just blows my mind.

In what way keeping support for legacy stuff makes platform suddenly new and exciting?

I didn't say anything about it being exciting. It's just real life necessity.

I'm not against support for Win32 in a new OS, it's easy to do by running Win32 inside its own virtual machine, but why even think about using old architecture inside the kernel itself? Stupid nonsense of course.

You know that virtualising "costs" - the currency being computer resources? Stupid nonsense - as you said.

But DO NOT USE legacy hardware in kernel!

It's just support for the FAT16 file system - what's wrong with that? In what way will getting rid of it improve the operating system?

It takes development resources and time away from the really important task of implementing things no one implemented before, and this is BAD, so screw floppies, PATA and other artefacts like these.

No it doesn't. Nobody is working on support for floppies - there is no development on it, just support. Why is PATA an artefact? Does your HDD have continuous read transfers anywhere near 100MB/s (133?)? And seriously - this is a small part of the code, and in no way does this stupid shit you are writing here support your thesis that there is a need for a new kernel.

Actually the 3D HAL is already there and is called DX10. Congrats on keeping your eyes closed. And please note that this standard appeared WITHOUT MS manufacturing any 3D hardware. Sorry to disappoint you here again, buddy.

Sorry for not being that much into gaming. So you propose inserting the (gaming) "standard" DirectX into the kernel (again)? If anything I would push for broader adoption of OpenGL - which BTW _is_ an industry-acclaimed, platform-independent OPEN ->standard<-.

Didn't want to distract you from your gaming ... "buddy".

By logan77 on 4/30/2006 6:59:26 AM , Rating: 2
Yeah, you claim NT is BSD derived just because a piece of BSD IP stack is there. Alright, so Linux has a small piece of NTFS code in its kernel, so then I say Linux is NT derived :P

Nice try :) but I just meant that at least some code (the IP stack) had been taken from the BSDs - I don't know how much or what the impact was, but the fact remains. Of course, under the terms of the BSD license one can just take a piece of code and put it in one's own project as long as one acknowledges it somewhere. I read once that they (MS) hired some BSD folks, but I cannot find any proof of it as of now. Not talking about Dave Cutler :)

Wrong, if you consider design as well. One thing is a hurried patch on top of a mess, and the other is a neat design from scratch. Functionality may be the same, agreed. But the code might be so different you won't believe your eyes. I say patched mess and non patched nice design are very different given that functionality is 100% the same.

Well - you may have a point here - although there is nothing inherent in patches that makes them inferior from a technical point of view - in reality it's just more difficult to steer a whole project toward a different architectural design with them. But still, it does not imply that 'more patches = more clutter'. It's largely dependent on the initial design. And a non-modular design definitely hampers producing neat and efficient code. Linux is modular, although it's a monolithic kernel (as opposed to a microkernel).

It's because patching sometimes doesn't work and even Linus & Co have to break stuff, annihilate it and start from scratch once in a while. Well, there they go, your patches and fixes.

But they don't start _everything_ from the beginning - maybe some stuff has to be removed once in a while, but nothing THAT radical as you claim. The overall kernel design stays untouched.

About 10GBit nets - how about "click a GUI button and turn your 2 PCs linked with 10GB link into an SMP box" feature?

One word: latency.

And how about "add more boxes and get SMP with shared mem and as many CPUs as you have in your PCs together"?

And add more latency? :) I mean really - if you are talking about distributed computing, i.e. you have a specific task that is easily parallelised, then solutions are already there. They may not be perfect or easy to deploy from a casual user's perspective, but neither should they be. Distributed computing is a specific area, not so appealing for everyday uses (say editing a Word document, converting to PDF, Photoshop editing). Of course there may be some benefits (like applying some filters in Photoshop/GIMP to a large collection of files, DVD rips etc.), but probably nothing _kernel_specific_. It's in the application layer of abstraction, not the kernel itself, although an OS like MS Windows could probably ship this functionality. A problem remains with authentication on those other computers (you would have to have an account on them). If, for example, the second computer is your sister's, she wouldn't be very pleased to see her tasks crawling because you started ripping your porn :) using her computer, now would she? But multiple concurrent logins on one machine would be very welcome (not sure if that's not already available). If you have many computers then you probably have a specific task to perform (render farms) and solutions are already available.
For gaming this stuff will probably never be applicable (too slow a response time).

Oooh cooome ooon man, why are you lying now about "manufacturers not opening their specs" when THEIR SPECS ARE MS STANDARDS SINCE DX10??? So this part of your answer is officially BS :)

The DirectX API is not meant to perform general computations. It is good for transforming triangles, but inverse discrete Fourier transform computations are not made that easily with it, right? What is crucial is the 'specs', i.e. the low-level instructions (not some DX operations the manufacturers "kindly" permit you to use).

Hahaha. Now say that to Steve Jobs and Bill Gates, whose both OSes include virtual machines inside, for DOS and MacOS 9. That's funny, real funny :)

DOS doesn't have to run fast, since everything written for it was written with slow computers in mind; now we have much faster rigs. I don't know the details of MacOS 9 emulation so I won't comment on that one.

It has nothing to do with gaming, don't pretend being stupid again :) It's just a STANDARD GPU INTERFACE, nothing more nothing less. If games use it - why other things like that cluster stuff can't use it?

Maybe because there is no need for it? Exactly what task would you perform on networked GPUs? Gaming won't do, because of latency. Rendering is OK - but that is ALREADY being done by apps, so...? And why this goddamn DX?!? If you want a real "standard" then pick OpenGL - it's platform-independent and more suitable for "serious" tasks (CAD, rendering etc.). DX (which incorporates not only a graphics API but also sound-specific stuff) was introduced as a solution for the gaming industry ONLY. It should be fast, but not necessarily accurate. I would gladly see it ditched in favour of OpenGL instead of trying to lock everyone into their proprietary formats.

What would be of great value: a powerful command-line tool (a POSIX-compliant shell?), a new file system without the constant need for defragmentation (ext3? - naah, that would be too good to be true), easy support for other file systems (JFS, XFS, ReiserFS), more flexibility during installation (e.g. more MBR options, more awareness of other systems), replacing the registry with something of more elegant design, enforcing and better handling of multiple accounts (right now almost everyone on the desktop is using a root account), a more modular design (separating text mode from the graphical interface?, those modules you've mentioned...).

By Pirks on 5/4/2006 1:16:48 PM , Rating: 1
I read once that they (MS) hired some BSD folks, but I cannot find any proof of it as of now. Not talking about Dave Cutler :)

Never heard of BSD folks there, but I wouldn't be surprised if they use some even these days, say, for Vista, since it has uniform IPv6 for everything with IPv4 as a fallback. BSD code sure should be of use here, so there is a place for them.

However, the design of the original hybrid NT kernel itself was led by Dave Cutler, who was one of the designers of OpenVMS and got many of his ideas from there, so some people even called NT 3.1 a "microVMS" or personal miniVMS, something like that. It's all in Wikipedia and all over the net, kinda classic MS history stuff - who did what and why.

in reality it's more difficult to drive whole project with them to different architectural design

This is exactly what I meant. Reality check shows that it's often the case that original design could be patched to death but this patching has to end sooner or later. So, basically, I was watching how MS dumped DOS and Win9x instead of patching them to death because their design was too ugly, and I also saw Jobs killing even more ugly classic Mac OS, after 20 years of patching it. This, however, is never going to happen with Linux or FreeBSD or any other open source OS. It's only a feature of large commercial OSes, and only the successful ones (say, OS/2 could benefit from rewriting its workplace shell and presentation manager, because these were quite crappy, and also its kernel is a mess of 386 specific assembly and would benefit from redoing it properly instead of patching, but it won't happen because OS/2 does not satisfy that "successful" part of definition).

So, here you see the roots of my logic. I was watching MS promising heaven on earth and then delivering a refreshed XP instead. OK, I may be wrong and it's not just a refresh, but you know, after hearing all this WinFS stuff, which makes me remember Soltis and his OS/400 alien beauty and such... I feel that Vista is not that alien beauty. It's gonna be a great and large upgrade, but maybe it's not the revolution yet, not in the same sense as NT and Mac OS X were.

Hence my thoughts about "stop lagging behind this stupid toy Mac OS X and do something major, like blow Macs away with a totally new OS in 10 years". I think so because I saw how good the radical transition was both for MS and for Macs, how far they both jumped by ditching overpatched DOS and Mac OS Classic. And I wanted to see a repetition of the same story. I wanted to see MS stepping up and telling competitors "you'll get your medicine, just wait until we finish the design of our new kernel". Hell, they could even take this L4 thing, which is quite interesting, and organize a uniform architecture on top of it. Cut out auxiliary servers and fit the OS into a smartphone, or add a bunch of those servers and fit THE SAME OS into an enterprise mainframe. I call this cool, but not Vista. Vista is a nice update, but it is essentially a huge patch for XP, and this new scalable OS probably requires going from a hybrid kernel to an L4-like kernel. And that is not a patchy patch, it's a design from scratch.

But they don't start _everything_ from the beginning - maybe some stuff has to be removed once in a while, but nothing THAT radical as you claim. The overall kernel design stays untouched.

Like I said, this is not because patching is better than dumping old design and switching to a clean new one, it's simply because in open source world patching and evolution is the only way to move forward. Linus can't afford dumping monolithic design and switch to L4 internally, that's too big a deal for him. The open source can NOT shed skin and come out like a butterfly from a cocoon. The only way for them to grow is to drift where the wind blows. Is there an Itanium coming along? Here's your Itanium patch. Is there AMD64? Here's the patch. Mainframe? Patch. Anything else? Patch. Well, how about replacing monolithic kernel with L4 with a set of independent servers? Nope, it's not a patch, so it won't work, sorry, bye-bye.

It's all OK while the old design can keep it up, so you are right here. Now tell me, what would they do if they were writing an open source PC-DOS in 1980? Would they evolve into enterprise Unix or NT 3.1 ten years later ONLY BY PATCHING DOS? Whoa! Sounds funny, doesn't it?

If, for example, the second computer were your sister's, she wouldn't be very pleased to see her tasks crawling because you started ripping your porn :)

LOL :)) Well, distributed ripping of porn is just one of the things that might be useful in the future. Porn does not require a cluster, but how about a cluster being used to monitor things around a house? Imagine this scenario - I have a nice house, too nice to be ignored by the local thieves, so I buy a set of wireless hidden cams, set them up everywhere on the perimeter and plug the video feeds into my cluster. The cluster has those 10Gbit or 100Gbit links, and all 5 or so of my home PCs are busy watching out for burglars all through the night. Neat! But only if MS cares about making such a plug'n'play cluster a part of their OS, or of this new kernel. I don't wanna spend my life setting up Beowulf; too complex, and I'm a bit lazy sometimes.

You are right that distributed computing is not very useful for today's tasks, but think about where they would have used all the wireless toys 20 years ago. There was absolutely no market for this stuff, but now things have evolved and voila - radio waves everywhere! What makes you think that advances in computer vision over 20 years won't make my scenario possible? How about throwing all my house's computing resources at ripping a nice 150GByte Violet Ray DVD? It takes 6 hours on one 8-core PC, or only 1 hour if I click a button on my desktop and tell my Windows "please get all the cores in the house involved, except for my sister's".
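For what it's worth, the back-of-envelope arithmetic behind that claim can be checked. The PC and core counts here are the comment's own hypothetical numbers, not measurements:

```python
# Back-of-envelope check of the comment's hypothetical numbers:
# one 8-core PC needs 6 hours, so the rip is 6 * 8 = 48 core-hours of work.
core_hours = 6 * 8

# "All the cores in the house, except for my sister's": 5 home PCs
# with 8 cores each, minus hers, leaves 4 machines.
cores_available = 4 * 8

# Assuming perfectly linear scaling (no network or coordination cost),
# which is the optimistic best case for a distributed job like this.
hours = core_hours / cores_available
print(hours)  # 1.5
```

So even under ideal scaling, 4 spare PCs get the job down to about 1.5 hours, not 1; the quoted 1-hour figure would need the full 48 cores running in parallel, i.e. 6 such machines.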

Hence my crazy fantasies about clusters and embedding that in a new kernel (or in a server above the microkernel, which may be a better idea). I just project the situation in 1980 onto the situation nowadays and extrapolate it 20 years further.

It's hard to say whether the NT kernel in its present form is suitable for these kinds of future tasks. Maybe it is, but because of the feature creep, maybe it is not. If it takes 6 years for the vendor to patch his OS (the XP->Vista transition), it may be a sign that too much bloat and too many patches are in. Maybe NT got so fat and messy because Cutler's clean design has turned into who knows what in Vista? Look, this is what happened with DOS! They had a clean design for the Intel 8088, then they went up to the 80386 by building a whole freakin' WinMe mansion on top of the original DOS mud hut. Now look at NT. They had a clean design for the 1991 era of PCs, essentially for the same 80386. Where are we 15 years later? We are looking at OS X, which is so different from NT because it was designed from scratch about 10 years later than NT, and it has different everything, especially a 3D-accelerated, PDF-based windowing system, which Vista copied as well. So why is Vista so late? Is it because the nice clean kernel design was downplayed by moronic managers who couldn't figure out what to add and what to change to ship the product on time? Or is it because the clean design of NT has been patched into some amorphous blob where adding things is really hard? They have very talented, not at all moronic, managers, and they tried to quickly patch XP into Vista. They got this 6-year delay as a result. This is why I'm asking now - "is it time to avoid another 8-year delay after this one, and then a 15-year delay after that?" Because I'm afraid Vista got delayed for architectural reasons, not because BG was in charge, as some say. I'm not sure patching WinMe into a WinMe2003 would have been a good idea; it would take a long time and produce a nightmare, so they dumped it. The same story goes for Vista - I'm not sure taking this old design forward will somehow prevent significant delays in the future. Patching an old mess like WinMe is the same as patching the old mess called XP and hoping you can ship a nice lean Vista in 2 years. Know what?
Doesn't work. 6 years PLUS the cut-off of many cool features they promised. No WinFS, nothing. Compared with Windows itself it's progress; compared with OS X it's a failure, BECAUSE... right, you got it! Because OS X was designed much later than NT, hence it has a better design, in the sense that it's better suited to modern hardware (come OOON, who in their sane mind would fit PDF into a windowing system at Microsoft in 1989?? Cutler would shoot anyone proposing that in the head with a big nasty railgun, I tell ya! See how Jobs beat them now?), and it will age slower than Windows just because of that. Very simple logic, easy to grasp, right?

The DirectX API is not meant to perform general computations. It is good for transforming triangles, but inverse discrete Fourier transform computations are not done that easily on it, right?

Right, but it is still out there, which means what? I think it means that as GPUs evolve and become more and more flexible, people will find more and more ways to tap their computational power. So why wouldn't MS add GPGPU specs to their DX 11 or 12? Why not, IF there is demand? You are right, there's not a lot of demand now; there's only Havok, which uses the GPU as a helper engine, not much, yeah. Do you know what the future holds? I do not, and this is why I propose crazy ideas, so as not to let MS be beaten by competitors in the future :-)
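As an aside, the "Fourier transforms are hard on GPUs" point deserves a second look: the inverse DFT is just a dense matrix-vector multiply, exactly the kind of data-parallel arithmetic shader hardware is built for. A minimal sketch of that identity (plain NumPy on the CPU standing in for the GPU; this is not DirectX code):

```python
import numpy as np

def idft_matrix(n):
    """Inverse DFT as an explicit matrix: W[j, k] = exp(2*pi*i*j*k/n) / n."""
    k = np.arange(n)
    return np.exp(2j * np.pi * np.outer(k, k) / n) / n

x = np.array([1.0, 2.0, 3.0, 4.0], dtype=complex)
spectrum = np.fft.fft(x)

# One dense matrix-vector product recovers the original signal -
# the same shape of computation as transforming a batch of vertices.
recovered = idft_matrix(4) @ spectrum
assert np.allclose(recovered, x)
```

In practice fast FFT algorithms do far less work than this dense product, but the sketch shows why "GPUs can only push triangles" was already too pessimistic: both tasks reduce to the same multiply-accumulate pattern.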

DOS doesn't have to run fast, since everything that was written for it was written with slow computers in mind, and now we have much faster rigs. I don't know the details of Mac OS 9 emulation, so I won't comment on that one.

Virtual machines are everywhere; all old OSes and their apps are always emulated on newer hardware using virtual machines, and this is why PCs are slowly starting to pick up traits of IBM's VM OS. This is a global trend, and I just extrapolated it into the future. Hence my words about this new kernel being IBM VM-like, which is sooo far from the current NT kernel... you can NOT patch the NT kernel into an IBM VM competitor. Try to prove the opposite and see how your patchy sandcastle crumbles :)

Maybe because there is no need for it? What task exactly would you perform on GPUs over a network? Gaming won't do, because of latency. Rendering is OK - but that is ALREADY being done by apps, so...? And why this goddamn DX?!? If you want a real "standard" then pick OpenGL - it's platform independent and more suitable for "serious" tasks

As for the OpenGL issue - MS just doesn't want to lose time waiting for manufacturer A to add a necessary feature B on hardware C. They want things made fast and uniform, a click-and-install experience, and most importantly, they can afford it. Trust me, Apple would have dumped OpenGL long ago if they were as big.

What tasks is uniform DX good for? In its current form probably not much, only stuff from gpgpu and Havok, which is not for home users, for sure. Still, why not add extensions in DX 11 or 12 which would run some background computations on idle shader units? You have 48 uniform shaders in your GPU; say right now you're not running Quake 6 and 24 units are idling, but you want some DVD decoding, a video recode, a nonlinear video edit/transition, whatever - bingo, you've got that DX waiting for orders.

Your problem is that you see DX as a child's gaming thing, which is a big mistake. Look at Apple Aperture: they use the GPU in a Photoshop-like environment. Look at nVidia: they work with video in their GPU hardware. It is not about gaming only. It's about using your monster GPU minicomputer for everything that requires number crunching, EVERYTHING! Uniform DX10, the constant growth of programmability in shaders, and the amount of local video RAM - all of this points NOT only in the direction of Quake 6, as you're trying to tell me. The general trend (GENERAL!) is to make the GPU an excellent, flexible renderer WHILE using it for whatever else is possible when you're not playing. And the number of software titles that use GPUs for something other than gaming is slowly INCREASING, and it's a matter of time until MS decides that DX is good not only for video extensions, but could also benefit users with, for example, computer vision and speech recognition extensions. WHICH USE THE IDLE GPU FOR COMPUTATION! See what I mean?

...is a powerful command line tool (a POSIX-compliant shell?), a new file system without the constant need for defragmentation (ext3? - neee, that would be too good to be true), easy support for other file systems (JFS, XFS, ReiserFS), more flexibility during installation (e.g. more MBR options, more awareness of other systems), changing the registry into something of more elegant design, enforcing and better handling of multiple accounts (now almost everyone on the desktop is using root accounts), a more modular design (separating text mode from the graphical interface?, those modules you've mentioned...)

For a cool text shell - check out Monad, promised to be a wonder. A non-fragmenting file system is great, but where did you get numbers supporting your claim that NTFS fragments more than ext3? Is there a link to some research about it, or do you just believe in ext3? ;-) The registry is good enough already; you have to propose justified changes. What do you want changed there and why? The only thing that could be useful is some automatic background backup of the registry, but external utils exist for that, I think. Account separation is enforced in Vista, and I'd say enforced too much :) It should decrease the number of root users in Windows, I think. As for text-GUI separation - cool idea, but MS won't ever care about it; they always made the GUI the default, and Apple did the same even earlier. People who want a small fast text kernel with a cool shell have a crowd of open source OSes to choose from. MS won't go there, just as Linux won't go for dumping the text shell and switching to an X11-only GUI.

By Bonrock on 4/21/2006 1:42:02 AM , Rating: 2
Most of Thurrott's criticisms about Vista are very valid, and I'm glad he's putting them out there so the Windows team can get some useful feedback. That said, let's not jump to conclusions about Vista just yet. Microsoft still has another six months or so to iron out these issues before release to manufacturing.

Now, if they don't iron out these issues before release, Vista will blow. Some of the issues Thurrott mentions are minor, but the current state of User Account Protection is abysmal. If it stays the way it is now, everyone will just run as an administrator to circumvent UAP, or get frustrated enough to buy a Mac. However, I'm willing to give Microsoft a chance to show that they're aware of these problems and can fix them before Vista's release. It's only fair.

By crystal clear on 4/21/2006 1:57:46 AM , Rating: 2
This question should be put forward for review only when we have the final version up for sale. When you have the product on your desktop, ready to use - then you ask the above question.
I think that's a better way to judge a product than commenting on a beta version or some build number.
Don't jump to conclusions; rather, wait for the product to be released and then ask yourself the above question to get the right answers.

About Vista.
By drunkenmastermind on 4/21/2006 3:31:29 AM , Rating: 2
I installed the beta on my computer and cannot say that I had a wonderful time with it. The thing which really rubbed me up the wrong way is that on the folder view button there is no option to show files as a list. This means that files just run down the page instead of filling the window. I also did not like the way that the folder icons displayed the folder contents; too much angle or something. Anyway, I don't understand why Bill and his company, as profitable as they are... can't put out the slickest OS, hands down?

So so so ugly!
By iamright on 4/20/06, Rating: 0
"I f***ing cannot play Halo 2 multiplayer. I cannot do it." -- Bungie Technical Lead Chris Butcher
Related Articles

Copyright 2016 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki