

The Army has decided to upgrade all of its computers, like those shown here (at the NCO Academy's Warrior Leaders Course), to Windows Vista. It says the adoption will increase its security and improve standardization. It also plans to upgrade from Office 2003 to Office 2007. Because many soldiers have never used Vista or Office '07, the Army will provide special training to bring them up to speed.  (Source: U.S. Army)
Army will upgrade all its computers to Vista by December

Critics who bill Microsoft's Windows Vista as a commercial failure for failing to surpass Windows XP in sales and for its inability to capitalize on the netbook market should perhaps reserve judgment a bit longer.  Just as Windows 7 hype is reaching full swing in preparation for an October release, the U.S. Army announced that, like many large organizations, it will wait on upgrading to Windows 7.  Unlike most, however, it is planning a major upgrade -- to Windows Vista.

The U.S. Army currently has 744,000 desktop computers, most of which run Windows XP.  Only 13 percent of them have been upgraded to Windows Vista so far, according to Dr. Army Harding, director of Enterprise Information Technology Services.

It announced in a press release that it will be upgrading all of the remaining systems to Windows Vista by December 31st.  The upgrade was mandated by a Fragmentary Order published Nov. 22, 2008.

In addition to Windows Vista, the Army's version of Microsoft Office will also be upgraded.  As with Windows, the Army is forgoing the upcoming new version -- Office 2010 -- in favor of an upgrade to Office 2007.  Currently about half of the Army's computers run Office 2003 and half run Office 2007.

The upgrade will affect both classified and unclassified networks.  Only standalone weapons systems (such as those used by nuclear depots) will remain unchanged.  Dr. Harding states, "It's for all desktop computers on the SIPR and NIPRNET."

Army officials cite the need to bolster Internet security and standardize its information systems as key factors in selecting a Windows Vista upgrade.  Likewise, they believe that an upgrade to Office 2007 will bring better document security and easier interfacing with other programs, despite the steeper learning curve associated with the program (which is partially due to the new interface, according to reviewers).

Sharon Reed, chief of IT at the Soldier Support Institute, says the Army will provide resources to help soldiers learn the ropes of Windows Vista.  She states, "During this process, we are offering several in-house training sessions, helpful quick-tip handouts and free Army online training."

The U.S. Army's rollout will perhaps be the largest deployment of Windows Vista in the U.S.  Most large corporations keep quiet about how many Windows Vista systems versus Windows XP systems they've deployed.  However, past surveys and reports indicate that most major businesses have declined to fully adopt Windows Vista.  Likewise, U.S. public schools and other large government organizations have, at best, only partially adopted Vista.


Comments



RE: Missing the point
By descendency on 5/23/2009 6:15:06 PM , Rating: -1
No, they are 80x86-64 (everything from Athlon 64 to Nehalem).

Let me make this perfectly clear: there are two instruction sets that are "common" in CPUs right now (one is FAR more common), x86 and x64. There is ONE architecture that implements the x64 instruction set, the Itanium (well, and Itanium 2). There are boatloads of CPUs (even ones sold at Best Buy) that support extended x86 (AKA x86-64).

x86 is "32 bit". x86-64 is 32 bit instruction set with ability to address 64 bits (2^64) of RAM.

However, you managed to miss the point entirely (that consumer-level products do not require 4+ GB of RAM, and that buying faster HDDs and better utilizing graphics cards would be more important at the consumer level). Congratulations.

Case in point: Adobe Photoshop CS4 and GPU processing. While it isn't implemented for everything, what it is implemented for is TONS faster.

The "more RAM" or "more CPU" solutions are what is holding back computing. There are areas that need far more improvement than those.

Many people here have never written one line of code in their lives and are trying to tell each other how they would improve software. I've written hundreds of thousands. There are many ways to optimize your program (for CPU, for RAM, for energy efficiency, for cache, for GPU, etc...). Right now, it seems like lots of the industry is optimizing for CPU cycles which means the only things that will improve performance are: Higher RAM bandwidth, More cache, or faster CPUs (all three are highly unlikely to just grow exponentially any time soon...).

This is due to one simple thing: the fastest growth in the computing industry over the last 20 years has been clock speed. Over the last few years, this has become a LOT less true.

The only other option is to look for alternative measures of optimization. Parallel tasks are faster on the GPU, therefore better GPU support would be more important than 64-bit support.


RE: Missing the point
By foolsgambit11 on 5/23/2009 6:56:41 PM , Rating: 5
You've got your terms mixed up. 'x64' is a shorthand (or possibly a non-AMD-specific reference) for x86-64. IA-64 is the Itanium instruction set. They are not compatible, but x86-64 isn't somehow less than 64 bit because of that. It is a 64-bit instruction set with the ability to run older programs designed for previous x86 instruction sets, potentially all the way back to the 8086, I guess. The Pentium's (and 386's) instruction set, for instance, wasn't less than 32 bit just because it was built to be compatible with the old 8086 and 286 instruction set.

But you're right that most computation done on consumer computers doesn't require 64 bits. It's rare you'll be dealing with a number greater than 4.3 billion on a home computer, other than the previously mentioned case of allocating system resources. When we're talking about users' computations, the advantage of natively-computing 64-bit numbers is rarely needed.
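A minimal C sketch of that ~4.3 billion ceiling, assuming a typical compiler where uint32_t arithmetic wraps modulo 2^32 (the variable names are just for illustration):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t n32 = UINT32_MAX;  /* 4,294,967,295 -- the ~4.3 billion ceiling */
    uint64_t n64 = UINT32_MAX;

    n32 += 1;  /* wraps around to 0: the 32-bit value overflows */
    n64 += 1;  /* a 64-bit register holds 4,294,967,296 natively */

    printf("32-bit result: %u\n", n32);
    printf("64-bit result: %llu\n", (unsigned long long)n64);
    return 0;
}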

However, I think you're wrong that over-engineering computer resources is a problem. Having access to vast computing resources may encourage sloppy coding, but it's not what's keeping people from taking advantage of a 10x+ increase in computing speed. That speed-up is there whether we have 1 GB or 16 GB of RAM. What the additional resources do allow for is sloppy coding. It sounds bad, but it lowers the bar to entry for people who want to develop programs. Anything that allows more people to compete in a marketplace should be good for the consumer overall.


RE: Missing the point
By sinful on 5/23/2009 6:58:36 PM , Rating: 2
quote:
Let me make this perfectly clear: there are two instruction sets that are "common" in CPUs right now (one is FAR more common), x86 and x64. There is ONE architecture that implements the x64 instruction set, the Itanium (well, and Itanium 2). There are boatloads of CPUs (even ones sold at Best Buy) that support extended x86 (AKA x86-64).


Itanium is IA64. It has nothing to do with x64.


RE: Missing the point
By zebrax2 on 5/23/2009 7:16:03 PM , Rating: 2
What's not needed now doesn't mean it's not needed tomorrow.
As 64-bit (x86-64) slowly takes chunks of the market, programmers/developers and hardware vendors will also slowly start giving focus to it. And BTW, it does give some performance improvement when the program is coded properly.
Vista 32 vs. Vista 64:
http://www.extremetech.com/article2/0,2845,2280811...


RE: Missing the point
By croc on 5/23/2009 7:52:51 PM , Rating: 2
"Let me make this perfectly clear, there are two instruction sets that are "common" in CPUs right now (one is FAR more common), x86 and x64. There is ONE architecture implements x64 instruction set, the Itanium (well and Itanium 2). There are boatloads of CPUs (even ones sold at best buy) that support extended x86 (AKA x86-64)."

There are several RISC-based CPUs on the market, not one. Every major 'big iron' manufacturer has made/is making RISC-based CPUs: IBM's PPC, Sun's SPARC systems, HP uses a variation of the Alpha (not the bastardized Itanium that Intel seems to have abandoned...), Fujitsu has their own CPUs, etc. All are RISC-based platforms, and all have been since DEC proved the advantages with the original Alpha.

x86 CPUs are all CISC-based, and most modern x86-based processors use a variation of AMD's AMD64 extended instruction set.

Please stop passing misinformation.


RE: Missing the point
By Pryde on 5/23/2009 8:56:16 PM , Rating: 4
The terms x86-64 and x64 are often used to refer to x86-64 processors from any company. There is no such thing as x64.

Intel Itanium (formerly IA-64) architecture is not compatible with x86.

AMD has since renamed x86-64 to AMD64. AMD licensed its x86-64 design to Intel, where it is marketed under the name Intel 64. You can be sure that if Intel lost its AMD64 license, AMD would lose its x86 license.


RE: Missing the point
By rninneman on 5/25/2009 2:40:08 AM , Rating: 3
You are very confused/misinformed about 64-bit computing. x64 is the marketing name Microsoft gave to its software that runs on AMD's x86-64 (now known as AMD64) and Intel's EM64T (now known as Intel 64). Most developers have adopted the same marketing name. The Itanium series CPUs run IA64 software, which is a completely different ISA from x64. Itanium cannot run x64 code, only IA64.

While the x86 architecture is technically CISC, the advent of SSE has morphed the ISA into a hybrid of sorts. Too much to go into for this post.


RE: Missing the point
By TA152H on 5/25/2009 3:46:14 AM , Rating: 5
You are worse than he is.

Do any of you people posting have any idea what you're talking about? It's like reading nothing but blowhards that have no clue, but like to hear themselves post. Shut up if you don't know what you're talking about!

To clear up all the misinformation posted by people who obviously are not in the computer industry, we'll start with x86 and the Itanium.

In fact, the original Itanium DID run x86 code in hardware. Intel decided to remove it and use software emulation.

The Alpha is no longer being developed, but was 64-bit from when it was conceived. It is also a very overrated, but still decent, instruction set. People who have never used an Alpha, and know essentially nothing about it, like to post how much better the instruction set was than everything else. In fact, it was not. They made extremely expensive processors, and spent enormous amounts of money on hand-coding a lot of the logic, making the implementation effective in some cases. In reality, it was always swapping places with IBM's POWER, so it was never clearly a superior product. Well, not for long, anyway; they leapfrogged each other. It was a horrible thing to work with, though, and got hotter than Hell. It's still made, mostly for OpenVMS, but HP is not doing further development on it.

Intel has certainly not discontinued the Itanium, or even deprecated it. They continue to gain market share every year with the Itanium, and HP has many operating systems that run on it and is very committed to it. They did just delay Tukwila again, though, which is an ongoing problem, since their current processors are still based on 90 nm technology. But it is still being developed, and still gaining market share.

Most of all, you people have to stop talking about 64-bits and x86-64 like they are the same thing! 64-bits is not very important, but x86-64 is not just a move to 64-bits. Were it so, the stupid posts about numbers exceeding 32-bit values would actually not be stupid, but sadly, they are. x86-64 got rid of a lot of baggage, like MMX, the x87 instruction set, and memory segmentation (which was still around in the 386 instruction set, but since the largest segment was 32 bits, it was transparent if desired, and no one used it), etc... It also added things, like eight more registers, which can have an important impact on performance, especially with L1 caches getting slower (4 clock cycles on the Nehalem).

There are disadvantages to 64-bit addressing as well, and it's why the 8088 had memory segmentation in the first place (most people think it was to protect applications from each other, but it was not, and was never used that way). On the 8088 (or 8086), you'd just specify a 16-bit address, and based on the value in the appropriate segment register, you'd generate a real 20-bit address (Intel shifted the segment register left four bits, then added it to the offset address). So you'd save memory by not having to specify the 20-bit address. You'd have to update your segment registers, of course, when you needed something outside of the 64K segment you were in, but memory at that time was very small, and it was considered a worthwhile trade-off.
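For concreteness, a small sketch of that 8086 real-mode address formation -- segment shifted left four bits, plus the 16-bit offset, producing a 20-bit physical address (the example segment:offset values are hypothetical):

#include <stdint.h>
#include <stdio.h>

/* 8086 real mode: physical = (segment << 4) + offset, truncated to 20 bits */
static uint32_t real_mode_address(uint16_t segment, uint16_t offset) {
    return (((uint32_t)segment << 4) + offset) & 0xFFFFF;
}

int main(void) {
    /* hypothetical segment:offset pair 0x1234:0x5678 */
    printf("0x%05X\n", real_mode_address(0x1234, 0x5678)); /* prints 0x179B8 */
    return 0;
}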

64-bit addresses also consume more memory and lower code density. Of course, we have so much memory now that it does not really matter. Well, except that it lowers performance, because we have caches now, and lower code density means you hit a slower cache, or main memory, more often. And caches cannot simply be increased in size to compensate for this ugly side effect, since the larger a cache is, the slower it is, all other things being equal, so you have to add more wait states. (For example, the Conroe's L2 cache was one clock cycle faster than the Penryn's because of the Penryn's additional size. The Nehalem's is five clock cycles faster than the Penryn's; that's why they made it only 256K -- it's also faster because it's not shared between cores anymore.)
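A tiny sketch of the density point: the same pointer-heavy structure grows when pointers go from 4 to 8 bytes, so fewer nodes fit in each cache line (the struct here is purely illustrative):

#include <stdio.h>

/* A pointer-heavy node: two pointers plus a payload. On a typical 32-bit
   target this is 12 bytes; on a 64-bit target it pads out to 24, so the
   same list occupies roughly twice the cache footprint. */
struct node {
    struct node *next;
    struct node *prev;
    int value;
};

int main(void) {
    printf("sizeof(struct node) = %zu bytes\n", sizeof(struct node));
    return 0;
}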

So, if there were no changes besides going to 64-bit, you'd generally expect x86-64 to be slower due to lower code density - unless you were using more memory, or somehow using very large numbers. For most applications, it would be slower though. That's why you see uneven performance. The enhancements, like 16 registers can improve performance, but the lower code density can lower it. Whichever is more relevant within that workload will dictate whether you see an increase or decrease in performance.

Oh, and DEC did not invent the RISC concept, or even come close to it. Why do you pass this misinformation on, when you clearly have no idea what you're talking about! It's infuriating when idiots do this, and then someone repeats it because they don't realize you're an idiot.

CDC was the first to create a true RISC processor, in a machine called the 6600. It used a 10 MHz processor attached to 10 barrel processors. It could not even load or store data; all of this was done by the barrel processors that kept it fed.

IBM also released the RT PC long before the Alpha, which, of course, was a RISC processor. Berkeley might also get offended by you saying Alpha proved RISC's superiority. But, I'll let you do that research on your own, assuming you actually dislike passing on misinformation. There's no evidence of that though.


RE: Missing the point
By amanojaku on 5/23/2009 9:18:10 PM , Rating: 4
quote:
x86 is "32 bit". x86-64 is 32 bit instruction set with ability to address 64 bits (2^64) of RAM.
Sigh...

When people refer to 64-bit CPUs they are referring to the word size, i.e., the size of the CPU registers (data and/or instruction) and the amount of data processed at once. 64-bit data registers yield the following native ranges:

Unsigned Integers - 0 to 4,294,967,295
Signed Integers - -2147483648 to 2147483647
Floating Point - See http://en.wikipedia.org/wiki/IEEE_754
Memory addresses - 0 to 16 exbibytes (colloquially, and incorrectly, referred to as exabytes)
Bus component transfer - 0 to 64 bits transferred between bus components (PCIe slots, CPU sockets, RAM slots, etc...)

There are exceptions, however, generally based on practicality. 16 exbibytes of RAM is inconceivable today due to cost and complexity of manufacture. I challenge you to find any company that CAN manufacture that much RAM globally in a year, let alone for one system. CPU manufacturers reduce memory address space to reflect the limits of available memory (pebibytes) in order to shrink CPU die size. Why bother producing memory addressing for memory that won't exist for a few years, if not decades? As higher RAM densities are produced the CPU address space will increase accordingly.

AMD, IBM, Intel, SUN, VIA, and other companies produce native 64-bit CPUs with the addressing hobbled, and in some cases the bus width is limited to 32-bit. All other features are 64-bit. The x86-64 instruction set architecture includes 32 and 64-bit registers, appropriately activated when the OS chooses an operating mode.
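A quick back-of-envelope in C for how much memory each physical-address width can actually reach; the widths listed are illustrative of the era's implementation limits, not any specific product:

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Bytes addressable per physical-address width. Vendors wire up far
       fewer than 64 address lines because the full 2^64 space is unbuildable. */
    int widths[] = {32, 36, 40, 48, 64};
    for (int i = 0; i < 5; i++) {
        double gib = ldexp(1.0, widths[i] - 30);  /* 2^w bytes, expressed in GiB */
        printf("%2d-bit addressing: %.0f GiB\n", widths[i], gib);
    }
    return 0;
}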


RE: Missing the point
By foolsgambit11 on 5/24/2009 3:55:22 PM , Rating: 5
I think you've got the ranges for 32-bit, not 64-bit, at least for signed and unsigned integer. 64-bit is about 0 to 18 quintillion or so.


RE: Missing the point
By amanojaku on 5/24/2009 5:41:24 PM , Rating: 2
You are 100% correct. Thanks for pointing that out!

Unsigned integer range: 0 to 18,446,744,073,709,551,615
Signed integer range: -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
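Those corrected bounds can be checked directly against a C compiler's <stdint.h> limits, as a minimal sketch:

#include <inttypes.h>
#include <stdio.h>

int main(void) {
    /* the 64-bit integer ranges quoted above, straight from the headers */
    printf("unsigned: 0 to %" PRIu64 "\n", UINT64_MAX);
    printf("signed:   %" PRId64 " to %" PRId64 "\n", INT64_MIN, INT64_MAX);
    return 0;
}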


"It's okay. The scenarios aren't that clear. But it's good looking. [Steve Jobs] does good design, and [the iPad] is absolutely a good example of that." -- Bill Gates on the Apple iPad














botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki