



Microsoft announced that it will release a cloud service specifically for U.S. state, local, and federal government agencies.

The new service -- which was codenamed "Fairfax," but is now officially called Windows Azure U.S. Government Cloud -- was announced during a press briefing in San Francisco yesterday. It aims to provide a safe, separate place for government data.

Windows Azure U.S. Government Cloud will be hosted in Microsoft's data centers in Iowa and Virginia, but government customers will still be able to choose a public, private, or hybrid solution.


According to Microsoft, all data, hardware, and supporting systems will be in the continental U.S., and data will stay on servers that only contain data from other U.S. federal, state, and local government customers. Also, all operating personnel will be U.S. residents screened for PPT-Moderate clearance.

"The U.S. government is eager to realize the benefits of the cloud, adopting a Cloud First policy for new investments," said Susie Adams, Federal Chief Technology Advisor for Microsoft. "Microsoft is committed to supporting these initiatives and is uniquely positioned to offer the flexibility U.S. government agencies need."

The new government cloud service is similar to what Microsoft did with Office 365: the company sells a customized version of its Office 365 platform for government called, well, Office 365 for Government.

There's no specific release date for the new government cloud, but Microsoft said it's coming soon.

Source: Microsoft






what a joke
By Argon18 on 10/8/2013 2:17:29 PM , Rating: -1
Government is using Windows on a server now??? No wonder government has so many problems with hackers, viruses, malware, etc.

Last government agency I worked for was all IBM AIX and RHEL. Zero problems. Budgets must really be tight these days if government is resorting to Windows to run servers.




RE: what a joke
By themaster08 on 10/8/2013 2:21:24 PM , Rating: 5
Please shut up with your unrelenting hate towards Microsoft.

We get it, you got a virus on your Windows 95 PC 17 years ago and have hated MS since, but please, move on.


RE: what a joke
By frozenassets on 10/8/2013 2:28:33 PM , Rating: 1
Well, regardless of motive he does have a point.

I support an environment of roughly 1,000 Windows servers and 300 IBM AIX servers.

I get paged practically every night for a Windows server: hung, needs a reboot, security patching went sideways, application support asking for a reboot because of a memory leak, etc., etc.

Very, very rarely do I get paged for an AIX box. Even rarer? iSeries; I get paged about once a year.

That said, they have different roles. Windows servers for us are the general-purpose workhorses meant to run small applications that aren't mission critical. The mission-critical stuff always runs on either AIX or iSeries (AS/400).


RE: what a joke
By Ammohunt on 10/8/2013 2:51:39 PM , Rating: 3
Don't blame your inability to manage Windows properly, or craptastical software (of which there is a lot in the Windows ecosystem), on the operating system. Purpose for purpose, such as in a database role (if built by people who know what they are doing... like me), modern Windows is just as stable and reliable as *nix.


RE: what a joke
By bah12 on 10/8/2013 4:44:20 PM , Rating: 2
Then you seriously have a problem. I have one Server 2008 R2 box that has been up for 170 days. Either you have very poorly written apps running on the Windows boxes, or you have a very poorly designed setup.


RE: what a joke
By Lord 666 on 10/8/2013 4:58:25 PM , Rating: 1
Then you are an irresponsible junior level hack admin.

There have been a plethora of updates for R2 that require reboots.


RE: what a joke
By inighthawki on 10/8/2013 5:36:57 PM , Rating: 2
Pardon my ignorance, since I've never actually run a Windows server before, but can you explain? Do the server variants of Windows receive more frequent updates than the client versions? Otherwise I can't really see how you would need to update any more frequently than once a month (Patch Tuesday).

Thanks.


RE: what a joke
By extide on 10/8/2013 6:21:01 PM , Rating: 2
You seem to have answered your own question then, or maybe you are unaware of how many days there are in a month...


RE: what a joke
By inighthawki on 10/8/2013 6:30:02 PM , Rating: 3
From his comment:
"There have been a plethora of updates for R2 that require reboots."

he just made it sound like it requires reboots on a daily or weekly basis or something. I was just clarifying. I wasn't trying to take sides; it was just a legitimate question to understand the argument, is all.


RE: what a joke
By SoCalBoomer on 10/9/2013 3:07:16 PM , Rating: 2
Most of the updates on Server don't require a reboot. And I would say that my rack of Server 2008r2 machines haven't needed an UNSCHEDULED reboot in . . . well, I've had them for 18 months so . . . 18 months.


RE: what a joke
By Cheesew1z69 on 10/10/2013 2:49:00 PM , Rating: 2
Or, he runs this box at home and doesn't want to do the updates?


RE: what a joke
By AMDftw on 10/8/2013 2:34:50 PM , Rating: 2
lol


RE: what a joke
By maveric7911 on 10/8/2013 2:33:40 PM , Rating: 2

quote:
Budgets must really be tight these days if government is resorting to Windows to run servers.


It costs more to run Windows than Red Hat on servers.


RE: what a joke
By eagle470 on 10/8/2013 4:35:31 PM , Rating: 1
What agency was this? I used to work as a consultant and I'm constantly getting offers for government contracts; every single one includes Windows to some extent. Even NOAA still uses Windows systems.

I also know for a fact that there have been multiple government agencies trying to convert to an all Linux platform and have been failing miserably, mostly because of a severe drop in performance.

RHEL blames VMware, VMware blames HP, and HP blames them both.

Last I checked, people dying because systems weren't up wasn't cheap. But I could be wrong, maybe human life is cheaper than I think it is.


RE: what a joke
By amanojaku on 10/8/2013 7:43:50 PM , Rating: 2
As a former VMware employee, none of this surprises me. What usually happens is this:

1) Organization hears about VMware and reduced server costs
2) Organization gets VMware sales pitch and demo
3) Organization deploys VMware on poorly built infrastructure
4) Organization yells at VMware and claims virtualization doesn't work
5) VMware points fingers at OS and hardware vendor because it's too afraid to say the client screwed up

The issue tends to be a lack of experience on the part of the server and storage admins.

The server admins cram too many VMs on the boxes. The general rule of thumb is 3-5 single-vCPU VMs per physical core. As you add MORE vCPUs to VMs, you get FEWER VMs per core. Think about it: you're adding more vCPUs because the VM needs more CPU time, so you need to reduce the number of VMs requesting those CPUs. If the VM has two vCPUs, you should only have two VMs sharing cores. If the VM has four or more vCPUs, you should only have one VM accessing the cores. A sixteen-core box should only have 48-80 single-vCPU VMs, or 32 two-vCPU VMs, or four four-vCPU VMs. These are rules of thumb, and you really should be looking at performance statistics from the VMkernel (not the virtualized OS!) to see if any VM is starved for CPU time. I can't tell you the number of times I've seen a four-socket, four-core box running 20 four-vCPU VMs...
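To put rough numbers on that, here is a minimal sketch (Python, with illustrative figures; the helper name is made up for this example) of the vCPU-to-core arithmetic described above:

# Minimal sketch: estimate vCPU:pCPU oversubscription for a host.
# The ratios are the rough rules of thumb from this comment, not VMware
# guidance; the real check is VMkernel performance statistics such as
# CPU ready time.

def overcommit_ratio(physical_cores, vm_counts):
    """vm_counts maps vCPUs-per-VM -> number of such VMs."""
    total_vcpus = sum(vcpus * count for vcpus, count in vm_counts.items())
    return total_vcpus / physical_cores

# The four-socket, four-core box from the comment running 20 four-vCPU VMs:
print(overcommit_ratio(16, {4: 20}))   # 5.0 vCPUs per core -- heavily oversubscribed

# A layout in line with the 3-5 single-vCPU VMs per core rule of thumb:
print(overcommit_ratio(16, {1: 64}))   # 4.0 vCPUs per core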

Then there's the storage side of things. This is probably the least understood aspect of virtualization: you CANNOT virtualize storage based on DISK CAPACITY. It doesn't matter if your servers only have 30GB of data. What matters is the number of IOPS PER DISK. A SAN cannot make up for a lack of disk IOPS, unless that SAN has enough CACHE to hold the entire array's worth of data. The average desktop has 10-30 IOPS. The average server has 100-200 IOPS. Database and mail servers do WAY more than that. That's why the average physical server has at least two SAS drives, because they can satisfy 150+ IOPS. When virtualizing, you create another problem: random IO. Since each vDisk is spreading its data across different areas of a physical disk, even sequential access from multiple VMs turns into random access at the physical layer. Imagine a single 600GB drive @ 150 IOPS hosting ten 30GB VMs that need 100 IOPS each: THAT'S 1000 RANDOM IOPS!
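Here is a minimal sketch of that IOPS arithmetic (Python, using the rough per-disk and per-VM figures from this comment; not vendor sizing guidance):

# Minimal sketch: size back-end spindles by IOPS demand, not capacity.

import math

def disks_needed(vm_iops, iops_per_disk=150):
    """Spindles required to satisfy the combined (random) IO demand."""
    total_iops = sum(vm_iops)
    return total_iops, math.ceil(total_iops / iops_per_disk)

# Ten 30GB VMs at ~100 IOPS each on 150-IOPS SAS drives:
total, disks = disks_needed([100] * 10)
print(total, "IOPS ->", disks, "disks")
# 1000 IOPS -> 7 disks, even though 10 x 30GB fits comfortably on one 600GB drive.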

There are two more problems introduced by storage: RAID levels and capacity-based performance. It is common to find RAID levels set to 5, or even 6, in order to protect data. Those levels also add additional IOs per write. Instead, use RAID 10. ALWAYS USE RAID 10 FOR WRITE-HEAVY WORKLOADS. In fact, never use RAID 5 or 6 for anything but a crappy file server that won't be missed. As to capacity-based performance, people don't seem to realize that as a disk fills, its performance drops. Well, gamers know this, what with all the benching we do. The highest performance is found at the outermost portion of the disk; conversely, the slowest performance is found at the innermost. What's the difference? About 50% of the IOPS. The capacity threshold is something like 40% before IOPS drop significantly. Some people push it to 50% full, but a mission-critical system will start to churn at that point. This is why short-stroking was so common in the past. Short-stroked drives are formatted to only half capacity, with only the outermost portion being usable. If you can't get short-stroked drives, your storage admin will need to keep track of capacity and add disks to the volume as necessary, or create new volumes.
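And a small sketch of the RAID write-penalty point, using the commonly cited back-end IO multipliers (RAID 10 = 2, RAID 5 = 4, RAID 6 = 6 per front-end write); real arrays vary with caching and full-stripe writes:

# Minimal sketch: effective random-write IOPS after the RAID write penalty.

WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def effective_write_iops(disks, iops_per_disk, raid_level):
    """Usable write IOPS for a 100% random-write workload."""
    return disks * iops_per_disk / WRITE_PENALTY[raid_level]

for level in ("RAID10", "RAID5", "RAID6"):
    print(level, effective_write_iops(8, 150, level))
# RAID10 600.0, RAID5 300.0, RAID6 200.0 -- hence RAID 10 for write-heavy
# virtualized workloads.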

There's a lot more to tuning virtualized workloads, but this post is already long enough, and experienced admins already know about these things. There are differences in the way that Windows and Linux run under VMware, as well, but you can just read VMware's documentation for that. I've virtualized four-socket, 8GiB RHEL servers running Oracle with NO IMPACT to performance, while adding DR capability and cost savings. It can be done.


RE: what a joke
By Lord 666 on 10/8/2013 10:27:06 PM , Rating: 2
Sounds like you are an old ESX 3.5 guy. Almost all of your numbers go out the window once SSDs are used.

The only part that has stayed the same is lack of experience being the limiting factor.


RE: what a joke
By amanojaku on 10/8/2013 11:14:52 PM , Rating: 2
vSphere 4.1, actually. Anyway, show me an SSD that costs as little per GB as a hard disk, especially those from EMC. Small and medium-sized businesses continue to use hard disks. The majority of vSphere installations use SANs with FC and SAS. And don't even try to mention SSDs in a DAS setup. Not only is that expensive, it provides no high availability.


RE: what a joke
By Lord 666 on 10/9/2013 12:10:47 AM , Rating: 2
All depends on architecture and tiered storage. With scaled environments like web servers and replica databases, DAS with SSD will work fine. Plus, the price of a 600GB HP SAS Gen8 hard drive is only $250 less than a 600GB S3500. Throw in the AES encryption and it's an absolute no-brainer for the SSD. Load up a Gen8 with SSDs and make it an iSCSI target for a much more affordable SAN.

Taking it to the next level, install Fusion-IO drives and that chatter about RAID 10 vs RAID 5 completely goes away along with having SAN functionality and HA.

With respect to small and medium businesses, VMware marketed the VSA. One of the problems with that configuration was a performance hit due to the RAID setup. The trick around that, even though originally unsupported, was to install SSDs instead.


RE: what a joke
By amanojaku on 10/9/2013 11:35:43 AM , Rating: 2
I'm not sure if you're involved in procurement, but your numbers and methodology are incorrect. You're looking at internal server storage, not SAN storage. No responsible admin runs virtualized infrastructure on internal storage. Even if the storage is internally redundant, the server could fail, killing access to that storage.

Even if you did use internal storage, you'd be stupid to buy HP's drives. An Intel S3500 at 600GB is $800; HP's 600GB 15K SAS is $580. Or you could buy Hitachi or Seagate at $300. FYI, HP doesn't manufacture drives, so it's probably rebranded from one of these companies.

You've drunk a little too much kool-aid to be pushing iSCSI (or even FCoE) and RAID 5. Data networks have increased latency in comparison to storage networks. Storage networks have guaranteed, predictable latencies. And RAID 5? Seriously, you enjoy the performance penalty of four times the IO due to partial writes? All enterprise storage companies have recommended against the use of RAID 5, especially for large data applications. In many cases, it's actually unsupported.

And this fixation with SSD is unhealthy for your company's bottom line. You need to balance performance and capacity with cost. The whole point to virtualization is to reduce cost while maintaining performance, and possibly adding high availability. Your proposed solutions are akin to saying that a Bugatti Veyron sold at Nissan GT-R prices is a great deal, when the company only has budget for a Ford Taurus.


"And boy have we patented it!" -- Steve Jobs, Macworld 2007













