


There's no specific release date for the new government cloud, but Microsoft said it's coming soon

Microsoft announced that it will release a cloud service specifically for U.S. state, local, and federal government agencies.

The new service -- which was codenamed "Fairfax," but is now officially called Windows Azure U.S. Government Cloud -- was announced during a press briefing in San Francisco yesterday. It aims to provide a safe, separate place for government data.

Windows Azure U.S. Government Cloud will be hosted in Microsoft's data centers in Iowa and Virginia, but government customers will still be able to choose a public, private, or hybrid solution.


According to Microsoft, all data, hardware, and supporting systems will be in the continental U.S., and data will stay on servers that only contain data from other U.S. federal, state, and local government customers. Also, all operating personnel will be U.S. residents screened for PPT-Moderate clearance.

"The U.S. government is eager to realize the benefits of the cloud, adopting a Cloud First policy for new investments," said Susie Adams, Federal Chief Technology Advisor for Microsoft. "Microsoft is committed to supporting these initiatives and is uniquely positioned to offer the flexibility U.S. government agencies need."

The new government cloud service is similar to what Microsoft did with Office 365: the company already sells a customized version of its Office 365 platform for government agencies called, well, Office 365 for Government.

There's no specific release date for the new government cloud, but Microsoft said it's coming soon.

Source: Microsoft



Comments



RE: what a joke
By eagle470 on 10/8/2013 4:35:31 PM , Rating: 1
What agency was this? I used to work as a consultant and am constantly getting offers for government contracts; every single one includes Windows to some extent. Even NOAA still uses Windows systems.

I also know for a fact that there have been multiple government agencies trying to convert to an all-Linux platform and failing miserably, mostly because of a severe drop in performance.

Red Hat blames VMware, VMware blames HP, and HP blames them both.

Last I checked, people dying because systems weren't up wasn't cheap. But I could be wrong; maybe human life is cheaper than I think it is.


RE: what a joke
By amanojaku on 10/8/2013 7:43:50 PM , Rating: 2
As a former VMware employee, I'm not surprised by any of this. What usually happens is this:

1) Organization hears about VMware and reduced server costs
2) Organization gets VMware sales pitch and demo
3) Organization deploys VMware on poorly built infrastructure
4) Organization yells at VMware and claims virtualization doesn't work
5) VMware points fingers at the OS and hardware vendors because it's too afraid to say the client screwed up

The issue tends to be a lack of experience on the part of the server and storage admins.

The server admins cram too many VMs onto the boxes. The general rule of thumb is 3-5 single-vCPU VMs per physical core. As you add MORE vCPUs to VMs, you get FEWER VMs per core. Think about it: you're adding more vCPUs because the VM needs more CPU time, so you need to reduce the number of VMs competing for those cores. If the VM has two vCPUs, you should only have two VMs sharing cores. If the VM has four or more vCPUs, you should only have one VM accessing the cores. A sixteen-core box should only have 48-80 single-vCPU VMs, or 32 two-vCPU VMs, or four four-vCPU VMs.

These are rules of thumb, and you really should be looking at performance statistics from the VMkernel (not the virtualized OS!) to see if any VM is starved for CPU time. I can't tell you the number of times I've seen a four-socket, four-core box running 20 four-vCPU VMs...
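To put rough numbers on that, here is a minimal Python sketch that just encodes the ratios above (rules of thumb only, not official VMware sizing guidance):

def max_vms(physical_cores, vcpus_per_vm):
    """Rough (low, high) VM count for one host, per the ratios above."""
    if vcpus_per_vm <= 1:
        return 3 * physical_cores, 5 * physical_cores   # 3-5 single-vCPU VMs per core
    if vcpus_per_vm == 2:
        return 2 * physical_cores, 2 * physical_cores   # "two VMs sharing cores"
    n = physical_cores // vcpus_per_vm                  # one VM per group of cores
    return n, n

for vcpus in (1, 2, 4):
    low, high = max_vms(16, vcpus)
    print(f"16 cores, {vcpus}-vCPU VMs: roughly {low}-{high}")
# 16 cores -> 48-80 single-vCPU VMs, 32 two-vCPU VMs, or 4 four-vCPU VMs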

Then there's the storage side of things. This is probably the least understood aspect of virtualization: you CANNOT virtualize storage based on DISK CAPACITY. It doesn't matter if your servers only hold 30GB of data each. What matters is the number of IOPS PER DISK. A SAN cannot make up for a lack of disk IOPS unless that SAN has enough CACHE to hold the entire array's worth of data. The average desktop does 10-30 IOPS. The average server does 100-200 IOPS. Database and mail servers do WAY more than that. That's why the average physical server has at least two SAS drives: each can satisfy 150+ IOPS.

Virtualizing also creates another problem: random IO. Since each vDisk spreads its data across different areas of a physical disk, even sequential access from multiple VMs turns into random access at the physical layer. Imagine a single 600GB drive @ 150 IOPS hosting ten 30GB VMs that each need 100 IOPS: THAT'S 1,000 RANDOM IOPS!
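The capacity-versus-IOPS trap is easy to show with back-of-the-envelope math. The short Python sketch below uses the rough per-disk figures from the paragraph above (about 150 IOPS for a 600GB SAS drive), not vendor specs:

import math

# Spindle count is driven by aggregate random IOPS, not by capacity.
def disks_needed(vm_count, iops_per_vm, vm_size_gb, disk_iops=150, disk_size_gb=600):
    by_capacity = math.ceil(vm_count * vm_size_gb / disk_size_gb)
    by_iops = math.ceil(vm_count * iops_per_vm / disk_iops)
    return by_capacity, by_iops

cap, iops = disks_needed(vm_count=10, iops_per_vm=100, vm_size_gb=30)
print(f"by capacity: {cap} disk(s); by random IOPS: {iops} disks")
# by capacity: 1 disk(s); by random IOPS: 7 disks -- the single 600GB drive
# "fits" the data but is roughly 7x short on IOPS, which is the trap above.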

There are two more problems introduced by storage: RAID levels and capacity-based performance. It is common to find RAID levels set to 5, or even 6, in order to protect data, but those levels add extra back-end IOs per write. Instead, use RAID 10. ALWAYS USE RAID 10 FOR WRITE-HEAVY WORKLOADS. In fact, never use RAID 5 or 6 for anything but a crappy file server that won't be missed.

As for capacity-based performance, people don't seem to realize that a disk's performance drops as it fills. Well, gamers know this, what with all the benching we do. The highest performance is found at the outermost portion of the disk; conversely, the slowest is at the innermost. What's the difference? About 50% of the IOPS. The capacity threshold is something like 40% full before IOPS drop significantly. Some people push it to 50% full, but a mission-critical system will start to churn at that point. This is why short-stroking was so common in the past: short-stroked drives are formatted to only half capacity, with only the outermost portion usable. If you can't get short-stroked drives, your storage admin will need to keep track of capacity and add disks to the volume as necessary, or create new volumes.
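On the write-penalty point, the usual rules of thumb are 2 back-end IOs per host write for RAID 10, 4 for RAID 5, and 6 for RAID 6. The sketch below simply applies those factors; real arrays with large write caches will do better:

WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

def usable_write_iops(disks, iops_per_disk, raid_level):
    # Raw spindle IOPS divided by the back-end IOs each host write costs.
    return disks * iops_per_disk // WRITE_PENALTY[raid_level]

for level in WRITE_PENALTY:
    print(f"8 x 150-IOPS disks, {level}: ~{usable_write_iops(8, 150, level)} write IOPS")
# RAID 10: ~600, RAID 5: ~300, RAID 6: ~200 -- RAID 5/6 give up half or more
# of the write throughput, which is why they hurt write-heavy VMs.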

There's a lot more to tuning virtualized workloads, but this post is already long enough, and experienced admins already know about these things. There are differences in the way that Windows and Linux run under VMware, as well, but you can just read VMware's documentation for that. I've virtualized four-socket, 8GiB RHEL servers running Oracle with NO IMPACT to performance, while adding DR capability and cost savings. It can be done.


RE: what a joke
By Lord 666 on 10/8/2013 10:27:06 PM , Rating: 2
Sounds like you are an old ESX 3.5 guy. Almost all of your numbers go out the window once SSDs are used.

The only part that has stayed the same is lack of experience being the limiting factor.


RE: what a joke
By amanojaku on 10/8/2013 11:14:52 PM , Rating: 2
vSphere 4.1, actually. Anyway, show me an SSD that costs as little per GB as a hard disk, especially those from EMC. Small and medium-sized businesses continue to use hard disks. The majority of vSphere installations use SANs with FC and SAS. And don't even try to mention SSDs in a DAS setup. Not only is that expensive, it provides no high availability.


RE: what a joke
By Lord 666 on 10/9/2013 12:10:47 AM , Rating: 2
All depends on architecture and tiered storage. With scaled environments like web servers and replica databases, DAS with SSD will work fine. Plus, the price of a 600GB HP SAS Gen8 hard drive is only $250 less than a 600GB S3500. Throw in the AES encryption, and it's an absolute no-brainer for the SSD. Load up a G8 with SSDs and make it an iSCSI target for a much more affordable SAN.

Taking it to the next level, install Fusion-io drives and that chatter about RAID 10 vs. RAID 5 completely goes away, along with having SAN functionality and HA.

With respect to small and medium businesses, VMware marketed VSA. One of the problems with that configuration was a performance hit due to the RAID setup. The trick around that, even though it was originally unsupported, was to install SSDs instead.


RE: what a joke
By amanojaku on 10/9/2013 11:35:43 AM , Rating: 2
I'm not sure if you're involved in procurement, but your numbers and methodology are incorrect. You're looking at internal server storage, not SAN storage. No responsible admin runs virtualized infrastructure on internal storage. Even if the storage is internally redundant, the server could fail, killing access to that storage.

Even if you did use internal storage, you'd be stupid to buy HP's drives. An Intel S3500 at 600GB is $800; HP's 600GB 15K SAS is $580. Or you could buy Hitachi or Seagate at $300. FYI, HP doesn't manufacture drives, so its drives are probably rebranded from one of these companies.
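For what it's worth, the per-gigabyte math on those quoted prices works out roughly like this (prices as stated above, not current street prices):

# $/GB for the 600GB drives quoted above.
drives = {"Intel S3500 SSD": 800, "HP 15K SAS": 580, "Hitachi/Seagate": 300}
for name, price in drives.items():
    print(f"{name}: ${price / 600:.2f}/GB")
# ~$1.33, ~$0.97 and ~$0.50 per GB respectively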

You've drunk a little too much Kool-Aid to be pushing iSCSI (or even FCoE) and RAID 5. Data networks have higher latency than storage networks, which offer guaranteed, predictable latencies. And RAID 5? Seriously, you enjoy the performance penalty of four times the IO due to partial-stripe writes? All enterprise storage companies have recommended against the use of RAID 5, especially for large data applications. In many cases, it's actually unsupported.

And this fixation with SSD is unhealthy for your company's bottom line. You need to balance performance and capacity with cost. The whole point to virtualization is to reduce cost while maintaining performance, and possibly adding high availability. Your proposed solutions are akin to saying that a Bugatti Veyron sold at Nissan GT-R prices is a great deal, when the company only has budget for a Ford Taurus.


"I'd be pissed too, but you didn't have to go all Minority Report on his ass!" -- Jon Stewart on police raiding Gizmodo editor Jason Chen's home













