


Poll data comes in on the idea of virtualization

In a recent poll of nearly 2,000 DailyTech readers, worries about performance seemed to be the biggest potential objection to broader adoption of virtualization technology. Of the 1,998 respondents, 37 percent said they agreed with the statement that “virtualizing slows everything down.” Smaller numbers cited lack of redundancy (20 percent), complexity (12 percent), and cost (9 percent).

More than a fifth of the survey respondents – 439 readers, or 22 percent of the survey group – felt that there were no major drawbacks to virtualization technology, voicing their support for the statement, “Virtualization just rocks. Enough said.”

Several readers were vocal in their disagreement with the perception that virtualization can trigger performance issues. “Having worked with virtualization, I can say that performance should be ‘almost’ a non-issue with virtualization nowadays,” reader solgae1784 wrote in a comment posted to the DailyTech poll webpage.

“I say ‘almost’ because there are a few applications that (are) not suitable for virtualization,” solgae1784 added, noting that in such cases “you will see ‘near native’ performance due to overhead - even if the overhead is supposedly very small.”

DailyTech reader Mjello warned that running SQL on a virtualized server can be problematic. “If you have any sort of SQL on (the server), I’d urge you to not trust VM. If the server actually fails completely, hot migration (won’t) work,” Mjello wrote. “VM failover only works fully as long as the machine has something to migrate from. Otherwise it’s like pulling the plug on a normal server and then booting it again. Not a wise thing to do with a SQL under load.”

Lack of redundancy is a nonissue with virtualization, according to reader PorreKaj. “Virtualization can be expanded over several machines. For example, we have a little blade center with four blades running about 12 servers,” PorreKaj wrote. “If one blade blows up, VMware will just automatically assign the virtual servers to the other blades instantly - without interrupting the user.”

While DailyTech reader solgae1784 was generally bullish on virtualization, he did advise users to think carefully about which CPUs power their virtualization hosts. “Be warned that newer CPUs are much better for virtualization than the older CPUs,” according to solgae1784. “The bottom line is, if you're going for virtualization, you may need to buy new servers. Make sure you do your homework to see if the initial investment will pay off in the long run. Then again, your servers may be due for a hardware refresh anyways.”



Comments



RE: Ah, virtualization...
By MatthiasF on 8/26/2009 10:13:25 PM, Rating: 2
quote:
VMware calls this FT, or fault tolerance, and it is much more complicated than VMotion. The CPU instructions are mirrored on each VM. If the primary VM fails the secondary picks up at the point the first left off. It's more responsive than traditional clustering, but also more wasteful as it "runs" on the secondary host.


No, I'm speaking more to Xen implementations with centralized management of the hosts. The Linux host is assigned several offline VMs for redundancy that it keeps loaded in some capacity in RAM and updated over NFS, so if one had to go online it could pick up the tasks that VM had within a minute or two. While not as quick a failover as fault tolerance, it's inexpensive and less of a performance hindrance (with the availability of cheap RAM).
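For what it's worth, that warm-standby arrangement can be scripted against Xen's toolstack. Below is a minimal sketch, not anyone's production setup: it assumes the xl toolstack (older deployments used xm, with slightly different syntax), hypothetical domain names and config paths, and shared storage such as NFS for the disk images. "xl create -p" builds the domain but leaves it paused, which is what keeps it warm in RAM.

import subprocess

# Hypothetical standby domains kept pre-built but paused on this host.
STANDBY_DOMAINS = {
    "web01-standby": "/etc/xen/web01-standby.cfg",
    "db01-standby": "/etc/xen/db01-standby.cfg",
}

def preload(cfg):
    # Build the domain but leave it paused ("-p"), so its memory is
    # allocated and the guest stays warm without consuming CPU time.
    subprocess.run(["xl", "create", "-p", cfg], check=True)

def activate(name):
    # Called when monitoring decides the primary copy is gone: unpause
    # the standby so it can pick up the workload. Its disk image sits
    # on shared storage (e.g. NFS), so it is reasonably current.
    subprocess.run(["xl", "unpause", name], check=True)

if __name__ == "__main__":
    for cfg in STANDBY_DOMAINS.values():
        preload(cfg)
    # A monitoring loop (not shown) would call activate(name) once the
    # primary host for that workload stops responding.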

quote:
Says who? With quad cores and quad sockets you can get 16 cores in a server. That's enough to run 48 single CPU VMs, 24 dual CPU VMs, and four quad CPU VMs with no performance penalty. The typical dual socket quad core server runs 16-20 VMs on average, with a mixture of single and dual core VMs. One client runs 300 VMs on 16 blades in a single chassis, with clusters spread across four blade chassis.


I highly doubt that buying such overpowered machines would be cost-effective in most virtualization environments, or that it would take full advantage of the redundancy virtualization offers. Not every situation where virtualization would be helpful involves the kind of performance or redundancy requirements I'm sure you had to meet in that big client's situation.
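To put the quoted figures in perspective, the number that matters is the vCPU-to-physical-core overcommit ratio. The quick arithmetic below uses only the numbers from the quote (a quad-socket, quad-core host and the three VM mixes); whether a given ratio really carries "no performance penalty" still depends on the workload.

PHYSICAL_CORES = 4 * 4  # quad-socket, quad-core host from the quote above

# VM mixes from the quote: (number of VMs, vCPUs per VM)
mixes = {
    "48 single-vCPU VMs": (48, 1),
    "24 dual-vCPU VMs": (24, 2),
    "4 quad-vCPU VMs": (4, 4),
}

for label, (count, vcpus_per_vm) in mixes.items():
    total_vcpus = count * vcpus_per_vm
    ratio = total_vcpus / PHYSICAL_CORES
    print(f"{label}: {total_vcpus} vCPUs on {PHYSICAL_CORES} cores "
          f"({ratio:.1f}:1 overcommit)")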

quote:
WTF are you talking about? SAS drives are standard in all servers, and have replaced SCSI drives in SANs, as well. FC is slowly losing market share to SAS. And there is no enterprise SATA drive that compares to SAS, not in terms of performance or reliability.


On performance, no, because they're still using the old "SCSI is for servers" mantra to sell you overpriced hard drives instead of selling high-speed enterprise SATA. On reliability, yes: MTBF figures are nearly the same for both.

For instance, Seagate's Savvio SAS line has an MTBF of 1.6 million hours, versus 1.2 million hours for Western Digital's RE3 enterprise SATA line.

So why spend three times as much for a hard drive that's only a third faster, with a third higher MTBF, but a third smaller? Exactly as I said: for space-constrained, high-performance scenarios. Maybe most of your installations needed it, but it's not necessary for the majority of deployments.
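As a sanity check on those figures, the ratios work out as follows. The MTBF values are the ones quoted above; the price, speed, and capacity inputs are hypothetical placeholders standing in for the "three times as much", "a third faster", and "a third smaller" characterization, not actual list prices.

# MTBF values quoted above; everything else is an illustrative placeholder.
sas = {"price": 300, "mtbf_hours": 1_600_000, "rel_speed": 1.33, "capacity_gb": 300}
sata = {"price": 100, "mtbf_hours": 1_200_000, "rel_speed": 1.00, "capacity_gb": 450}

print(f"Price ratio (SAS/SATA):    {sas['price'] / sata['price']:.1f}x")       # 3x the cost
print(f"MTBF ratio (SAS/SATA):     {sas['mtbf_hours'] / sata['mtbf_hours']:.2f}x")  # ~1.33, a third higher
print(f"Speed ratio (SAS/SATA):    {sas['rel_speed'] / sata['rel_speed']:.2f}x")    # ~1.33, a third faster
print(f"Capacity ratio (SAS/SATA): {sas['capacity_gb'] / sata['capacity_gb']:.2f}x")  # ~0.67, a third smaller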

quote:
WHAT?!? Ask any storage administrator and he or she will tell you a SAN is THE highest performing storage solution to date, and easier to manage than internal storage. Any bottleneck is the result of poor planning or tight wallets. A key point is that a SAN can support FC, iSCSI or NFS, and some do all simultaneously. In other words, you will always use a SAN, or suffer the consequences of poor performance and limited data mobility.


While today's SAN equipment might be fast, it just isn't necessary anymore for the majority of today's virtualization designs. In fact, several major second-generation virtualization initiatives (at big webhosts, CDNs, and Fortune 500 companies) have moved to more independent systems that coordinate without the need for tight centralization. This increases system redundancy while bringing costs down greatly, both at installation and during expansion, and it improves performance as well.

No longer do they need to spend big bucks on a Fibre Channel network running in parallel with Ethernet; instead they're moving to 10GbE and bringing both networks together.

Most large cloud networks use this method now, including Google.

quote:
That all depends on the company. Anyway, I was presenting an example. Lord 666 figured out that you need to calculate your IOPS before virtualizing.


It's always best to do your homework. It's probably a good idea to record for at least a month, since some business processes don't run daily, and take an average. Some IT departments already record long-term data like IOPS.
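That kind of homework might look something like the sketch below: collect IOPS samples over a month or so, then look at both the average and the peak, since sizing only for the average is exactly how the non-daily jobs get missed. The sample values and collection method here are hypothetical; in practice the counters would come from perfmon, iostat, or the storage array itself.

import statistics

# Hypothetical readings: one IOPS sample per hour over roughly a month.
# Real numbers would be pulled from perfmon, iostat, or the array's tools.
iops_samples = [420, 510, 480, 1750, 930, 300, 2600, 740, 660, 415]

average_iops = statistics.mean(iops_samples)
peak_iops = max(iops_samples)

# Size the hosts and storage for the peak, not the average: month-end
# batch jobs, backups, and other non-daily work are what a short or
# averaged-only sample hides.
print(f"Average IOPS: {average_iops:.0f}")
print(f"Peak IOPS:    {peak_iops}")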

quote:
Maybe, maybe not. A business that starts with 5 physical servers that cost $3K each will need at least two virtual host platforms at $10K that can run at least 5 VMs. Then you need shared storage, so you go with a low end SAN. The client doesn't have a SAN, having used local storage all this time, so you pick out something that runs $15K (they go as low as $3K, but you're sacrificing storage and performance.) We haven't even bought virtualization software yet, and we're already spending $10K more to build the infrastructure.


I doubt someone running only 5 VMs would bother with a SAN or even with buying commercial virtualization software. This little niche is left to cheap solutions (like Microsoft's Hyper-V) or open source (like Xen), and regular servers if necessary.

Microsoft's Hyper-V Server is only about $30, with the guest license built into the Server 2008 licensing (excluding the Web Server and Itanium editions).

Xen is free, although you'll most likely need some knowledge of Linux if you don't buy a pre-packaged solution. I've seen some free ISOs online with Xen set up in several configurations. Still, it's probably best to have Linux experience (just like Windows experience is a plus for Microsoft's solution).

quote:
There are definitely other benefits to virtualization, but each case has to be evaluated. I know, I do this for a living.


Yeah, there are a lot of us, although I've found a lot of early-adopter installations that were very poorly designed. The field is still young, so it's good to argue over semantics to help refine things.

