More than a fifth of the survey respondents – 439 readers, or 22
percent of the survey group – felt that there were no major
drawbacks to virtualization technology, voicing their support for the
statement, “Virtualization just rocks. Enough said.”
Several readers were vocal in their disagreement with the
perception that virtualization can trigger performance issues.
“Having worked with virtualization, I can say that performance
should be ‘almost’ a non-issue with virtualization nowadays,”
reader solgae1784 wrote in a comment posted to DailyTech.
“I say ‘almost’ because there are a few applications that
(are) not suitable for virtualization,” solgae1784 added, noting
that in such cases “you will (not) see ‘near native’ performance
due to overhead - even if the overhead is supposedly very small.”
DailyTech reader Mjello warned that running SQL on a
virtualized server can be problematic. “If you have any sort of SQL
on (the server), I’d urge you to not trust VM. If the server
actually fails completely, hot migration (won’t) work,” Mjello
wrote. “VM failover only works fully as long as the machine has
something to migrate from. Otherwise it’s like pulling the plug on
a normal server and then booting it again. Not a wise thing to do
with a SQL under load.”
Lack of redundancy is a nonissue with virtualization, according to
reader PorreKaj. “Virtualization can be expanded over several
machines. For example, we have a little blade center with four blades
running about 12 servers,” PorreKaj wrote. “If one blade blows
up, VMware will just automatically assign the virtual servers to the
other blades instantly - without interrupting the user.”
While DailyTech reader solgae1784 was generally bullish on
virtualization, he did offer the advice that users should think
carefully before choosing which CPU to place inside their virtualized
machines. “Be warned that newer CPUs are much better for
virtualization than the older CPUs,” according to solgae1784.
“The bottom line is, if you're going for virtualization, you may
need to buy new servers. Make sure you do your homework to see if the
initial investment will pay off in the long run. Then again, your
servers may be due for a hardware refresh anyways.”
quote: Some current techniques involve peer host systems keeping a redundant copy of VMs in RAM, differenced from the SAN at intervals. So, if one host fails, another host would have a somewhat current copy already loaded and ready to run. Not as complicated as VMotion,
quote: the only way this would be helpful is if the clustering was being done among virtual machines, which would be a pretty big performance hit.
quote: No one with any experience building a large server would pick RAID 5.
quote: Lastly, no one's going to consolidate 20 servers down onto one server (unless it's super simple webservers like at a webhost). At most you'll plan for 5-6 VMs per server on average, less or more depending on what's happening on the VMs.
quote: Secondly, SAS drives are only meant for space-constrained high-performance scenarios and not SANs. You get more bang for your buck buying enterprise grade SATA than SAS.
quote: Thirdly, you don't build virtualized situations to run right off SANs, since this would create a bottleneck.
quote: As far as your 300 IOPS per server number, I think it seems a bit high. Typical planning IOPS numbers by server would make this hypothetical server quite large. For instance, file servers are typically planned at 0.75 IOPS per user, email servers at 0.30, database at 0.90. So, your example would be a 400 user file server, 1000 user email server or 333 user database server.
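The per-user planning rates in this comment translate directly into a capacity check. A minimal sketch of that arithmetic, using the IOPS-per-user figures and the 300 IOPS budget quoted in the thread (the rates are the commenter's planning numbers, not vendor specifications):

```python
# Back-of-the-envelope IOPS capacity planning, using the per-user
# rates from the comment above (planning assumptions, not vendor figures).
IOPS_PER_USER = {
    "file": 0.75,
    "email": 0.30,
    "database": 0.90,
}

def users_supported(iops_budget: float, workload: str) -> int:
    """Number of users a given IOPS budget supports for a workload type."""
    return int(iops_budget / IOPS_PER_USER[workload])

budget = 300  # the hypothetical server from the thread
for workload in IOPS_PER_USER:
    print(f"{workload}: {users_supported(budget, workload)} users")
```

Running this reproduces the commenter's figures: a 300 IOPS server works out to roughly a 400-user file server, a 1,000-user email server, or a 333-user database server.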
quote: Virtualization in all viable scenarios will lower costs. Whether it's the large company going from 700 servers to 80, or just a small company with 4-5 servers that wants to be able to move a server's OS from one machine to the other quickly and easily in case of failures, it will always save time and money when implemented well.
quote: VMware calls this FT, or fault tolerance, and it is much more complicated than VMotion. The CPU instructions are mirrored on each VM. If the primary VM fails the secondary picks up at the point the first left off. It's more responsive than traditional clustering, but also more wasteful as it "runs" on the secondary host.
quote: Says who? With quad cores and quad sockets you can get 16 cores in a server. That's enough to run 48 single CPU VMs, 24 dual CPU VMs, and four quad CPU VMs with no performance penalty. The typical dual socket quad core server runs 16-20 VMs on average, with a mixture of single and dual core VMs. One client runs 300 VMs on 16 blades in a single chassis, with clusters spread across four blade chassis.
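Read as alternative configurations, the commenter's numbers imply a consistent vCPU-to-physical-core overcommit ratio for the smaller VMs. A quick check of that arithmetic (the 3:1 ratio is derived from the quoted figures, not from any published VMware guidance):

```python
# vCPU-to-physical-core overcommit ratios for the configurations in the
# comment above. The ratios are inferred from the quoted VM counts.
cores = 4 * 4  # quad-socket, quad-core server = 16 physical cores

def overcommit_ratio(vm_count: int, vcpus_per_vm: int, physical_cores: int) -> float:
    """Total vCPUs assigned divided by physical cores available."""
    return (vm_count * vcpus_per_vm) / physical_cores

print(overcommit_ratio(48, 1, cores))  # 48 single-vCPU VMs -> 3.0
print(overcommit_ratio(24, 2, cores))  # 24 dual-vCPU VMs   -> 3.0
print(overcommit_ratio(4, 4, cores))   # 4 quad-vCPU VMs    -> 1.0
```

The single- and dual-vCPU cases both land at 3:1 overcommit, while the quad-vCPU case is 1:1, which is consistent with the claim that wider VMs leave less room for consolidation.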
quote: WTF are you talking about? SAS drives are standard in all servers, and have replaced SCSI drives in SANs, as well. FC is slowly losing market share to SAS. And there is no enterprise SATA drive that compares to SAS, not in terms of performance or reliability.
quote: WHAT?!? Ask any storage administrator and he or she will tell you a SAN is THE highest performing storage solution to date, and easier to manage than internal storage. Any bottleneck is the result of poor planning or tight wallets. A key point is that a SAN can support FC, iSCSI or NFS, and some do all simultaneously. In other words, you will always use a SAN, or suffer the consequences of poor performance and limited data mobility.
quote: That all depends on the company. Anyway, I was presenting an example. Lord 666 figured out that you need to calculate your IOPS before virtualizing.
quote: Maybe, maybe not. A business that starts with 5 physical servers that cost $3K each will need at least two virtual host platforms at $10K that can run at least 5 VMs. Then you need shared storage, so you go with a low end SAN. The client doesn't have a SAN, having used local storage all this time, so you pick out something that runs $15K (they go as low as $3K, but you're sacrificing storage and performance). We haven't even bought virtualization software yet, and we're already spending $10K more to build the infrastructure.
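The comment's cost figures can be tallied directly. One hedged reading that reproduces the quoted "$10K more" difference is to take "two virtual host platforms at $10K" as $10K for the pair; the sketch below makes that assumption explicit:

```python
# Rough hardware-cost comparison from the comment above. Treating the
# "two virtual host platforms at $10K" as $10K combined is an assumption
# made here so the totals match the quoted "$10K more" figure.
physical = 5 * 3_000       # five standalone servers at $3K each
virtual_hosts = 10_000     # two host servers (assumed $10K for the pair)
san = 15_000               # mid-range entry SAN from the comment
virtual = virtual_hosts + san

extra_spend = virtual - physical
print(extra_spend)         # extra hardware spend before any software licenses
```

Either way the point stands: the shared-storage requirement, not the host servers, dominates the up-front cost of moving a small shop to virtualization.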
quote: There are definitely other benefits to virtualization, but each case has to be evaluated. I know, I do this for a living.