Tonight I was reading about virtualisation and wondering whether it’s ready for production use. The answer is yes, maybe, and no.
Like everything else in the data center, the risk versus the value has to be considered. It’s not just virtualisation itself, it’s all of the supporting infrastructure too. Virtual servers, hardware configurations, power: it’s all part of the answer.
So what are the failure points in a virtual server or virtual workstation? Judging by recent events such as the Rackspace cloud outage, the biggest vulnerability is power. Data centers that run their power operations in a risky manner (manual testing, unprotected power) create a large threat to virtual systems.
The next failure point is the hardware. The hardware that underlies virtual systems is substantially similar to that used for single-purpose servers and subject to the same frailties. Mechanical hard drives and fans are the most common failure points.
From the software standpoint, guest operating systems should be as reliable running on a hypervisor as they would be on real hardware, as should the host operating system. I would also expect any of the major hypervisors to be as reliable as any mature software.
So, a lot of the things that ensure the reliability of virtual systems are the same as for traditional servers, just more so, since there is more at risk on a server handling many virtual machines. Power stability in both design and operation, hardware quality, redundancy, durability, and a good choice of stable software all add up to a good, reliable virtual server.
Of course, good design choices matter too, such as splitting redundant virtual servers between multiple physical servers, multiple power sources, and multiple network connections.
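The idea of splitting redundant virtual servers across physical hosts is sometimes called anti-affinity placement. As a rough illustration (this is a hypothetical sketch, not the API of any real hypervisor or cloud platform), a placement routine just has to guarantee that no two replicas of the same service land on the same physical host:

```python
# Hypothetical sketch of anti-affinity placement: spread each service's
# redundant VM replicas across distinct physical hosts, round-robin.
# Host and service names here are made up for illustration.

def place_replicas(hosts, services, replicas_per_service=2):
    """Assign each service's replicas to distinct hosts.

    Raises ValueError if there are fewer hosts than replicas, since
    anti-affinity is then impossible to satisfy.
    """
    if replicas_per_service > len(hosts):
        raise ValueError("need at least as many hosts as replicas")
    placement = {}
    next_host = 0
    for service in services:
        chosen = []
        for _ in range(replicas_per_service):
            # Consecutive round-robin indices modulo len(hosts) are
            # distinct as long as replicas_per_service <= len(hosts).
            chosen.append(hosts[next_host % len(hosts)])
            next_host += 1
        placement[service] = chosen
    return placement


placement = place_replicas(["rack1-host", "rack2-host", "rack3-host"],
                           ["web", "db"])
# Each service ends up on two different physical hosts, so losing any
# one host (or its power feed) leaves a surviving replica.
```

Real hypervisor managers express the same constraint declaratively (for example, as anti-affinity rules on VM groups), but the underlying guarantee is the one this sketch enforces.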
Make the right choices and you can feel comfortable virtualising even the most critical application.
Vern, SwiftWater Telecom