So Much for Simplicity

Virtualization is exploding in popularity. Virtual machine (VM) deployments are expected to grow from 540,000 in 2006 to more than 4 million in 2009, according to research house IDC. While the benefits are widely advertised, the complexities have not been as thoroughly discussed. As a result, a prevailing attitude of complacency appears to have taken hold in the VM arena: because you can potentially do so much, and the gains are often so spectacular, server administrators may not be taking the same care with performance management as they once did.

“There is a perception that VMware VirtualCenter and basic resource throttling is enough, but this doesn’t give the full picture,” said Andi Mann, an analyst at Enterprise Management Associates. “The bundled tools for managing VMs are not enough to guarantee SLAs (service level agreements) based on business performance objectives.”

Virtualization, after all, adds another layer to what is already a complex environment. You start with an OS, applications, Web servers, middleware, databases, interfaces, etc., and you add to that a hypervisor layer that is largely deficient in fundamental management capabilities such as performance and capacity management. Since it takes only seconds to add a new VM, unchecked deployments lead to VM sprawl: an uncontrolled proliferation of virtual machines. The result is that you have multiplied the volume of systems you need to manage and increased the depth of management required, yet you have insufficient tools to do so.

According to Mann, the management tools in VirtualCenter and other virtualization platforms do not provide a broad view of performance across multiple hosts and subnets. Nor do they help administrators understand physical performance issues. They are not sufficiently aware of applications, let alone the interactions of multiple components in a composite application (a separate application server, database server, and Web server, for example). VM tools also tend to miss the boat on business services and priorities.

“If you do not properly manage performance, you can end up with a single VM overusing or saturating resources in a host,” said Mann. “An overactive application can saturate the channels to the database, using 95% of the network interface, which slows down I/O for all other VMs on the same host.”

But that’s just one scenario. A highly processor-intensive application can saturate the server, using 95% of the CPU and leaving only 5% for the other VMs on the same host. Interestingly, the return of under-utilized servers, whose elimination is one of virtualization’s most touted benefits, may actually be a consequence of this lack of effective VM management tools.
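To make the contention arithmetic concrete, here is a minimal Python sketch that flags any VM consuming more than a set share of a shared host resource. The utilization figures and the 90% threshold are hypothetical illustrations; real numbers would come from hypervisor monitoring interfaces, not this hard-coded table.

```python
# A minimal "noisy neighbor" sketch for a single host.
# The sample data and 90% threshold are illustrative assumptions,
# not output from any real hypervisor API.

SATURATION_THRESHOLD = 0.90  # flag any VM using >= 90% of a shared resource

# Hypothetical samples: fraction of the host's resource each VM
# consumed during the last measurement interval.
host_samples = {
    "vm-app01": {"cpu": 0.95, "net": 0.10},
    "vm-db01":  {"cpu": 0.02, "net": 0.95},
    "vm-web01": {"cpu": 0.01, "net": 0.03},
}

def noisy_neighbors(samples, threshold=SATURATION_THRESHOLD):
    """Return (vm, resource, usage) for every VM saturating a shared resource."""
    return [
        (vm, resource, usage)
        for vm, resources in samples.items()
        for resource, usage in resources.items()
        if usage >= threshold
    ]

for vm, resource, usage in noisy_neighbors(host_samples):
    print(f"{vm} is using {usage:.0%} of host {resource}; "
          f"all other VMs share the remaining {1 - usage:.0%}")
```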

Under-provisioning, said Mann, tends to happen first: administrators attempt to squeeze as many workloads as possible onto a single system. “Without accurate performance and capacity tools, under-provisioning is usually the first mistake as administrators and IT managers typically put more VMs on a server than it has resources to deal with,” said Mann. “That leads to over-provisioning as they react by making sure they have spare headroom even for exception cases.”
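A rough sketch of the dynamic Mann describes, using invented workload numbers: packing a host by average demand looks safe until peaks coincide (under-provisioning), and reacting by sizing every host for the coincident worst case swings to over-provisioning.

```python
# Why sizing by averages under-provisions and sizing for worst case
# over-provisions. All figures are hypothetical, not measured data.

host_cpu_ghz = 16.0

# (average demand, peak demand) in GHz for each candidate workload.
workloads = {
    "vm-erp":   (1.0, 6.0),
    "vm-mail":  (0.8, 4.0),
    "vm-bi":    (1.2, 8.0),
    "vm-web":   (0.5, 3.0),
    "vm-build": (0.7, 7.0),
}

avg_total = sum(avg for avg, _ in workloads.values())
peak_total = sum(peak for _, peak in workloads.values())

# Packing by averages looks comfortable...
print(f"average demand: {avg_total:.1f} of {host_cpu_ghz} GHz "
      f"({avg_total / host_cpu_ghz:.0%})")

# ...but the host saturates if peaks coincide (under-provisioning)...
print(f"coincident peak demand: {peak_total:.1f} GHz "
      f"({peak_total / host_cpu_ghz:.0%} of capacity)")

# ...and sizing so every peak always fits over-provisions instead.
hosts_for_worst_case = -(-peak_total // host_cpu_ghz)  # ceiling division
print(f"hosts needed if every peak must fit: {hosts_for_worst_case:.0f}")
```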

Virtualization vs. Capacity Planning

The time-honored practice of capacity planning, then, is essential in any virtualized environment. Unfortunately, many incorrectly assume that as virtualization’s popularity increases, capacity management’s value steadily diminishes. The opposite, however, turns out to be the case.

“Despite propaganda to the contrary, capacity planning is more important than it has ever been,” said Jerred Ruble, CEO of TeamQuest Corp. “Technologies such as VMware Distributed Resource Scheduler, utility computing, IBM Workload Manager or grid computing will never eliminate the need for solid capacity planning.”

Such tools provide intelligent dynamic resource allocation, continuously balanced computing capacity, real-time server utilization optimization and automated dynamic reconfiguration. They certainly help manage existing environments, add much needed automation and ensure workloads have appropriate resources. They can also be useful in supplying capacity quickly and easily to meet varying usage requirements. But they don’t tell you what you need, don’t relate well to business goals and don’t help you look into the future.
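Looking into the future is, at its simplest, trend projection. As an illustration of the kind of question capacity planning answers and dynamic-allocation tools do not, the sketch below fits a linear trend to invented monthly utilization figures and estimates when a host crosses a planning threshold; a real capacity plan would use measured data and more robust models.

```python
# A minimal capacity-forecasting sketch: fit a linear trend to
# historic utilization and estimate when it crosses a threshold.
# The monthly utilization figures are invented for illustration.

history = [0.42, 0.45, 0.49, 0.52, 0.56, 0.60]  # monthly avg CPU utilization
threshold = 0.80  # plan a capacity action before utilization hits 80%

n = len(history)
xs = range(n)
mean_x = sum(xs) / n
mean_y = sum(history) / n

# Ordinary least-squares slope and intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

months_to_threshold = (threshold - intercept) / slope
print(f"growth: {slope:.1%} of capacity per month")
print(f"utilization reaches {threshold:.0%} around month {months_to_threshold:.1f}")
```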