Virtualization is all the rage these days. VMware’s IPO last summer netted the company nearly $1 billion—the biggest IPO since Google.
Fast forward a mere five and a half months, and VMware’s stock plunged, leading to such headlines as “VMware Smashed,” “The Party’s over at VMware,” and “VMware – A Wall Street Chainsaw Massacre.”
What changed between August 2007 and January 2008? Not much, truth be told. The market in general was down, and VMware did miss its projected Q4 revenue mark. Yet revenues were still up, way up, over Q4 2006. So what was all the fuss about? I don't claim to be a stock analyst, but I believe one of the variables that hurt VMware, and virtualization in general, is that the technology is being over-hyped. It will help usher in green IT. It enables disaster recovery and business continuity. It hardens security. It reduces operating costs.
All true, but as any IT pro knows from hard-earned experience, the adoption of new technologies always comes with growing pains. Hyping virtualization as a silver-bullet, plug-and-play technology is false advertising. Successful virtualization projects are vastly more complicated than vendors admit, and the risks associated with a poorly implemented effort are serious.
Risk and New Technology
What, then, are the risks? “Security is an issue,” said senior analyst Gary Chen.
Most analysts agree, and believe that security, while not something to ignore, won’t be a huge issue. The real issues are performance and management. “With virtualization, performance takes a hit,” Chen said. “This will improve over time. Hardware is adapting. Operating systems are becoming virtualization-aware, but issues like I/O and application compatibility are real problems.”
A corresponding problem is that many of these performance issues are hard to pinpoint. From an end user’s perspective, why is the application underperforming? It’s a mystery. End users just know performance isn’t what it used to be. Of course, end users aren’t expected to figure these things out. They have IT for that.
But what if IT can’t figure it out either? Today’s virtualization monitoring solutions are blunt tools that can miss key performance variables. Incompatible applications may reside side by side on the same server. Applications may have synchronized traffic peaks that coarse-grained sampling misses, resulting in brief periods of micro-saturation. Yet diagnostic tools will show nothing.
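To see why coarse sampling can hide these spikes, consider a small, purely illustrative sketch (the workloads, sampling interval and host capacity below are made-up numbers, not output from any vendor’s tool): two VMs whose load peaks line up can saturate the host for a few seconds at a time while the one-minute average still looks comfortably under capacity.

```python
# Illustrative only: why a coarse sampling interval hides "micro-saturation"
# when two co-resident VMs' load peaks are synchronized. All figures are hypothetical.

HOST_CAPACITY = 100  # arbitrary CPU units available on the host

# Per-second CPU demand for two VMs over one minute.
# Each idles near 20 units but spikes to 70 at the same instants.
vm_a = [70 if s % 10 == 0 else 20 for s in range(60)]
vm_b = [70 if s % 10 == 0 else 20 for s in range(60)]

combined = [a + b for a, b in zip(vm_a, vm_b)]

# A "blunt" tool that reports one average per minute sees a healthy host...
minute_average = sum(combined) / len(combined)

# ...while per-second data shows the host briefly pinned past capacity.
saturated_seconds = sum(1 for load in combined if load >= HOST_CAPACITY)

print(f"1-minute average load : {minute_average:.0f} of {HOST_CAPACITY}")
print(f"Seconds at saturation : {saturated_seconds} of 60")
# Prints an average of 50 of 100, yet 6 seconds are fully saturated.
```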
Poor Performance
Without better performance monitoring, we’ll all be nostalgic for the traditional approach of over-provisioning and dedicating single servers to single applications. Moreover, if virtual environments aren’t properly planned, a single server crash could take down multiple business-critical applications at once.
“You have to plan on an application-by-application basis,” said Richard Jones, VP and service director, Data Center Strategies, for the Burton Group. “Some applications aren’t ready, such as Oracle databases.”
In fact, any I/O-intensive application tends to be problematic.
The key word in the performance/reliability discussion, then, is “planning.” “Don’t just plan from the perspective of the OS or hardware, as we did in the past. Plan from the perspective of the application or service you intend to deliver,” Jones added.
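One way to put that advice into practice is to model each application’s peak resource profile and test whether a proposed grouping fits on a host, rather than starting from the hardware and backfilling. The sketch below is only an illustration; the application names, resource figures and host limits are assumptions, and summing peak demands is the deliberately conservative case in which loads are synchronized.

```python
# Illustrative only: consolidation planning from the application's perspective.
# Application names, resource profiles and host limits are hypothetical.

from dataclasses import dataclass

@dataclass
class AppProfile:
    name: str
    peak_cpu: int   # peak CPU units observed or projected
    peak_iops: int  # peak storage I/O operations per second

HOST_CPU = 100
HOST_IOPS = 5000

candidates = [
    AppProfile("web-frontend", peak_cpu=25, peak_iops=300),
    AppProfile("mail-relay",   peak_cpu=15, peak_iops=800),
    AppProfile("oltp-db",      peak_cpu=45, peak_iops=4500),  # I/O-heavy
]

def fits_together(apps):
    """Check whether combined peak demand fits on one host.

    Summing peaks is deliberately conservative: it assumes the worst case,
    in which every application's peak arrives at the same moment.
    """
    total_cpu = sum(a.peak_cpu for a in apps)
    total_iops = sum(a.peak_iops for a in apps)
    return total_cpu <= HOST_CPU and total_iops <= HOST_IOPS

print(fits_together(candidates[:2]))  # True: web + mail fit comfortably
print(fits_together(candidates))      # False: the I/O-heavy database exceeds host IOPS
```

In practice the profiles would come from monitoring data gathered before consolidation, and I/O-heavy workloads like the database in this example are exactly the ones that tend to break plans based on CPU headroom alone.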