Virtualization can create new problems in the development process. As VP of engineering for BlueStripe Software, Shane O'Donnell focuses on development life-cycle tools and processes for moving core business applications to virtual platforms. He spoke with Jonathan Erickson, Dr. Dobb's editor-in-chief.
Dr. Dobb's: Will traditional life-cycle management methods do the job when virtualization is involved?
O'Donnell: Traditional software life-cycle management methods are absolutely still applicable, but as with any good process, you may need to switch out the tools you use as the environment changes. Virtualization offers an easy way to reinvent the way "handoffs" work within the life cycle.
We've all seen the handoffs from development to QA, QA to staging, and staging to production, and they've been implemented using some pretty convoluted mechanisms. Virtualization allows those handoffs to use a single unit of currency -- a virtual machine (VM) -- and avoids the hassles of source control and of differing hardware, configurations, and build environments.
So the process is right, but the tooling changes. If we approach virtualization with open minds, we all benefit.
Fundamentally, virtualization technologies such as VMware and Xen change two basic assumptions around application deployment:
- Applications don't scale as cleanly and linearly as they used to (assuming they ever did).
- Input/Output (I/O) paths change and charge a new, higher CPU tax for "off-box" I/O -- in this case, I/O that leaves the VM.
Where we tend to see most folks falling down is in rushing into virtualization of complex transactional applications. Usually this happens when the virtualization roll-out is chartered to a team that's network- and system-savvy, but blissfully unaware of the application's complexity, its dependencies, and the nature of its remote procedure call (RPC) mechanisms.
Virtualization causes network I/O (and often that includes storage I/O now, too) to be handled multiple times by the same CPU complex. This introduces new CPU overhead that's directly associated with I/O functions. Throw the architecture of a complex transactional application into that environment and you can begin to see why things don't scale as cleanly as one might expect.
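That extra per-I/O CPU cost can be made concrete with a toy capacity model -- an illustration of the argument above, not vendor benchmark data. The function, its parameters, and the "tax" values are all assumptions chosen for the example:

```python
# Toy model: each request costs CPU for application work (app_cost), plus --
# only when virtualized -- a per-I/O hypervisor "tax" for handling each
# off-box operation on the same CPU complex. All numbers are illustrative.
def requests_per_second(cpu_capacity, app_cost, io_ops, io_tax):
    """Requests/sec a host can sustain given a per-request CPU cost."""
    return cpu_capacity / (app_cost + io_ops * io_tax)

# Compute-heavy workload: the I/O tax barely dents throughput.
physical_compute = requests_per_second(100, app_cost=10.0, io_ops=1, io_tax=0.0)
virtual_compute  = requests_per_second(100, app_cost=10.0, io_ops=1, io_tax=0.5)

# I/O-heavy workload: the same tax cuts capacity to a fifth.
physical_io = requests_per_second(100, app_cost=1.0, io_ops=8, io_tax=0.0)
virtual_io  = requests_per_second(100, app_cost=1.0, io_ops=8, io_tax=0.5)
```

Under these numbers the compute-heavy workload keeps about 95 percent of its physical throughput after virtualization, while the I/O-heavy one keeps only 20 percent -- which is why the same P2V conversion can succeed on one server and fail on its neighbor.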
The upside is that there are affordable tools available to better understand the complexity of transactional applications. Unfortunately, a lot of the folks deploying virtualization don't know to look at this before they start virtualizing every system they can get their hands on.
Dr. Dobb's: Server virtualization sometimes enables higher workloads -- and sometimes doesn't. When can server virtualization come up short?
O'Donnell: Almost every time I've seen virtualization deployments fail, it's because the applications (or the lower-level services offered by the OS) are doing much more I/O than the average server -- and no one had done their homework to know that in advance of a P2V conversion. Hypervisor-based virtualization has always suffered from the overhead associated with being "in the way" when it comes to I/O.
So if you have an application that does a lot of I/O, and that I/O has a greater impact on CPU utilization in a virtual environment than it did in a physical one, make sure you've done your homework and understand that application inside and out before you virtualize it.
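Part of that homework is simply measuring how I/O-heavy the workload is on the physical host before conversion. A minimal, Linux-only sketch of the idea -- sampling aggregate network byte counters from `/proc/net/dev` over a short window -- might look like this (a production assessment would sample per interface, over hours, and include storage I/O too):

```python
import time

def net_bytes():
    """Sum RX+TX bytes across all interfaces from /proc/net/dev (Linux only).
    Assumes the standard format: two header lines, then 'iface: rx_bytes ...'."""
    total = 0
    with open("/proc/net/dev") as f:
        for line in f.readlines()[2:]:            # skip the two header lines
            fields = line.split(":")[1].split()
            total += int(fields[0]) + int(fields[8])  # rx_bytes + tx_bytes
    return total

def sample_io_rate(seconds=1.0):
    """Rough network bytes/sec over a short window; counters only ever grow,
    so the delta is never negative."""
    before = net_bytes()
    time.sleep(seconds)
    return (net_bytes() - before) / seconds
```

Running `sample_io_rate()` periodically on a conversion candidate gives a first-order picture of how much hypervisor I/O tax that workload would pay as a VM.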
Dr. Dobb's: Which applications may encounter problems?
O'Donnell: Usually, we see problems in multi-tier transactional applications. The traffic between those tiers almost always surprises people. We also see problems where the mid-tier receives a single request and turns around and makes hundreds of database requests out the back end. In some cases, this is expected behavior. In others, where the original query is relatively basic, it's shocking.
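The mid-tier fan-out described above is easy to surface with a little instrumentation. This is a hypothetical sketch -- `FanOutCounter`, `query_db`, and `handle_request` are illustrative names, not a real API -- showing how wrapping the database call reveals that one "basic" front-end request quietly issues hundreds of back-end queries:

```python
# Hypothetical instrumentation: wrap the mid-tier's database call so each
# front-end request reports its back-end fan-out.
class FanOutCounter:
    def __init__(self):
        self.calls = 0

    def query_db(self, sql):
        self.calls += 1        # count every back-end round trip
        return []              # stand-in for a real database result

def handle_request(db):
    """A 'basic' request that loops over items, issuing one query apiece
    (the classic N+1 pattern behind surprising inter-tier traffic)."""
    items = range(200)
    return [db.query_db(f"SELECT * FROM detail WHERE id={i}") for i in items]

db = FanOutCounter()
handle_request(db)
print(db.calls)  # one front-end request became 200 back-end queries
```

Each of those 200 queries is a network round trip that the hypervisor must also handle, which is exactly where this class of application pays the I/O tax hardest.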
Unfortunately, we've seen it happen often enough that we -- and all the virtualization vendors as well -- advocate a best practice of a complete assessment of your environment before even considering a deployment.