Is Virtualization Real?

Virtualization abstracts out things — like hardware, operating systems, and applications. So what's the big deal?


November 02, 2006

THE LONG-DELAYED VISTA RELEASE will be the last monolithic operating system release we'll ever see from Microsoft, according to Gartner Group analysts. To keep Steve Ballmer's promise that Microsoft "will never have a five-year gap between major releases," as it did with Vista, something will have to change, and these analysts think they know what that change will be: They predict that in future Microsoft operating system releases, functions of the operating system will be split into chunks that run simultaneously in virtual machine partitions. In particular, a legacy kernel for running old applications will be split off from, and run in parallel with, the newer functionality—through virtualization.

Meanwhile in another part of the computing universe, thanks to Parallels Desktop (www.parallels.com), Vista itself is already running with impressive performance on Intel-based Macintosh computers. Remarkably, Apple even seems to be promoting Parallels over its own Boot Camp technology (www.apple.com/macosx/bootcamp/) as a solution for running your favorite flavor of Windows on a Mac. Several other vendors, possibly including Microsoft despite its claims to the contrary, are making this Windows-on-a-Macintosh market competitive and lively, because this technology removes one of the barriers to buying a Mac—through virtualization.

In the first six months of 2006, Virtualization.info, a relentless tracker of all things virtual, counted no fewer than 65 significant virtualization-related software releases. These range from SWsoft's award-winning operating system-level virtualization product, Virtuozzo 3.0 for Linux (www.swsoft.com), announced in early February, to Parallels' virtual hard-disk management tool, Compressor 1.0, announced in late June.

These are just a few data points demonstrating that virtualization is currently a hot technology topic. Not only is virtualization itself hot, but it impinges on other hot topics: server farms, Quality of Service (QoS), high-performance computing (HPC), and software portability, to name a few. Virtualization touches such a broad range of other technologies partly because the word encompasses a broad range of ideas: It is an umbrella term.

Virtualization's Railroading Moment

Here's one definition of virtualization:

The process of presenting a logical grouping or subset of computing resources so that they can be accessed in ways that give benefits over the original configuration.

wikipedia.org/wiki/Virtualization

Here's another:

A framework or methodology of dividing the resources of a computer into multiple execution environments, by applying one or more concepts or technologies such as hardware and software partitioning, time-sharing, partial or complete machine simulation, emulation, quality of service, and many others.

www.kernelthread.com/publications/virtualization/

By yet another, more succinct definition, "virtualization abstracts out things." Or maybe that's a little too succinct.

But virtualization does abstract out things, and this abstraction can occur at different levels. Hardware virtualization exhibits different degrees of granularity and isolation compared to operating system-level virtualization, and is appropriate for different uses. Application virtualization is yet a third level of abstraction, with different degrees of granularity and isolation, and different uses.

Those uses include abstracting away the arbitrary boundary between one server or hardware resource and another to make near-optimal use of such resources; isolating untrusted applications; containing errors and faults; saving context for disaster recovery; debugging operating systems; running legacy applications; increasing application portability/mobility; simulating specific hardware configurations, such as a network of computers; running multiple operating systems simultaneously, for example to test web applications; and various uses in training and testing and research.

It's worth pointing out that virtualization is hardly a new concept in computing. IBM has had a huge role in the 40-plus-year history of virtualization, from the IBM S/360 Model 67 mainframe to its ongoing work today. IBM's early virtual machines were clones of the underlying hardware running on a component called the "Virtual Machine Monitor" (VMM), which ran directly on the hardware. And virtualization has been available in various forms ever since. Windows NT had its Windows on Windows (WOW) virtual machine for 16-bit Windows applications, alongside other "execution environments." And the Java Virtual Machine implements another kind of virtuality, as did the UCSD P-System in the '70s and '80s.

But fast microprocessors and new computing needs such as server farms may be bringing virtualization to its railroading moment (as in Robert Heinlein saying "when it's time to railroad, people start railroading").

Virtual Servers and Systems

Microsoft, IBM, Sun, and Hewlett-Packard are all investing time and money in virtualization technology, especially server- or operating system-level virtualization. Various approaches can be employed in server and operating system-level virtualization:

Emulation is an ambitious approach, in that the virtual machine (VM) simulates the full hardware of the "guest" machine and thus lets you run unmodified copies of an operating system and its applications built for a different CPU. Examples include Microsoft Virtual PC for Mac, which emulates an x86 machine on PowerPC hardware, and PearPC, which emulates in the other direction. But emulation is typically very slow; 10 percent of native speed would be considered good. Some people do not consider emulation to be virtualization at all, but we're taking a catholic view here.
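To make that concrete, here is a minimal sketch of the fetch-decode-execute loop at the heart of any emulator. The guest machine, its opcodes, and the sample program are all invented for illustration; the point is that every guest instruction costs dozens of host instructions, which is why pure emulation rarely approaches native speed.

/* A toy fetch-decode-execute loop, the heart of any emulator.
   The "guest" machine here is invented for illustration: a few
   opcodes, four registers, and a small memory. */
#include <stdio.h>
#include <stdint.h>

enum { OP_HALT, OP_LOADI, OP_ADD, OP_PRINT };

int main(void)
{
    uint8_t mem[] = {            /* a tiny guest program */
        OP_LOADI, 0, 2,          /* r0 = 2   */
        OP_LOADI, 1, 3,          /* r1 = 3   */
        OP_ADD,   0, 1,          /* r0 += r1 */
        OP_PRINT, 0,
        OP_HALT
    };
    uint32_t reg[4] = {0};
    size_t pc = 0;

    for (;;) {                   /* fetch... */
        uint8_t op = mem[pc++];
        switch (op) {            /* ...decode... */
        case OP_LOADI: {         /* ...execute */
            uint8_t r = mem[pc++];
            reg[r] = mem[pc++];
            break;
        }
        case OP_ADD: {
            uint8_t a = mem[pc++], b = mem[pc++];
            reg[a] += reg[b];
            break;
        }
        case OP_PRINT:
            printf("r%d = %u\n", (int)mem[pc], (unsigned)reg[mem[pc]]);
            pc++;
            break;
        case OP_HALT:
            return 0;
        }
    }
}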

Native virtualization does less, in that it requires the host and guest CPUs to be the same. That constraint buys a lot of speed compared to emulation: native virtualization implementations typically run at native or near-native speed. Native virtualization lets you run an unmodified copy of a different operating system. Examples include VMware, Parallels Desktop, and HP's Integrity Virtual Machines.

In the approach often called "paravirtualization," the hypervisor—the software agent that manages the partitioning of the system—collaborates with the operating system, taking over some of its roles to make the process more efficient. The hypervisor presents an API of explicit calls that the (modified) guest operating system makes in place of privileged operations. Paravirtualization typically runs at native or near-native speed. Example implementations include Parallels Workstation, Xen, and IBM's VM.
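Here is a conceptual sketch of the idea, with all names invented (Xen and other hypervisors define their own hypercall numbers and calling conventions). Instead of executing a privileged instruction and forcing the hypervisor to trap and emulate it, the modified guest kernel asks the hypervisor directly:

/* A conceptual sketch of paravirtualization; every name here is
   invented for illustration. */
#include <stdint.h>

enum hypercall_no { HCALL_SET_PAGE_TABLE, HCALL_YIELD_CPU };

/* Stand-in for the real mechanism (a trap instruction or a call
   into a shared hypervisor page); here it is just a stub. */
static long hypercall(enum hypercall_no op, uint64_t arg)
{
    /* ... transfer control to the hypervisor ... */
    (void)op; (void)arg;
    return 0;
}

/* An unmodified kernel would write the page-table base register
   directly, which the hypervisor must trap and emulate. A
   paravirtualized kernel asks the hypervisor to do it instead,
   which is cheaper and lets the hypervisor batch the work. */
static void switch_address_space(uint64_t new_page_table)
{
    hypercall(HCALL_SET_PAGE_TABLE, new_page_table);
}

int main(void)
{
    switch_address_space(0x1000); /* pretend page-table address */
    return 0;
}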

In operating system-level virtualization, the guest applications not only have to be written for the same CPU as the host, but for the same OS as well. A single OS kernel serves all the virtual environments, which appear to be separate servers of the same type as the host. Performance is at native speed. Examples include Virtuozzo, Solaris Containers, and FreeBSD Jails.
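You can get a feel for this style of virtualization with nothing but standard POSIX calls: confine a child process to a subtree of the filesystem with chroot(2) and drop its privileges before running a program. The guest-root path and uid below are placeholders, and the sketch must be run as root. Real products add resource limits, network isolation, and per-environment process tables, but like this sketch they all share one host kernel, which is why they run at native speed.

/* A bare-bones cousin of OS-level virtualization using only
   standard POSIX calls. Must run as root; paths and uid are
   placeholders. */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                        /* child: the "container" */
        if (chroot("/srv/guestroot") != 0  /* placeholder guest root */
            || chdir("/") != 0) {
            perror("chroot");
            _exit(1);
        }
        if (setuid(1000) != 0) {           /* drop root inside */
            perror("setuid");
            _exit(1);
        }
        execl("/bin/sh", "sh", (char *)NULL);
        perror("execl");                   /* only reached on failure */
        _exit(1);
    }

    int status;
    waitpid(pid, &status, 0);              /* parent: wait for it */
    printf("guest exited with status %d\n", WEXITSTATUS(status));
    return 0;
}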

Server- or operating system-level virtualization offers all sorts of benefits, not limited to expanding Apple's customer base (although that would be a nice benefit to Apple). IT managers can use server virtualization to manage resources more efficiently; for example, by eliminating a server dedicated to the sporadic running of some legacy application. And the ability to partition off the resources of the system by creating several virtual servers with different permissions can solve a lot of security problems.

Virtualizing the Application

But what's really getting a lot of attention right now is virtualization as applied to the application. Again, there are various approaches.

One approach is not new—the application virtual machine approach. The goal here is to let application binaries run on different operating systems and computer architectures. Applications are compiled to a standard portable binary format, and a VM installed on the target machine executes the binary. Familiar examples include p-code in the UCSD P-System and, more recently, the Java Virtual Machine (JVM) and Microsoft's Common Language Runtime (CLR) for .NET.
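For flavor, here is a toy stack-based VM in the spirit of p-code (the instruction set is invented). The same byte stream runs unmodified on any host to which this interpreter has been ported; that portability, not raw speed, is the point:

/* A toy stack-based virtual machine; the bytecode format is
   invented for illustration. */
#include <stdio.h>
#include <stdint.h>

enum { B_PUSH, B_ADD, B_MUL, B_PRINT, B_HALT };

static void run(const uint8_t *code)
{
    int32_t stack[64];
    int sp = 0;                      /* points past the top of stack */

    for (size_t pc = 0;;) {
        switch (code[pc++]) {
        case B_PUSH: stack[sp++] = (int8_t)code[pc++]; break;
        case B_ADD:  sp--; stack[sp-1] += stack[sp];   break;
        case B_MUL:  sp--; stack[sp-1] *= stack[sp];   break;
        case B_PRINT: printf("%d\n", stack[sp-1]);     break;
        case B_HALT: return;
        }
    }
}

int main(void)
{
    /* (2 + 3) * 4; the same bytes run on any host with this VM */
    const uint8_t program[] = {
        B_PUSH, 2, B_PUSH, 3, B_ADD, B_PUSH, 4, B_MUL, B_PRINT, B_HALT
    };
    run(program);
    return 0;
}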

But while the use of application virtual machines might be called "application virtualization," that term is now commonly applied to an approach that is only becoming practical thanks to fast LANs: on-demand software delivery through application streaming.

Application streaming contrasts with server-based computing in that, in application streaming, the application runs on the desktop, not on the server. But it is not installed on the desktop; it streams, in an on-demand fashion, to the workstation.

For this to work, there needs to be software running on the workstation that implements a sealed-off virtual environment, a sandbox, in which the application executes. This is the virtualization.

Once you've started running the application—and chances are only 10 to 15 percent of its code is needed to start—further components can download in the background, and components that have already arrived can be cached locally and reused.
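The client side of such a scheme might look roughly like the following sketch. The streaming protocol, the fetch_component() function, and the cache directory are all hypothetical stand-ins; the real products each have their own mechanisms. Components are looked up in a local cache first, and only missing pieces are pulled from the server before being loaded into the running process with dlopen(3):

/* A sketch of an application-streaming client; fetch_component()
   and the cache path are hypothetical. Compile with -ldl on Linux. */
#include <stdio.h>
#include <unistd.h>
#include <dlfcn.h>

/* Hypothetical: download one component from the streaming server
   into the local cache. A real client would authenticate, verify,
   and decompress; here we just pretend it succeeded. */
static int fetch_component(const char *name, const char *cache_path)
{
    printf("streaming %s from server into %s\n", name, cache_path);
    return 0; /* pretend success */
}

/* Load a component on demand; a cache hit needs no network at all. */
static void *load_component(const char *name)
{
    char cache_path[512];
    snprintf(cache_path, sizeof cache_path,
             "/var/cache/appstream/%s.so", name); /* placeholder dir */

    if (access(cache_path, R_OK) != 0 &&          /* not cached yet? */
        fetch_component(name, cache_path) != 0)
        return NULL;

    return dlopen(cache_path, RTLD_NOW | RTLD_LOCAL);
}

int main(void)
{
    void *spellcheck = load_component("spellcheck"); /* on first use */
    if (!spellcheck)
        fprintf(stderr, "component unavailable: %s\n", dlerror());
    return 0;
}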

One benefit of this is that the application executes at native speed, since it is running locally. Another benefit is that, once the application has arrived at the target computer—or better make that "device," since it need not be a workstation—the device can run the application without being online. And the load on the server that streams the application is less, so more clients can run the application at once—or fewer back-end servers will be required to serve the same number of clients.

Among the companies active in the application streaming area are Softricity (www.softricity.com); Thinstall (whose name reflects the company's roots in the older thin-client paradigm, www.thinstall.com); Appstream (whose name is clearly more trendy, www.appstream.com); Ardence (www.ardence.com); and Trigence (www.trigence.com).

Microsoft and Apple

But back to Microsoft and Apple, because—well, because technology writers are required by a little-known law to obsess about those two companies. You probably suspected it all along.

A few points about virtualization on Macs: VMware (www.vmware.com) should be vying with Parallels for attention in the Windows-on-Mac market by the time you read this. And there are those who think that Boot Camp is only a placeholder for Apple's own virtualization product—unlikely, one would think. Boot Camp, which lets you boot Windows natively on Apple hardware, is a nice complement to the virtualization solutions and absolutely the best way to run PC games on a Mac.

In the flurry of attention being given to third-party virtualization products, it shouldn't be overlooked that Apple's Rosetta technology represents another kind of virtualization. Rosetta was developed for Apple by Transitive and is based on its QuickTransit software. It lets applications written for Apple's PowerPC machines run without modification on the new Intel-based machines and works by on-the-fly instruction translation. It works surprisingly well for what it has to do; or to put it another way, it's a dog, but a surprisingly frisky dog. Again, some would not call this virtualization.
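For the curious, the trick that separates a translator of this kind from a plain emulator can be sketched in a few lines (everything here is invented; QuickTransit's real machinery is far more elaborate). Each block of guest code is translated once, cached, and re-executed from the cache thereafter, so the translation cost is amortized over every subsequent run of that block:

/* A toy illustration of dynamic binary translation with a
   translation cache; all of it is invented for illustration. */
#include <stdio.h>
#include <stdint.h>

typedef void (*host_block)(uint32_t *reg);

/* "Translated" host code for one guest block. In a real translator
   this would be freshly generated machine code, not a C function. */
static void block_0(uint32_t *reg) { reg[0] += reg[1]; }

#define MAX_BLOCKS 16
static host_block cache[MAX_BLOCKS];    /* guest pc -> host code */

static host_block translate(uint32_t guest_pc)
{
    printf("translating guest block at %u (first time only)\n",
           (unsigned)guest_pc);
    return block_0;                     /* stand-in for generated code */
}

int main(void)
{
    uint32_t reg[2] = { 2, 3 };

    for (int i = 0; i < 3; i++) {       /* guest loops over block 0 */
        uint32_t guest_pc = 0;
        if (!cache[guest_pc])           /* cache miss: translate once */
            cache[guest_pc] = translate(guest_pc);
        cache[guest_pc](reg);           /* cache hit: run translated code */
    }
    printf("r0 = %u\n", (unsigned)reg[0]);  /* 2 + 3 + 3 + 3 = 11 */
    return 0;
}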

Although the various virtualization solutions for running Windows applications on Macintoshes no longer include Microsoft, the company was still claiming in September that it was looking at the possibility of supplying a version of Virtual PC for Apple's Intel-based Macs. Some of us will believe it when we see it.

On the Microsoft front, Softricity was swallowed by the Beast earlier this year, and Microsoft is on record that it is pursuing virtualization in Windows at all three levels—machine, operating system, and application—and the Softricity acquisition supplies it with the third of those three levels.

Another virtualization company that Microsoft seems tight with is XenSource (www.xensource.com), and XenSource's Simon Crosby takes the two companies' technology swapping and cooperation to mean that Microsoft has acknowledged that his Xen hypervisor "is the hypervisor to beat."

Gartner notes that Microsoft disputes the scenario its analysts outlined (the one sketched in the first paragraph of this article), but in fact Microsoft has acknowledged that operating system partitioning through OS-level virtualization is an important component that will come to market in a few years.

So didn't they agree with Gartner? At least virtually?
