The clearest indication that LinuxWorld Expo, held last April in Boston, isn't a gathering for programmers was the decaf soda offered during the breaks. Even the caffeinated soda floating in the tubs was ordinary Pepsi rather than full-metal-jacket coding-session fuel.
Perhaps that's just the difference between corporate development practices and the Other Side?
As with last summer's Linux Symposium, the overall message was that the Linux kernel and its Free/Open-Source Software entourage have gone mainstream. Unlike the early, heady days, strong corporate support and participation are now pushing the code into large data centers and office deployments. This need not be A Bad Thing, but it does invalidate some widespread assumptions about what's important to the coders.
Even though blade servers may seem disconnected from the world of embedded programming, there's an interesting link that hasn't gotten a lot of attention. Here's how some of it played out in Boston.
The Blade of Economics
The notion that an operating system should mediate hardware access among myriad application programs has recently collided with Moore's Law. Standard data-center practice now runs a single application program, perhaps a web or database server, on a single CPU, with the OS relegated to providing memory management and I/O control. Although the system may have an assortment of daemons running in the background, the hardware is dedicated to a single application that can use the entire CPU without contention.
While Moore's Law makes those CPUs cheap and readily available, the real justification comes from reduced system administration effort. Configuring, verifying, troubleshooting, and maintaining a nontrivial server application requires so much effort that the complication of running multiple servers on a single system just isn't worth it, particularly when the underlying hardware occupies a single blade in a server rack.
Most servers have an average load far smaller than their peak load, with numbers at the conference suggesting 10 to 20 percent as typical. Because the hardware literally has nothing else to do, the OS spends much of its time doing nothing at a few billion instructions per second. A corporate data center seems not the place for idle-task apps that fold proteins, crack prime numbers, or search for signs of intelligent life.
That much idle hardware may not have been desirable, but it became indefensible when large data centers collided with another economic reality: the power company. It turns out that electrical power is now the largest expense for many data centers, outstripping even the amortized cost of the server hardware. The total electrical load for large-scale centers is in the multimegawatt range, with essentially all of that power becoming heat that must be removed by chilled-air handlers. In fact, access to power and cooling may limit the number of racks a data center can refresh with next-generation hardware.
Here's a useful number: With electricity priced at $0.12 per kilowatt-hour, an always-on device dissipating 1 W costs $1 per year. That wall-wart cell-phone charger that you leave plugged in under your desk costs five bucks a year and your fancy LCD panel burns eight bucks a year when it's turned off. Run your own numbers and see, but you're spending closer to a buck a watt a year than you might think.
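If you'd rather let the machine run those numbers, a quick sketch of the arithmetic (using the article's $0.12/kWh rate; the specific wattages are just the examples mentioned above) looks like this:

```python
# Annual cost of an always-on electrical load at the article's rate.
RATE = 0.12        # dollars per kilowatt-hour
HOURS = 24 * 365   # hours in a non-leap year (8760)

def annual_cost(watts):
    """Dollars per year for a device drawing `watts` continuously."""
    return watts / 1000 * HOURS * RATE

print(annual_cost(1))    # one always-on watt: about a dollar a year
print(annual_cost(5))    # 5 W wall-wart charger: about five bucks
print(annual_cost(8))    # 8 W of LCD standby power: about eight bucks
```

At $0.12/kWh, one watt works out to $1.05 per year, which is where the handy buck-a-watt-a-year rule of thumb comes from.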
On the high end, a 24/7 data center with a megawatt of servers and chillers uses a megabuck of electrical power every year. There may be a bulk discount, but any bean counter worth her salt will object to having only a fifth of that tab produce billable results.
So, if you can't run more than one OS per blade and more than one application per OS, can't get more than 20 percent hardware utilization, and can't afford to waste megabucks on idle megawatts, what can you do?
The answer seems to be virtualization: time-sharing entire operating systems on a single hardware CPU. Sounds crazy, but it's basically the only way out right now.