Jake Sorofman is vice president at rPath, a vendor of technology for virtualizing software applications and managing the lifecycle of virtual appliances and application images for cloud and virtualized environments. You can contact Jake at [email protected].
I had a recent conversation with an industry analyst who made a stark observation about the state of enterprise systems management. He suggested that the way enterprises construct and maintain software systems today is far more art than science.
His analogy was useful: "Can you imagine building an airplane without a blueprint?"
I, for one, cannot -- particularly as I write this at 35,000 feet.
But his point was clear: software systems are complex and -- in the world of IT operations -- we leave far too much to chance. We fly blind.
Systems are typically cobbled together based on an unrepeatable trial-and-error approach of "making stuff work." Once systems are deployed, there is no way to consistently know what is running inside -- and what impact changes will have.
So, changes are avoided altogether or made with a trembling hand.
Software updates are rarely welcome events inside the datacenter because the law of unintended consequences is the prevailing force. But the reality is that change happens -- and it happens more frequently than ever as a result of two key factors:
- The systems we deploy are more complex than ever -- an interdependent combination of application and infrastructure components that together make up each running system. These components evolve on their own independent lifecycles, which means that systems are in a constant state of change -- and, often, conflict.
- Release cycles are rapid and iterative. Agile and other iterative development processes make application development more effective, but they place new burdens on IT operations, which is forced to consume change more frequently than ever.
Both of these factors -- system complexity and the rapid pace of change -- are forcing IT operations to the brink. This presents IT leadership with three options:
- Throwing people at the problem -- which, of course, is a wholly unpopular option for resource-constrained, cost-conscious companies today. As a result, the old standby solution of increasing IT overhead to deal with complexity is often replaced by
- Automation of existing processes -- which is popular both in principle and practice, but it risks lulling IT into a false sense of security in what remains a fundamentally flawed process. The problem is the way we create systems in the first place; without a proper model to represent the system, its bill of materials and its specific dependencies, organizations are flying blind and automation simply causes bad things to happen faster. The better option, of course, is to
- Create software systems that are "correct by construction" -- much like complex aircraft are engineered from the start with a deep understanding of parts and interdependency, this approach is about constructing software systems from these same principles. Rather than treating systems as a black box, this approach builds a model that drives the entire lifecycle of the system -- enabling automated deployment and maintenance, taming complexity and putting to rest the law of unintended consequences.
Needless to say, my vote is for option #3.
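To make the "correct by construction" idea concrete, here is a minimal sketch of what a model-driven system might look like: the system is described by an explicit model -- a bill of materials plus declared dependencies -- and the model is validated before anything is deployed, so conflicts surface up front rather than in production. All class and field names here are hypothetical illustrations, not any vendor's actual product API.

```python
from dataclasses import dataclass, field


@dataclass
class Component:
    """One part of the running system, with its declared dependencies."""
    name: str
    version: str
    depends_on: list = field(default_factory=list)  # names of required components


@dataclass
class SystemModel:
    """The 'blueprint': an explicit model of everything in the system."""
    components: list

    def bill_of_materials(self):
        # What is actually running inside, made explicit and queryable.
        return {c.name: c.version for c in self.components}

    def missing_dependencies(self):
        # Dependencies that are declared but not satisfied by the model --
        # caught at construction time, before deployment.
        present = {c.name for c in self.components}
        return {
            c.name: [d for d in c.depends_on if d not in present]
            for c in self.components
            if any(d not in present for d in c.depends_on)
        }


model = SystemModel([
    Component("web-app", "2.1", depends_on=["app-server", "libssl"]),
    Component("app-server", "5.0", depends_on=["jvm"]),
])

print(model.bill_of_materials())     # the explicit blueprint
print(model.missing_dependencies())  # gaps found before anything ships
```

Trivial as the sketch is, it captures the shift the article argues for: instead of discovering what a system contains by trial and error after deployment, the model is the source of truth that automation can safely act on.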