Erik Troan is the founder and chief technology officer of rPath. He can be reached at firstname.lastname@example.org.
Running IT operations would be easy if nothing ever changed. Set it up once, lock the door, and go surfing. Maybe once in a while you'll replace a hard drive or power supply, but, on the whole, large systems will keep turning stable input into stable output.
Of course, there is no such thing as an unchanging environment. Applications change constantly, security vulnerabilities are discovered, hardware fails, load changes … the list is endless. All of this turns the role of IT ops into the role of change management. Some change has to happen (or some change has happened externally), and ops has to manage that change while keeping everything stable.
Three things are involved in change management: the old state, the change itself, and the new state. In order to truly understand what has changed and what is changing, you need to track and control two of the three (since the third is the result of the other two).
Tracking the new state is clearly a good idea; you ought to know what your systems look like right now. Once you can describe the new state, you can capture the old state as well; just make sure you record it before you apply the change. As long as you have a place to record all of these states, you've captured change.
Now take all of those states, give each a version number, and toss them into a single repository. All of your historical systems are explicitly described, and the changes from one to another are implicitly described. You can inspect, query and report on change in your operational environment. With just a touch of automatic provisioning, you can deploy changes from the repository into the operational environment to get controlled migrations. If something goes wrong, you can undo those changes.
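As a minimal sketch of that idea (the state model here is a hypothetical mapping of package names to versions; a real system model captures far more), a repository of versioned states makes each change implicit in the difference between two versions, and rollback just means re-deploying an earlier state:

```python
# A minimal sketch of a versioned repository of system states.
# Each "state" is a hypothetical model: a mapping of package -> version.

class StateRepository:
    def __init__(self):
        self.versions = []  # list of recorded states; index is the version number

    def commit(self, state):
        """Record a new system state; returns its version number."""
        self.versions.append(dict(state))
        return len(self.versions) - 1

    def diff(self, old, new):
        """The change between two versions, derived from the states themselves."""
        before, after = self.versions[old], self.versions[new]
        return {
            "added":   {k: v for k, v in after.items() if k not in before},
            "removed": {k: v for k, v in before.items() if k not in after},
            "changed": {k: (before[k], after[k])
                        for k in before.keys() & after.keys()
                        if before[k] != after[k]},
        }

    def rollback_target(self, version):
        """The state to re-deploy in order to undo everything after `version`."""
        return dict(self.versions[version])


repo = StateRepository()
v0 = repo.commit({"openssl": "1.0.1f", "nginx": "1.4.6"})
v1 = repo.commit({"openssl": "1.0.1g", "nginx": "1.4.6"})

print(repo.diff(v0, v1))  # the implicit change: openssl was upgraded
```

Note that the repository stores only states; the diff is computed, never recorded, which is exactly why tracking two of the three (old state, new state) is enough.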
All of this depends on an ability to describe (model) what systems should look like. This type of modeling is fundamental to getting change under management.
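As a toy illustration (the model schema and the observed state below are hypothetical, not any real tool's format), such a model can be as simple as a declarative description that running systems are checked against, with any divergence reported as drift:

```python
# A hypothetical declarative model of what a system should look like,
# and a check of an observed system against it.

model = {
    "packages": {"openssl": "1.0.1g", "nginx": "1.4.6"},
    "services": {"nginx": "running"},
}

observed = {
    "packages": {"openssl": "1.0.1f", "nginx": "1.4.6"},
    "services": {"nginx": "running"},
}

def drift(model, observed):
    """Report where the observed state diverges from the model."""
    report = {}
    for section, wanted in model.items():
        actual = observed.get(section, {})
        bad = {k: (v, actual.get(k)) for k, v in wanted.items()
               if actual.get(k) != v}
        if bad:
            report[section] = bad
    return report

print(drift(model, observed))
# -> {'packages': {'openssl': ('1.0.1g', '1.0.1f')}}
```

Once a model like this exists, "bringing a system into compliance" reduces to applying whatever the drift report shows.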
But the state of the practice in IT today is far different from what I've described. What software deployments should look like is rarely defined at all. What is running and what is changing are equally opaque. And how do you bring a system into compliance, or reverse a change to restore a previous state? These things are just plain hard to accomplish in IT today.
Of course, the problem only gets worse as scale compounds and change accelerates. Solving this problem -- bringing control to system management and change -- starts with deep system modeling and a version control repository. Once that is in place, dealing with change -- even at massive scale -- is consistent, predictable and economical.
Agree? Disagree? Post a comment with your thoughts.