Managing Dependencies
Managing dependencies is the most critical requirement of binary management. If your build-to-release process does not manage dependencies, then you cannot clearly see what components were used to create the executables. Yes, you can generate traditional bill-of-material reports that show the files checked out of your SCM tool, but these reports do not expose files found outside the local build directory. Dependency management and orchestration provides a complete audit trail showing what source code and versions were used to create the final deployable objects. Nothing else can do this. Dependencies can be difficult to trace and often impossible to understand with manual scripts. That's why you need to implement a build-management solution that offers binary-management services to ensure that when the build executes, a dependency-scanning tool watches exactly what is called and used by the compilers and linkers. This is familiar territory to mainframe and legacy UNIX teams. Figure 1 is an example footprint that exposes not only the files managed within IBM ClearCase, but also the files not under ClearCase control; notice rt.jar is listed as "Not in VOB." This footprint also exposes all of the environment variables used during the build. This level of footprint goes far beyond what a simple bill-of-material report can show.
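To make the idea concrete, here is a minimal sketch of the kind of footprint described above. It records each build input with a content hash, flags whether the file lives under the version-controlled tree (the "Not in VOB" distinction in Figure 1), and captures the build environment. The script and its function names are illustrative assumptions, not part of ClearCase or any real tool.

```python
import hashlib
import json
import os
import sys

def sha256_of(path):
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def make_footprint(input_paths, controlled_root):
    """Record each build input with its hash and whether it lives
    under the version-controlled tree (e.g., a ClearCase VOB)."""
    root = os.path.abspath(controlled_root)
    entries = []
    for path in input_paths:
        full = os.path.abspath(path)
        entries.append({
            "path": full,
            "sha256": sha256_of(full),
            "controlled": full.startswith(root + os.sep),
        })
    return {
        "inputs": entries,
        # Capture the environment too, as the Figure 1 footprint does.
        "environment": dict(os.environ),
    }

if __name__ == "__main__" and len(sys.argv) > 2:
    # Usage: footprint.py <controlled-root> <input> [<input> ...]
    print(json.dumps(make_footprint(sys.argv[2:], sys.argv[1]), indent=2))
```

A real dependency-scanning tool intercepts what the compilers and linkers actually open; this sketch only hashes a list you hand it, which is the part an audit report consumes.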
If you're a Windows or Java developer and think that the use of ad hoc scripts is the only way to complete the job, well, I understand. I thought the same thing years ago when I was told I could not use my own compile JCL to release my mainframe changes to production. But there are real benefits if you implement a build system that actually manages your binaries:
- Developers using continuous integration builds could benefit from performing truly iterative (incremental) builds. If you have a need for speed, the easiest way to get there is to stop building objects that are already up-to-date. The best way to accomplish a continuous integration build process is to build only the changes each time a build is launched. The concept of continuous integration is not new. Just go ask one of the mainframe old timers. They've been building in an iterative, continuous method for years. They do it best: they check out, make a change, and check in. The check-in launches a build that only builds what has changed. They never need to perform a "clean all" build (and would laugh at the thought). If they did, some large systems could take days to compile. This point alone should pique your interest.
- Make changes outside your IDE as easy as changes inside it. I see Java developers struggling with code and package refactoring outside the IDE all the time. It's incongruent that developers demand their IDEs handle processes such as code refactoring at the push of a button, but when building outside the IDE they are required to manually revisit dozens of Ant scripts to reflect the refactoring changes performed automatically by the IDE. Isn't it time for your build system to be driven by the IDE project file, eliminating the need to manually update Ant scripts every time refactoring is required? There are better ways.
- Binary footprints expose the problem areas in a fast and efficient way. Distributed developers (Java or Windows) could benefit from what the legacy UNIX and mainframe teams have used as a critical tool for years: the ability to look inside the binary by running an "identification" program that shows the precise artifacts used to create the binary. There is no faster way to identify which source code or library broke the build or caused a runtime failure.
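The incremental rule from the first bullet, rebuild only what is out of date, comes down to a timestamp comparison, the same rule make has applied for decades. Here is a minimal sketch; the function names are illustrative, not from any particular build tool.

```python
import os

def needs_rebuild(target, sources):
    """Rebuild only if the target is missing or any source file
    is newer than it: the classic make-style up-to-date check."""
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(s) > target_mtime for s in sources)

def incremental_build(targets):
    """targets maps each output file to (list of inputs, build function).
    Only out-of-date outputs are rebuilt; up-to-date ones are skipped."""
    rebuilt = []
    for out, (sources, build) in targets.items():
        if needs_rebuild(out, sources):
            build(out, sources)
            rebuilt.append(out)
    return rebuilt
```

On a system with thousands of objects, skipping everything that is already current is what turns a days-long "clean all" into a minutes-long check-in build.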
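The second bullet argues that the build should be driven by the IDE project file rather than hand-maintained Ant lists. As one possible sketch, an Eclipse workspace already records its source folders and jars in the `.classpath` file, so a command-line build can read that file directly and automatically pick up whatever the IDE's refactoring tools last wrote:

```python
import xml.etree.ElementTree as ET

def classpath_entries(classpath_xml):
    """Parse the contents of an Eclipse .classpath file and return
    (source_folders, library_jars). Entries of other kinds, such as
    the output folder, are ignored in this sketch."""
    root = ET.fromstring(classpath_xml)
    sources, libs = [], []
    for entry in root.findall("classpathentry"):
        kind = entry.get("kind")
        path = entry.get("path")
        if kind == "src":
            sources.append(path)
        elif kind == "lib":
            libs.append(path)
    return sources, libs
```

When a package is renamed inside the IDE, the `.classpath` and project files are rewritten for you; a build that reads them needs no manual script edits at all.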
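The "identification" program in the third bullet is exactly what the classic UNIX `what(1)` command does: it scans a binary for SCCS-style `@(#)` markers embedded at compile time and prints the text that follows each one. A minimal sketch of that scan:

```python
def what(data):
    """Scan binary data for SCCS-style '@(#)' identification strings,
    in the spirit of the UNIX what(1) command. Each result runs from
    the marker up to the next NUL, newline, quote, '>' or backslash."""
    marker = b"@(#)"
    terminators = (b"\0", b"\n", b'"', b">", b"\\")
    results = []
    start = 0
    while True:
        i = data.find(marker, start)
        if i == -1:
            break
        j = i + len(marker)
        end = j
        while end < len(data) and data[end:end + 1] not in terminators:
            end += 1
        results.append(data[j:end].decode("ascii", "replace").strip())
        start = end
    return results
```

Run against a library or executable whose sources carry ident strings, this answers "which source files and versions are inside this binary?" in seconds, which is why it remains the fastest way to find the module that broke the build.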
Conclusion
If your organization is attempting to meet strict audit compliance but struggles with meeting the separation-of-duties requirement when it comes to the distributed platforms, consider taking a look at how the big boys do it. Legacy UNIX and mainframe developers went through the same growing pains that distributed platforms are currently experiencing. Meeting that separation-of-duties requirement is as simple as implementing a build-to-release management system that supports binary management, and letting someone other than a few core developers build the application on only one or two development machines.
The benefits of addressing the binary management step of the build-to-release process will pay off in both time and money, as well as in the maturation of the Java and Windows development process. And don't be concerned that meeting this requirement somehow interrupts your lean development techniques; it can actually improve them. Both production control and developers have lots to gain by solving this critical component of the development-to-release process.