Continuous delivery is the natural extension of continuous integration (CI). While CI aims to run a build after each check-in to give developers immediate feedback, continuous delivery has a more sweeping goal: it seeks to build, test, and deploy the final executable with each check-in. (The deployment here is to test systems, not production.) The idea is that at all times a project has an executable deliverable that is known to be safe for deployment. It might not be feature-complete, but it is capable of running.
Continuous delivery is slowly overtaking CI at sites that embrace this form of agile development because it encourages useful best practices in many areas and removes the problem of discovering unexpected defects during the deployment process. It also makes the team thoroughly familiar with deployment, removing the moments of bated breath that deployment entails at sites that rely on traditional operations.
One best practice that is fundamental to continuous delivery is putting everything in the version control system. By "everything," I do mean everything. Here is an excerpt from the seminal text on the topic, Jez Humble and David Farley's Continuous Delivery: "Developers should use [version control] for source code, of course, but also for tests, database scripts, build and deployment scripts, documentation, libraries, and configuration files for your application, your compiler and collection of tools, and so on so that a new member of your team can start working from scratch."
This is a radical position: how many of us put our compilers into a VCS? However, it solves a significant problem that arises rarely but can be terribly difficult: recreating old versions of the software. Almost anyone who has done maintenance programming has had the experience of being unable to recreate a defect because a change in one of the tools made the original binary irreproducible. This discipline also provides another benefit: the team can be certain that everyone is using the same set of documents and tools in development. There is no fear that team members overseas are working from different requirements or a newer version of the compiler. Everyone on the team is drawing from the same well.
However, fulfilling this mandate is no trivial task. At the recent Citcon conference in Boston, this topic came up for discussion in a session of CI aficionados. The first problem is that many development tools are not a simple binary with a few dynamic libraries; rather, they rely on OS libraries and must be installed (especially on Windows) to run correctly. This can be remedied in part by using virtual machines: set up the OS and the tools needed for the build automation in a VM, and then check in the entire VM. This works well, but it requires that you also build the product in the VM; otherwise, you have two separate versions of the environment, and they will inevitably get out of sync. (Linux and UNIX suffer less from this problem due to their lack of a registry. God bless the tool makers whose products place all the binaries and config files in a single directory!)
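To make the "same well" idea a little more concrete, here is a small sketch of how a team might verify that every machine's toolchain matches what the repository expects. The script and the tools.json manifest it reads are purely illustrative assumptions on my part, not something prescribed by the book; the point is simply that the pinned tool versions live in version control alongside the code they build.

    # check_toolchain.py -- illustrative sketch; the manifest name and format
    # are assumptions. It compares the tools installed on a developer machine
    # (or in the build VM) against versions pinned in the repository.
    import json
    import re
    import subprocess
    import sys

    def installed_version(command):
        # Return the first version-like string printed by "<command> --version".
        output = subprocess.run([command, "--version"],
                                capture_output=True, text=True, check=True).stdout
        match = re.search(r"\d+(\.\d+)+", output)
        return match.group(0) if match else None

    def main():
        # tools.json is checked into version control, e.g.:
        # {"gcc": "4.6.1", "make": "3.82"}
        with open("tools.json") as f:
            pinned = json.load(f)

        drift = []
        for tool, expected in pinned.items():
            actual = installed_version(tool)
            if actual != expected:
                drift.append("%s: expected %s, found %s" % (tool, expected, actual))

        if drift:
            sys.exit("Toolchain drift detected:\n" + "\n".join(drift))
        print("Toolchain matches the pinned manifest.")

    if __name__ == "__main__":
        main()

Such a check does not replace putting the tools themselves under version control, but it catches environment drift early and cheaply.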
A more obscure problem is that not all VCSs handle binaries well. Git, for example, was designed as a source code manager rather than a general-purpose VCS, and it has known difficulty handling very large projects or those containing many binaries. (If you check in tools and VMs, your project will automatically be large in SCM terms.) In this realm, commercial products tend to excel. Perforce, in particular, is known for the work it has put into fast handling of binary files on large projects.
Another challenge is the presence of passwords in scripts. The risk is partly offset because deployment in the continuous delivery model targets non-production systems, so passwords to test systems probably represent little exposure. For organizations that cannot accept even that risk, encryption can provide a solution.
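As one illustration of how such encryption might work (this is my own sketch, with hypothetical file names, not a prescription from the book), the following Python fragment keeps a deployment credential encrypted inside the repository while the key that unlocks it is supplied from outside the VCS, for example through an environment variable on the build server.

    # Illustrative sketch: deploy_password.enc is committed to version control;
    # the Fernet key in DEPLOY_KEY is not. Requires the "cryptography" package.
    import os
    from cryptography.fernet import Fernet

    def encrypt_secret(plaintext, key):
        # Run once by hand; commit only the resulting .enc file.
        with open("deploy_password.enc", "wb") as f:
            f.write(Fernet(key).encrypt(plaintext))

    def read_secret(key):
        # Called by the deployment script at run time.
        with open("deploy_password.enc", "rb") as f:
            return Fernet(key).decrypt(f.read())

    if __name__ == "__main__":
        key = os.environ["DEPLOY_KEY"].encode()  # the key never enters the repository
        print(read_secret(key).decode())

The same approach works for any credential a deployment script needs; only the key has to be managed outside version control.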
Finally, I should note that even the book I quoted above recommends against storing the binaries generated by a build in the VCS. This makes sense, as binaries tend to be large and numerous, and the whole point of putting everything in the VCS is precisely to be able to recreate those same binaries at a future point.
Personally, I don't think it's possible to put all files in the SCM for every project. Linux-based projects that use OSS tools probably stand the best chance of reaching this goal. However, I believe getting as close as is practical is a valuable endeavor. It provides the assurance that you can, at any moment, go back in time and recreate older versions of products, and that everyone is working from a single source of tools. In my view, these benefits alone outweigh the hassles that the extra discipline entails.
— Andrew Binstock
Editor in Chief
[email protected]
Twitter: platypusguy