Continuous integration (CI), the practice made popular by Agile methodologies, has seen tremendous adoption by development teams in the last few years. This wildfire-like spread of the practice has been the result of a common-sense approach to automation and information sharing. But traditional CI has been constrained: it is localized to one part of the lifecycle and provides only a partial picture of software quality. In this article, I examine the reasons for these constraints and suggest approaches for working beyond them.
Continuous Integration Foundation
Continuous integration is a practice made up of two components:
- Team members integrating their work frequently.
- Integration not degrading code quality.
The practice is rooted in the observation that the longer developers go without integrating their work, the more painful the eventual integration. However, frequent integration is only one part of continuous integration and is not sufficient on its own to constitute the practice. To practice continuous integration, the second part of our definition must also be met: Each integration should not degrade code quality. Critical to implementing continuous integration, then, is the ability to determine code quality. This is typically accomplished via testing; quality is not black and white but comes in shades of grey, and the more tests we run, the more accurate our determination of code quality will be.
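To make the second component concrete, here is a minimal sketch (my own illustration, not a prescribed implementation) of a quality gate that accepts an integration only if the number of passing tests does not drop. The pass-count metric is an assumed stand-in for "code quality":

```python
def integration_allowed(passing_before, passing_after):
    """A simplistic quality gate for an integration.

    Uses the passing-test count as a crude stand-in for "code quality";
    the integration is accepted only if that count does not drop.
    Real CI servers combine richer signals, but the principle is the same.
    """
    return passing_after >= passing_before

# A change that keeps all tests green is accepted...
print(integration_allowed(120, 120))  # True
# ...while one that breaks previously passing tests is rejected.
print(integration_allowed(120, 118))  # False
```

In practice the gate would be wired into the build server, but the rule it enforces is exactly the second component of the definition above.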
There is a force that counterbalances the desire for a clearer picture of code quality. Within continuous integration, it is important to determine quickly whether quality has been degraded. Consider what must happen when we find that the latest integration decreased the code quality: the offending code change must be either backed out or corrected. In either case, we're talking about another integration, and the longer we delay that integration, the more painful it is.
So we find that a balance is needed when practicing continuous integration. On the one hand, the more tests we run, the more accurate our determination of code quality will be. On the other hand, more tests mean longer test times and a longer wait before correcting any offending code change. Typically, the balance is struck by running unit (or fast running) tests as part of the continuous integration build process. This leaves a lot of testing on the table (functional tests, performance tests, regression tests, integration tests, and the list goes on). Can anything be done with these remaining tests?
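One way to picture this balance is to tag each test suite with an expected running time and admit only the fast suites into the tight CI loop, deferring the rest. The suite names, timings, and the 60-second budget below are illustrative assumptions, not taken from any particular tool:

```python
def split_suites(suites, budget_seconds=60):
    """Partition test suites into the tight CI loop and a deferred set.

    `suites` maps a suite name to its expected running time in seconds.
    Suites within the budget run on every integration; the rest are
    deferred to a later, slower stage.
    """
    ci_loop = {name: t for name, t in suites.items() if t <= budget_seconds}
    deferred = {name: t for name, t in suites.items() if t > budget_seconds}
    return ci_loop, deferred

# Illustrative suite timings (assumed, not measured):
suites = {"unit": 45, "functional": 1800, "performance": 7200}
fast, slow = split_suites(suites)
print(sorted(fast))  # ['unit']
print(sorted(slow))  # ['functional', 'performance']
```

Raising the budget buys a more accurate quality picture at the cost of a slower loop; lowering it does the opposite, which is precisely the trade-off described above.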
What About the Remaining Tests?
Any tests not performed as part of the CI loop are typically performed manually or with the aid of scripts or tools. Even when scripts or tools are used, they usually perform only a portion of the work and have to be coordinated manually, leading to a semi-automated process. The end result is a long delay between the time code changes enter the additional testing stage (beyond CI) and the time the results of those tests are available and acted upon. Presumably, the closer the discovery of a bug is to the time of its introduction, the smaller the effort and cost required to fix it. Thus, there is value in shortening the feedback loop for the tests beyond continuous integration.
If we didn't have to worry about keeping the CI loop so tight, we could include additional tests as part of the CI process. Continuous integration holds the promise of providing the automation framework needed to decrease the turnaround time on the longer running tests. More importantly, many teams that started out with continuous integration and a fast CI loop have gone on to automate additional tests to provide a more thorough view of the quality of their code base. Running these additional tests is not as fast as running the tests in the tight CI loop and takes a substantial amount of time; the turnaround can range anywhere from two hours to more than eight hours. In addition to running slow unit tests, some teams automate functional tests (which require that the application be deployed into a test environment), integration tests, system tests, and more. The possibilities depend to a large extent on the approach taken toward automation and the model for automating these processes outside the tight CI loop.
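The automation of tests beyond the tight loop can be sketched as a sequence of stages, each running only if the previous stage passed. The stage names and pass/fail stubs below are hypothetical, chosen only to show the control flow:

```python
def run_stages(stages):
    """Run named test stages in order, stopping at the first failure.

    `stages` is a list of (name, run) pairs, where `run` is a callable
    returning True on success. Returns the names of the stages that
    passed and the name of the failing stage (or None if all passed).
    """
    passed = []
    for name, run in stages:
        if not run():
            return passed, name
        passed.append(name)
    return passed, None

# Hypothetical pipeline: the fast unit stage passes, the slower
# functional stage fails, so the integration stage never runs.
stages = [
    ("unit", lambda: True),
    ("functional", lambda: False),
    ("integration", lambda: True),
]
print(run_stages(stages))  # (['unit'], 'functional')
```

Stopping at the first failure keeps feedback as early as possible: a broken change is reported after the cheap stages, without paying for the expensive ones.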
In the rest of this article, I present some alternative ways in which this can be accomplished. I will cover a build-centric approach in the section about Staged CI, a process-centric approach in the section about Chaining Processes, and a lifecycle-centric approach in the section about Build and Release Pipelines.