To address assurance from an agile perspective, the practices in Table 4 should be followed.
| # | Practice |
|---|----------|
| 1 | When a requirement cannot be effectively verified using execution tests, use the design (not the code) as the focus of evidence of correctness and completeness. |
| 2 | Ensure that requirements that are collected include assurance objectives. |
| 3 | Use test-driven development and other techniques to continually verify compliance of the implementation with ongoing design (the AM effort; see below) as well as with requirements. |
| 4 | Augment practice 3 with randomized testing to empirically assess actual assurance. |
Thus, for assurance-related requirements, our focus shifts to design rather than implementation, leaving implementation as a concurrent activity. A software implementation is merely a low-level design, and if a high-level design must remain authoritative for some aspects, then the low-level design is, by definition, subordinate to it for those aspects. We use the term "agile assurance" (AA) to refer to our approach.
To explain this approach, we discuss its implementation within the context of XP. Let's take the elements of our agile assurance approach, one by one.
Design as Evidence. Practice 1 in Table 4 makes the design the focus of evidence in an AA effort. A design is not an objective in its own right; rather, it is an instrument for verifying that assurance is achieved. Further, a design must be documented to the extent necessary to achieve the assurance objectives, including proving that the code agrees with the design intent and ensuring that future maintenance continues to comply with it. For this to be achievable, the design must be maintained in a concrete and durable form, such as unit tests, a document, code comments, rules specified via a design language, or any means appropriate for the project. Thus, because a high-assurance design must be sufficiently documented, in the rest of this article we use the term "design" as synonymous with "design specification."
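As a minimal illustration of keeping design in a "concrete and durable form," a design rule can be encoded directly as a unit test that survives alongside the code. The `AuditLog` class and its append-only rule below are hypothetical examples, not drawn from the article:

```python
import unittest

class AuditLog:
    """Hypothetical component whose design rule is: the log is append-only."""
    def __init__(self):
        self._entries = []

    def append(self, entry):
        self._entries.append(entry)

    def entries(self):
        # Return a copy so callers cannot mutate the log (the design intent).
        return list(self._entries)

class AuditLogDesignRules(unittest.TestCase):
    """Design specification captured as an executable test."""
    def test_log_is_append_only(self):
        log = AuditLog()
        log.append("login: alice")
        snapshot = log.entries()
        snapshot.clear()  # Tampering with the returned view...
        # ...must not affect the log itself.
        self.assertEqual(log.entries(), ["login: alice"])

if __name__ == "__main__":
    unittest.main()
```

Because the rule lives in the test suite, both the "code agrees with design intent" check and the "future maintenance complies" check run automatically on every build.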
Here, we propose an approach for capturing design: Agile Model Driven Development (AMDD), the application of the principles and practices of AM on an agile project (www.agilemodeling.com/essays/amdd.htm).
Identification of Stakeholders and Requirements Collection. In AMDD, stakeholders are defined as anyone who is a direct user, indirect user, manager of users, senior manager, operations staff member, support (help desk) staff member, developer working on other systems that interact with the one under development, or maintenance professional potentially affected by the development/deployment of a software project (www.agilemodeling.com/essays/activeStakeholderParticipation.htm).
Requirements such as security, reliability, failure recovery, disaster recovery, maintainability, and manageability cannot be treated as implicit. An agile design in a high-assurance environment requires that all assurance requirements be expressed through stakeholder stories, which in turn requires that all of the application's stakeholders be represented in requirements collection. Those who have the most dynamic requirements, typically the end users, should be met with each iteration.
Continual Design Verification Augments TDD. Agile proponents know the value of test-driven development (TDD), in which a test is written first and then just enough production code is written to fulfill that test. Recall that a primary purpose of a design is to explain why certain features exist in the application. AMDD accommodates TDD by differentiating between high-level design, which uses agile modeling, and detailed design, which uses TDD to define detailed behavior.
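A minimal sketch of the TDD cycle: the test is written before the production code, and then only enough code is written to make it pass. The `clamp` function is a hypothetical example chosen for brevity:

```python
# Step 1: write the test first; it defines the required detailed behavior.
def test_clamp_limits_value_to_range():
    assert clamp(5, low=0, high=10) == 5    # in range: unchanged
    assert clamp(-3, low=0, high=10) == 0   # below range: raised to low
    assert clamp(42, low=0, high=10) == 10  # above range: lowered to high

# Step 2: write only enough production code to make the test pass.
def clamp(value, low, high):
    return max(low, min(value, high))

test_clamp_limits_value_to_range()
```

In AMDD terms, tests like this one carry the detailed design, while the high-level design (why a clamped range exists at all) is captured through agile modeling.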
An agile design must be continually verified, in the same manner that the implementation must be continually tested. Verification must occur from two perspectives:
- Agreement between the design and the implementation (that is, as-built equals as-designed).
- Verification that the design meets assurance requirements.
Empirical Testing. Assurance requirements tend to be negative requirements, specifying that something must not be possible, which makes it difficult to create a high-coverage test suite for them. Some amount of empirical testing with randomly chosen inputs is therefore necessary to measure the application's actual soundness so that it can be compared with the designed soundness.
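A minimal sketch of randomized testing against a negative requirement. The sanitizer, its forbidden-character rule, and the trial count are all illustrative assumptions, not part of the article:

```python
import random
import string

# Hypothetical negative requirement: sanitized output must never
# contain characters commonly usable for injection.
FORBIDDEN = set("<>\"';")

def sanitize(text):
    # Hypothetical implementation under test: strips forbidden characters.
    return "".join(ch for ch in text if ch not in FORBIDDEN)

def random_input(max_len=64):
    # Draw inputs at random rather than enumerating cases by hand.
    alphabet = string.printable
    return "".join(random.choice(alphabet)
                   for _ in range(random.randrange(max_len)))

def empirical_test(trials=10_000):
    # Each trial checks that the negative requirement holds for one input.
    for _ in range(trials):
        out = sanitize(random_input())
        assert not (set(out) & FORBIDDEN), f"violation: {out!r}"

empirical_test()
```

Random inputs cannot prove the requirement, but the absence of violations over many trials gives an empirical measure of actual soundness to set against the designed soundness.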