Maybe you've worked on an ideal project. You had a project plan with a reasonable schedule, you knew your requirements, and you could do reviews and inspections, build test harnesses, and do exploratory and planned testing.
More often, I've worked on imperfect projects. On these projects, developers and testers feel they don't have the time to do their jobs right: reviews and inspections are done incompletely, if at all; only exploratory testing is done; the requirements change; and new features are designed and implemented on the fly. There is usually an excuse given for these projects: they are under severe time pressure.
But even projects under severe time pressure can be closer to ideal. They can be planned and executed if their quality priorities and release criteria (what success means) are known. In fact, these projects may not need to be under time pressure at all.
Each project is different. Some involve products that are considered high quality because they have low defect levels or a specific customer- or user-requested feature set. Other products are considered high quality if they meet a specific time to market. Most commercial applications and in-house corporate projects have some balanced combination of all three attributes of customer-perceived value. Quality means something different for each project. Those differences change how you develop and test. But whether you work on in-house corporate projects or commercial projects, you have customers to satisfy.
Consider what success means for each specific project. Customers and users perceive a certain value for each application. That value may have several attributes, and those attributes define quality for your project. You can use quality, the value customers and users perceive, to define success criteria for your projects. Once you've defined the success criteria, you can select a development life cycle for your project. A life cycle will help you plan which activities you do and when to create those quality characteristics.
Define Quality for Your Project
At the outset of a project, you can decide what you need to do by taking the following steps:
Define quality for this project. What combination of low defect levels, feature set, or time to market does this project need to meet?
Define release criteria based on your project's quality definition. How do you know when you're done?
Make a plan that gets you to the release criteria. How do you decide how much of which activities you will complete on this project?
Product quality criteria change over time. Your customers and previous product releases influence the current project's criteria. I use Geoffrey Moore's Crossing the Chasm (HarperBusiness, 1991) high-tech marketing model as a way to consider market forces, and combine that with the software project goals in Robert Grady's Practical Software Metrics for Project Management and Process Improvement (Prentice Hall, 1992) to come up with a framework for what quality means during a product's lifetime. According to my clients, corporate in-house products follow a similar lifetime.
Table 1 combines the market drivers with the project drivers to decide which project quality attributes are important and when. I use this methodology to help drive what quality means to my projects, and then decide which life cycle to use.
Enthusiasts want introductory software releases to do a particular thing well, right away. Early adopters need a specific problem fixed, and they want software that does those things reasonably well, right away.
Early adopters quickly become power users as they use the software.
Mainstream customers and users want the same content as the early adopters, but they need it to work better than the early adopters did, because they may not be as skilled in using software or in understanding how it works. They will wait for the software to be officially released, as long as they think it will work for their problem. They will also learn how to use commercial or corporate applications, but they may stay novice users.
Late majority customers will not buy a product or use an application unless you can demonstrate that all of its promised features work reliably. They may remain novice users. Skeptics may buy your product or use your application if you have a good track record with the late majority, and if they perceive your software has features they absolutely require to do their jobs.
Not only does the project have specific customer criteria for quality; there is also a bottom-line threshold for defects. Even when time to market is a higher priority than fixing all the defects, most companies prefer not to release damaging products. In the same way, there is an absolute bottom limit to features. A release has to offer something new, even if it is just a requirement not to erase disk drives. It might be a minimal requirement, but it exists. Unless management is desperate, it generally decides not to release a damaging or featureless product that could decrease market share or hurt customers. Release-ready software provides some feature or advantage to the company in the marketplace.
Define Release Criteria Based on Quality
Release criteria reveal your project's implicit and explicit critical keys to releasing or not: your project's definition of success or failure. You can define release criteria by deciding what is critical or special to your project. Maybe you have to release by a certain date, need a specific feature or set of features, or need to track and remove all known defects. Most likely, you have some combination of these concerns. I like to consider the project's time to market, performance, usability, installation, compatibility, defects, and other requirements for the software's release when defining release criteria.
I recently worked with a client, SmartStore Retail Software, that sells into an early adopter marketplace. It needed to have a usable product in a rapid time to market. The developers were used to developing products for the mainstream, where customers will wait to buy the right product. The testers were used to testing products for a late majority market. Neither group thought to test their assumptions against the project requirements. In addition, only some of the feature requirements were specified.
Until the organization developed and agreed upon release criteria, the different groups were frustrated with each other's assumptions about time to market, the feature set, and defect levels. The developers wanted to fix the defects from the previous release before adding new features. Management wanted some new features and some fixes. Testers wanted to focus on performance assessment, not testing new features and fixes. SmartStore could have avoided this impasse by defining requirements up front.
Some of SmartStore's final release criteria were:
Load tests 1, 2, and 3 run at least 5% to 20% faster than they did in the previous release.
Load tests 4, 5, and 6 run at least 20% to 40% faster than they did in the previous release.
There are no open, high-priority defects.
All data exchange functionality is complete (architected, designed, implemented, debugged, and checked in).
Ships by January 15 (this date meant something specific to this organization and it forced an aggressive project schedule).
For this organization's project, these criteria took the place of specific and traceable requirements. (I don't recommend ignoring requirements and focusing solely on release criteria.)
The criteria focused on getting a reasonable product to market quickly. We negotiated the release criteria with developers, testers, marketing, customer service, and senior management. We all had to agree on the characteristics so we could decide how best to accomplish our goals.
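Criteria this concrete lend themselves to an automated readiness check. As a sketch (the test names, data structures, and thresholds below are illustrative assumptions, not SmartStore's actual tooling), a script could compare each load test's run time against the previous release and count open high-priority defects:

```python
from dataclasses import dataclass

@dataclass
class LoadTestResult:
    name: str
    previous_secs: float   # run time in the previous release
    current_secs: float    # run time in the release candidate

def speedup_pct(r: LoadTestResult) -> float:
    """Percent improvement over the previous release."""
    return (r.previous_secs - r.current_secs) / r.previous_secs * 100

# Minimum required speedup per test, from the release criteria:
# tests 1-3 need at least 5%, tests 4-6 at least 20%.
REQUIRED = {"load1": 5, "load2": 5, "load3": 5,
            "load4": 20, "load5": 20, "load6": 20}

def unmet_criteria(results, open_high_priority_defects):
    """Return a list of unmet release criteria (empty list = ready)."""
    failures = [f"{r.name}: {speedup_pct(r):.1f}% speedup, "
                f"{REQUIRED[r.name]}% required"
                for r in results if speedup_pct(r) < REQUIRED[r.name]]
    if open_high_priority_defects > 0:
        failures.append(
            f"{open_high_priority_defects} open high-priority defects")
    return failures
```

The point of a check like this is not the code itself but that every group can see, unambiguously, whether the release candidate meets the negotiated criteria.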
I drafted the initial release criteria. At a project team meeting, I discussed the criteria with the developers and testers to make sure they agreed with me. We discussed each criterion, and whether we thought we could meet it by the ship date. The discussion grew heated, so we kept on track by asking these questions for each criterion:
Must we create this functionality or meet this performance target?
What is the effect on our customers if we do not create this functionality or meet this performance target?
By the end of the project team meeting, we agreed on the criteria, and I presented them to the operations committee (senior management, customer service, and marketing management). The committee wanted more functionality, but reluctantly agreed that we were creating a project that provided what they needed.
The developers and testers had to change their actions to create a reasonable product quickly. The developers could no longer just fix existing defects; they had to figure out a way to add more features quickly. Testers couldn't use traditional tools to assess performance; they had to speed up their work to assess performance in addition to testing new features and verifying fixes.
SmartStore had to achieve a specific, short time to market, with a feature set that included performance. Data-damaging defects were not acceptable, but some defects were OK. SmartStore selected a design-to-schedule life cycle. It completed the architectural design for the next few major releases, and prioritized the development work. The features were divided into three categories: must, should, and "walk the dog first." (Each priority was revisited during planning for the next release.) Some of the must work was reflected in the release criteria. Should work was not usually mentioned in the release criteria, and the "walk the dog first" work was not mentioned at all.
The testers and developers initially categorized and prioritized the work as they found and entered defects into the defect tracking system. Every day, the engineering management team reviewed and verified the priority of the open defects. We decided on each defect's priority based on how the defect affected the customer or affected our ability to meet the release criteria.
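The daily triage decision described above reduces to two questions. As an illustrative sketch (the impact levels and the rule itself are my assumptions, not SmartStore's documented policy), a defect's priority follows from whether it blocks a release criterion and how badly it affects customers:

```python
from enum import Enum

class Priority(Enum):
    HIGH = 1     # must fix before release
    MEDIUM = 2   # fix if time allows
    LOW = 3      # defer to a later release

def triage(blocks_release_criterion: bool, customer_impact: str) -> Priority:
    """Assign a priority from the two questions the team asked daily:
    does the defect block a release criterion, and how badly does it
    affect the customer ("severe", "moderate", or "minor")?"""
    if blocks_release_criterion or customer_impact == "severe":
        return Priority.HIGH
    if customer_impact == "moderate":
        return Priority.MEDIUM
    return Priority.LOW
```

Writing the rule down, even this informally, keeps a daily triage meeting from relitigating the same judgment calls.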
For each of the items in the must category, developers did design reviews, code inspections, and unit testing. The testers planned their testing (how much exploratory testing, how much test automation, and when to start regression testing) and had the developers review their test plans.
For the should category, the developers tried to do design reviews and code inspections, but more often they ran out of time. The testers tried to plan and develop regression tests, but they always ran out of time. "Walk the dog first" features were never planned or implemented.
SmartStore used a life cycle that helped it reach its release criteria. It knew what it would get out of development, and it got what it wanted.
Use a Life Cycle that Helps You Reach the Release Criteria
Your project's life cycle determines whether your project will be ideal or imperfect. When the life cycle does not match the project's quality priorities, the project becomes imperfect.
Different life cycles and techniques have varying effectiveness in terms of the goals of time to market, feature richness, or defect control. Table 2 is a comparison of several life cycles and techniques, their strengths, and their product quality priorities. For detailed descriptions of several software development life cycles, see Steve McConnell's Rapid Development (Microsoft Press, 1996).
No life cycle is truly appropriate for a first priority of low defect levels. What people usually mean by low defect levels is a feature set with an attribute of extremely high reliability. Especially in safety-critical or embedded systems, reliability is really part of the feature set. If the product doesn't work reliably, it just doesn't work.
When you decide on your first priority of quality (time to market, feature set, or low defect levels), you can deliberately choose the most appropriate life cycle to support that definition of quality. Sometimes you might choose a combination of life cycles to meet the mix of quality attributes needed at the time. In essence, you can tailor the life cycle to produce the kind of quality that matches the overall business requirements.
Providing Quality Applications
Not every project needs to perform all of the best software engineering practices equally. (Successful projects do spend enough time on design.) Especially when you face market pressure on time to market or feature set, choose which activities to perform and when. You may choose to test an application using only exploratory testing. You may choose to inspect only certain code. You may choose to review only some of the designs. As long as you know that you will not achieve perfection with these trade-offs but will still meet your success criteria, you will provide value to your customers and users with quality applications.
If you spend a little time defining your project's quality priorities, and then choose release criteria to reflect those priorities, you can select a life cycle for your project based on those priorities. When you choose that life cycle, you can create a working environment that makes for an ideal project.