Johanna Rothman is a software development consultant who focuses on requirements and other software life-cycle issues. Dr. Dobb’s editor Jon Erickson recently talked with her about software quality.
Dr. Dobb's: What's "software quality"?
Rothman: I like Jerry Weinberg's notion of "quality is value to someone." That helps me see who the someones are and what they value. For example, some people want more features. Some people want more defects fixed. Some people want the product faster. Some want all three! When you start a project, you make decisions about which customers you want to please first. This is a business decision. Capers Jones' definition is more concrete: "software that combines the characteristics of low defect rates and high user satisfaction."
If you are looking to retain customers, maybe you focus on fixing problems and adding just a few new features. If you're trying to attract new customers, you focus on adding new features and making sure you haven't broken anything else.
But when you think about your "someones" and what they value, you think about who your primary customer is for now, for this release, and you define quality for that person or those people.
Dr. Dobb's: What's your favorite quality metric?
Rothman: When I'm using Agile, or defining requirements with user stories, I like acceptance tests so we get the "music and the words", the intent of the story. I use burn-up velocity charts in any incremental life-cycle. They show total requirements, requirements done, and requirements remaining to do. Seeing that trend really helps management realize when the project will be done, assuming we care about quality.
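The burn-up data Rothman describes can be sketched in a few lines. This is a minimal illustration, not from the interview: the requirement counts and the velocity-based projection are hypothetical, showing how total, done, and remaining trend over iterations.

```python
# Illustrative burn-up data: total requirements, cumulative requirements done,
# and requirements remaining per iteration. All numbers are made up.

iterations = [1, 2, 3, 4]
total_requirements = [50, 52, 55, 55]  # scope can grow between iterations
done_cumulative = [8, 18, 27, 38]      # requirements finished so far

for it, total, done in zip(iterations, total_requirements, done_cumulative):
    remaining = total - done
    print(f"iteration {it}: total={total} done={done} remaining={remaining}")

# Project completion from average velocity (requirements done per iteration).
velocity = (done_cumulative[-1] - done_cumulative[0]) / (len(iterations) - 1)
iterations_left = (total_requirements[-1] - done_cumulative[-1]) / velocity
print(f"velocity ~ {velocity:.1f}/iteration, about {iterations_left:.1f} iterations left")
```

Seeing the "done" line trend toward the "total" line is what lets management estimate when the project will finish.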
If I'm not using Agile, the project team tends to incur more technical debt, so I like "Fault Feedback Ratio" -- the ratio of bad fixes to total fixes. That tells you if you're making progress. I also look at cost to fix a defect, if I'm trying to decide if it's worth fixing problems before release. It always costs more to fix post-release, and you have to look at the business value of fixing before release vs. post-release. Sometimes, the potential revenue is worth the cost of fixing post-release.
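The Fault Feedback Ratio can be computed as a simple fraction. This is a minimal sketch with hypothetical fix records; a "bad" fix here is one that was reopened or introduced a new defect.

```python
# Illustrative fix log -- each entry records whether the fix turned out bad
# (reopened, or caused a new defect). The records are made up.
fixes = [
    {"id": 101, "bad": False},
    {"id": 102, "bad": True},   # this fix introduced a new defect
    {"id": 103, "bad": False},
    {"id": 104, "bad": False},
]

bad = sum(1 for f in fixes if f["bad"])
ffr = bad / len(fixes)
print(f"FFR = {ffr:.2f}")  # lower is better; a rising FFR signals churn
```

A team tracking this over time is watching the trend, not the single number: if the FFR climbs, fixes are creating as much work as they remove.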
Dr. Dobb's: Why does quality software seem so hard to achieve?
Rothman: Quality is not just quantitative, it's qualitative. That's why I like release criteria for projects, so we can agree at the beginning what done means and how to get there. When I use release criteria, I may have quantitative criteria about the number of defects -- I often do. But I often have scenario-based criteria. Something that says, "This scenario runs <with some performance> or <some reliability>."
At first glance, those seem quantitative. They are. And, we chose them based on who the customers were. We, the project team, don't define release criteria as the acceptance criteria for all the features. We are purposefully singling some out, saying "This/These are more important than the other criteria." That's why it's qualitative, not just quantitative.
Dr. Dobb's: What are the most effective testing methods for improving quality?
Rothman: I really like test-driven development because the quality is in the design. Not every team is ready for TDD. I think of TDD as an advanced skill. It's difficult to think ahead with tests, when all of our professional lives we've been trained to think about design and then test! Without TDD, I'm a huge fan of unit testing, feature testing, and integration testing from developers. If the developers are ready to pair, then pairing: it improves people's ability to see the whole system as well as the quality of the code. If people aren't ready for pairing, then peer review of some sort.
Of course, I believe in automating regression tests (from under the GUI) so you don't have to run the same boring test over and over. Exploratory testing is good for systems where you have a bunch of automated system-level regression tests. It's also good for figuring out where to automate.
I'm actually a fan of testing all the time. Long ago, a client asked me how much time the developers should spend in development vs. testing. I said 50-50, assuming that peer review was part of testing. I now think I underestimated that. I suspect that with continuous testing all up and down the chain, developers spend closer to 70% testing, which feels about right to me.
Dr. Dobb's: Does Agile really make a difference?
Rothman: Agile makes it easier to get feedback, which is a Good Thing. It's so hard to see where you are in a waterfall or phase-gate lifecycle. It's a little easier in an iterative lifecycle, such as spiral, or incremental, such as staged delivery. But the thing that Agile really brings to teams is this notion of being done at the end of an iteration.
Knowing that you've finished something, really honestly finished it, is such a wonderful feeling. For product developers, it makes you want to do the next thing, because you really finished the part you were working on. For the people who fund the project, it's a feeling of "Yes, they will finish in my lifetime!" That helps them trust the team, so the trust increases on both sides. Then, when something happens, the two sides can work together to solve it.