Good Enough Knowledge of Software Quality

Code coverage, defect rate, and risk paint a fairly accurate view of the quality of a company's software-development process


December 18, 2006
URL:http://www.drdobbs.com/architecture-and-design/good-enough-knowledge-of-software-qualit/196700567

Compliance with Sarbanes-Oxley (SOX) and related regulations requires senior executives at public companies to re-examine their internal systems and controls. SOX compliance has focused on financial controls, accurate revenue recognition, and reliable balance sheet information, all of which are essential to sound corporate governance and risk minimization. But what are the real risk factors behind corporate performance and business success?

In spite of the ballyhooed scandals such as Enron and WorldCom that led to the SOX legislation, wrongdoing on that scale is comparatively rare. The greater risk -- and indeed the greater opportunity for consistency and transparency -- arises from prosaic day-to-day business activities. How an organization develops software is particularly important in this regard -- since the success of most organizations depends on how well they create and adapt the software on which their business processes or products depend. (Outsourcing is no panacea. You may outsource the work and outsource many of the headaches, but problems in the resulting software can still put your organization at risk.)

Most B2C companies today derive a substantial portion of their sales from their web site, so downtime at the web site because of software glitches can cause enormous losses. Consider, for example, that Amazon.com averaged almost $800,000 per hour in sales for 2004, so a glitch of even brief duration produces a significant loss. Glitches in business software can be equally devastating. In mid-2004, for example, Hewlett-Packard suffered very disappointing quarterly results due in large part to problems in supply-chain and order-processing software, according to senior management at the time. In other cases -- as in the outage at salesforce.com, a firm that hosts applications for other businesses -- the cost does not translate immediately into lost sales, but into a perception by existing and prospective customers of increased risk.

Whatever the cause, business stoppages due to software errors are very, very expensive. Large companies with multiple product lines can absorb such errors, but smaller firms sometimes suffer setbacks that take years to recover from completely.

Because the cost of these failures is so high, senior management is well served by systems that provide precise, objective, real-time data on the quality of the company's software and the effectiveness of its software-development activities.

Most companies today provide real-time data on internal processes only for their manufacturing activities, as this area has a long tradition of monitoring quality via real-time quantitative measures. However, the metrics from a factory floor have few counterparts in software development. This is due to the substantially different nature of the processes: Manufacturing aims to perform the same activity repeatedly with every result being the same, whereas software development attempts to create individual (and hence differentiated) deliverables with as few defects as possible. The question then arises: What metrics can senior management track to monitor the quality of in-house software? And how much data is really needed?

What To Know?

Traditionally, senior executives hand off the responsibility for monitoring software quality to the company CIO. They expect him/her to report periodically on the status of various projects and initiatives and to provide some assurance of software quality. Under new models of governance, however, this delegation of responsibility is being supplanted by a structure that requires the CIO to report software quality metrics on a regular basis to the company CFO.

The choice of metrics to monitor is important, as the data must correlate with software quality and it must be comprehensible to the CFO and other executives. Typically, three measures provide useful and intelligible quality-tracking monitors. The first is code coverage -- the percentage of the code exercised by the automated test suite. The second is the defect rate -- the number of defects found (and resolved) as the project progresses.

Before discussing the third measure, let's see where these two measures take us. Suppose you are a senior manager at a company that is about to deploy an important package developed in-house. In the traditional scenario, your level of confidence in the software probably depends mainly on your level of confidence in your CIO. All project data and explanations come from the CIO anyway, so your view into the project has already been conditioned by the IT reporting structure. At deployment time, you're forced to hope that the job's been done right.

Now consider the newer model in which you've been using a dashboard to track coverage and defect counts with precision. This information comes directly from automated processes, so it's the same raw data that IT sees. You have hard numbers and you have reports from the CIO about problems that have occurred along the way. You now have a basis of comparison with previous projects and a good sense of what the issues have been as the project moved ahead. You also have a feel for how much testing has occurred and the results of those tests. Aren't you in a much better position with respect to your knowledge of company systems and the likelihood of a successful project? A third metric can be used to give you even more data and increased confidence in your software.
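Coverage and defect data of this kind typically come straight from the build and test infrastructure. The sketch below is purely illustrative -- the file names, JSON fields, and comparison logic are assumptions, not a description of any particular dashboard product -- but it shows how raw numbers can be condensed into an executive-level summary.

# Illustrative sketch only: file names, JSON fields, and thresholds are assumptions.
import json

def load_metrics(path):
    """Load one build's metrics from a JSON export, e.g.:
    {"project": "order-entry", "coverage_pct": 87.5, "open_defects": 14}
    """
    with open(path) as f:
        return json.load(f)

def executive_summary(current, previous):
    """Condense raw coverage and defect numbers into a dashboard-style summary."""
    lines = [
        f"Project: {current['project']}",
        f"Test coverage: {current['coverage_pct']:.1f}% "
        f"(previous project: {previous['coverage_pct']:.1f}%)",
        f"Open defects:  {current['open_defects']} "
        f"(previous project at the same stage: {previous['open_defects']})",
    ]
    if current["coverage_pct"] < previous["coverage_pct"]:
        lines.append("NOTE: coverage is below the level reached on the prior project.")
    return "\n".join(lines)

if __name__ == "__main__":
    print(executive_summary(load_metrics("current_build_metrics.json"),
                            load_metrics("previous_project_metrics.json")))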

Cyclomatic Complexity     Risk Evaluation
1-10                      Simple module, little risk
11-20                     More complex, moderate risk
21-50                     Complex, high risk
Greater than 50           Untestable, very high risk

Table 1: The risk associated with ranges of cyclomatic complexity.

Projects should aim for 75 percent or more of their modules to have a complexity of less than 10, and should allow very few modules with a complexity above 20. Anything beyond these limits likely indicates convoluted code that is difficult to verify -- in other words, risky code. If that code sits in a heavily used routine, the risk increases commensurately.
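As a rough illustration of how these thresholds might be applied, the following sketch classifies modules into the Table 1 risk bands and checks the 75-percent guideline. The complexity figures are assumed to come from whatever static-analysis tool the team already runs; the module names and numbers are hypothetical.

# Illustrative sketch: complexity values are assumed to come from an existing
# static-analysis tool; the module names and numbers below are hypothetical.

def risk_band(complexity):
    """Map a cyclomatic-complexity value to the risk bands of Table 1."""
    if complexity <= 10:
        return "little risk"
    if complexity <= 20:
        return "moderate risk"
    if complexity <= 50:
        return "high risk"
    return "very high risk"

def complexity_report(module_complexities):
    """Check the guideline: 75 percent or more of modules under 10, very few above 20."""
    total = len(module_complexities)
    under_10 = sum(1 for c in module_complexities.values() if c < 10)
    print(f"Modules with complexity < 10: {under_10}/{total} "
          f"({100.0 * under_10 / total:.0f}%; guideline is 75% or more)")
    for module, c in sorted(module_complexities.items(), key=lambda kv: -kv[1]):
        if c > 20:
            print(f"  {module}: complexity {c} ({risk_band(c)}) -- review or refactor")

# Hypothetical per-module complexity figures
complexity_report({
    "billing.calculate_invoice": 34,
    "orders.validate_cart": 12,
    "orders.apply_discount": 7,
    "auth.check_session": 4,
})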

These three measures -- code coverage, defect rate, and risk -- paint a fairly accurate view of the quality of a company's software-development process. They are, however, only the basics. Other measures, such as load-test results and open issues in the defect-tracking system, can fill out the picture even more.

It's important to note that tracking these measures is not a guarantee of success; no metric or group of metrics guarantees ultimate quality. A program can enjoy 100 percent test coverage, no defects, no open issues, and simple code routines, yet still fail badly. Such failures, however, tend to be rare, and statistically, projects with less-favorable metrics fail more often. So the measures do provide visibility into the overall quality of software development at the firm.

Their value increases when metrics on current projects can be compared with those of past projects. So, to establish the maximum context for the results, the data should be retained and used to validate new numbers.
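One simple way to keep that historical context, assuming each project's final metrics are archived, is to compare a new project's numbers against the historical averages. The record format, field names, and figures in this sketch are assumptions chosen for illustration.

# Illustrative sketch: compares a new project's metrics against archived history.
# The record format, field names, and figures are assumptions, not a prescribed schema.
from statistics import mean

history = [  # hypothetical final metrics from past projects
    {"project": "2004-portal",    "coverage_pct": 81, "defects_per_kloc": 2.9},
    {"project": "2005-billing",   "coverage_pct": 85, "defects_per_kloc": 2.1},
    {"project": "2005-reporting", "coverage_pct": 88, "defects_per_kloc": 1.7},
]

def validate_against_history(current, history):
    """Flag any metric that is worse than the historical average."""
    avg_cov = mean(p["coverage_pct"] for p in history)
    avg_def = mean(p["defects_per_kloc"] for p in history)
    if current["coverage_pct"] < avg_cov:
        print(f"Coverage {current['coverage_pct']}% is below the historical "
              f"average of {avg_cov:.1f}% -- worth asking why.")
    if current["defects_per_kloc"] > avg_def:
        print(f"Defect rate {current['defects_per_kloc']} per KLOC exceeds the "
              f"historical average of {avg_def:.1f}.")

validate_against_history(
    {"project": "2006-order-entry", "coverage_pct": 83, "defects_per_kloc": 2.4},
    history,
)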

Measuring Quality as a Sales Tool

So far, we've been examining the use of quality metrics as a bottom-line activity. We presented monitoring of development processes as a means of reducing the costs of preventable bugs. However, these metrics also have a top-line aspect: Showing prospective customers your metrics enables them to see your commitment to quality. My company, Agitar Software, for example, posts the results of tests run on internal projects and products directly to our web site. We have found that doing so gives our customers considerably more confidence in the reliability of our products and it provides leverage against competitors: Why aren't they posting their results? Could it be because they don't perform rigorous testing or because they don't want customers to see the results? Either way, quality talks, and if competitors have nothing to say, then the onus to prove quality is squarely on them.


Jerry Rudisin is CEO of Agitar Software.
