To illustrate the kinds of flaws that advanced static analysis is capable of detecting, take a look at the report in Figure 1. All examples are distilled from real flaws found in production software.
The code in red is on the path to the error, and the column on the left shows the conditions that must hold for the error to occur. Code with a yellow background is where the error occurs, and code with a green background indicates that something directly relevant to the error occurred at that line. In this report, the analysis has found that the variable named state may be null (the condition <= 4095 indicates that it lies in the zeroth page of virtual memory). Following the path back, you can see that this value was returned by a call to acquire_state on line 24. Note also that the conditional on line 26 was not true, indicating that the analysis has deduced that the value of acquire_err is REG_NOERROR along this path. That variable is assigned inside the call to acquire_state. To see what happens in there, refer to Figure 2, where you can see that on the path taken, *err (which is passed back as acquire_err at the call site) is indeed REG_NOERROR, and the return value is NULL. Had this path been exercised by a test, it would have resulted in a program crash.
This little example illustrates a few key points about this kind of analysis:
- It is interprocedural. It is aware of the call graph and which values can come back when a function is called.
- It is path sensitive and also knows about the relationships among variables. There are other paths where a call to that function returns NULL, and more where REG_NOERROR is passed back in err, but no other paths where both happen simultaneously. This is important: an analysis that noted that the function could return NULL and then flagged all possible dereferences of the return value as errors would produce too many false positives.
- The analysis is whole-program: it analyzes all the code at once, not just one file at a time.
- Finally, it doesn't require any extra input, either in the form of test cases or annotations.
It is trivially easy to write a static-analysis tool that finds all the bugs in your program: one that reports every line as a bug satisfies this criterion. Similarly, it is trivial to write a tool that never reports a false positive: one that tells you that all lines are bug-free will suffice. Obviously, neither tool is useful. The real measure of the effectiveness of a static-analysis tool is how well it balances the false-positive rate against the false-negative rate. For almost all nontrivial programs, all serious static-analysis tools that attempt to find bugs report some false positives, and none is capable of finding all of the bugs in such programs.
Too many false positives mean that you may spend too much time sifting through the chaff looking for the real bugs. This robs your development effort of resources that might be better spent finding bugs through other methods. A high false-positive rate has a subtle psychological implication, too: As it increases, users are less likely to trust the results and are more likely to erroneously tag a true positive as a false positive.
Different kinds of bugs merit different levels of effort to find, depending on the amount of risk they carry. A buffer overrun in a medical device may be life threatening, whereas a leak in a game controller that means it must be reset once a day is very low risk. The amount of risk determines the false-positive rate that users are prepared to accept. In practice, we have found that for serious classes of flaws, such as buffer overruns and null pointer dereferences, users are often prepared to accept a false-positive rate of as much as 75 to 90 percent. For less risky classes, a false-positive rate of more than 50 percent is usually considered unacceptable.