Beyond The Requirements
Most security vulnerabilities aren't requirements violations; rather, they come from incomplete requirements. For example, a simple requirement takes the form of "when a user applies input x, the software should produce output y." It's easy to test such requirements: Apply input x, then look for y.
Verifying requirements is critical, but security testing looks beyond the intended behaviors set out in requirements. Behind those behaviors may lie many unspecified assumptions and constraints. For example, if the input or output is sensitive (say, a credit card number), storing it even temporarily in an insecure place is a bad idea. Developers may overlook this and make design and implementation choices that leave the software at risk. To instill security quality into a project, testers use fuzzing and similar techniques to look beyond stated requirements and get answers to questions like:
- What isn't the system supposed to do?
- How should inputs, functionality, and data be restricted?
- Are security features correct and is functional code secure?
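The contrast between verifying a requirement and probing beyond it can be sketched in code. The `parse_amount` function and its inputs below are purely illustrative assumptions, not anything from a real system; the point is the two kinds of test that follow it.

```python
# Hypothetical function under test: parses a currency amount like "19.99"
# into cents. The name and behavior are assumptions for this sketch.
def parse_amount(text):
    if not isinstance(text, str) or len(text) > 16:
        raise ValueError("rejecting oversized or non-string input")
    dollars, _, cents = text.partition(".")
    if not dollars.isdigit() or (cents and not cents.isdigit()):
        raise ValueError("rejecting non-numeric input")
    return int(dollars) * 100 + int(cents.ljust(2, "0")[:2] or "0")

# Requirements test: apply input x, look for output y.
assert parse_amount("19.99") == 1999

# Security tests ask what the system should NOT do: hostile inputs
# must be rejected cleanly, never crash or pass through unvalidated.
for hostile in ["", "A" * 10_000, "1e309", "'; DROP TABLE--", "\x00\x00"]:
    try:
        parse_amount(hostile)
    except ValueError:
        pass  # rejected cleanly -- the desired outcome
```

The first assertion is what a traditional requirements test checks; the loop is the kind of negative, restriction-focused probing the questions above call for.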
Vulnerabilities can be introduced at any stage of development. A key security requirement might be poorly defined or missing. An architectural weakness could get introduced during design. Developers may use a vulnerable function to process user input. Security testing tools can check for these vulnerabilities, but there are limitations.
Over the past few years, several classes of security testing tools have emerged. These tools, some of which incorporate fuzz testing, have taken on much of the security side of testing. While they tend to find certain classes of vulnerabilities, they miss many others. They aren't substitutes for security testing; rather, like a pair of pliers, they extend your reach when testing software. Bottom line: it's people thinking creatively about how to misuse or abuse software who drive security testing. For example, a source-code scanner or security-aware compiler can look for functions and constructs that commonly result in vulnerabilities, alerting you that the code may have a weakness even when the syntax is essentially correct. While these tools can be useful during development and testing, they see only part of the picture.
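A toy version of that kind of pattern matching shows both the value and the limits: the scanner flags syntactically valid code because it uses a function that commonly causes buffer overflows. The pattern list and the C snippet are illustrative assumptions, far simpler than what real scanners do.

```python
import re

# Minimal sketch of a source-code scanner: flag calls to C functions
# that commonly lead to buffer overflows. Real tools parse the code
# and track data flow; this only matches textual patterns.
RISKY = re.compile(r"\b(strcpy|strcat|sprintf|gets)\s*\(")

def scan(source):
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        match = RISKY.search(line)
        if match:
            findings.append((lineno, match.group(1)))
    return findings

# The C code below is syntactically correct, yet still gets flagged.
sample = "void f(char *in) {\n    char buf[16];\n    strcpy(buf, in);\n}"
print(scan(sample))  # [(3, 'strcpy')]
```

Note what the sketch cannot see: whether `in` is ever attacker-controlled at runtime, which is exactly the gap the next paragraph describes.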
Most applications are highly interconnected and have lots of interaction with other software. Statically scanning the code of one application or module doesn't give a complete view of how a running application will respond to hostile input. Some dynamic security tools exist to help bridge the gap. Application scanners, which primarily focus on Web apps, create input strings that can reveal potential vulnerabilities and then look for symptoms of failure in the responses. These tools can be good at finding low-hanging fruit and tend to focus on some of the more common vulnerabilities.
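The probe-and-observe loop of such a scanner can be illustrated in miniature. Here `handle_request` stands in for a live Web app, and the probe strings and symptom list are assumptions chosen for the sketch; a real scanner sends its payloads over HTTP and checks far more symptoms.

```python
# Toy dynamic scanner: send crafted inputs to a running app, then look
# for symptoms of failure in what comes back.
PROBES = ["'", "<script>alert(1)</script>", "../../etc/passwd", "A" * 4096]
SYMPTOMS = ["traceback", "sql syntax", "<script>alert(1)</script>"]

def handle_request(query):
    # Naive echo endpoint that reflects input without escaping it --
    # a classic cross-site-scripting risk, included for illustration.
    return "<html>You searched for: %s</html>" % query

def probe(app):
    findings = []
    for payload in PROBES:
        body = app(payload).lower()
        for symptom in SYMPTOMS:
            if symptom in body:
                findings.append((payload, symptom))
    return findings

print(probe(handle_request))  # flags the reflected <script> payload
```

Because the app echoes input verbatim, the script payload comes back intact and is flagged; the other probes produce no recognized symptom, which is also how real scanners miss vulnerabilities outside their pattern lists.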
We were about 13 sodas into our discount scheme when we discovered another design choice that made the soda machine flaw far worse. It surfaced when the machine ran out of sodas and a little red "Empty" light came on. Not willing to take a loss on the 40 cents we'd just invested, we pressed the "Coin Return" button. We assumed the machine would give us back the four Bahamian 10-cent pieces. However, pressing the button returned four U.S. quarters -- a 150% return on our investment! This was a definite upgrade over free sodas.
On its own, the coin-return design was likely a good one based on mechanics and convenience. The developer relied on the assumption that the rest of the system worked as intended, including that it correctly identified objects as quarters. The coin-return design choice by itself wasn't a security vulnerability, but it severely increased the impact of the machine's existing problem: mixing up U.S. quarters and Bahamian 10-cent pieces. To have a good approach to systems security, you have to practice the principle of defense-in-depth, which forces design choices that create a safety net around potential problems. Problems like this speak to the need for a more holistic approach to security testing.
Software security quality should be woven throughout the development process. It begins in requirements and design, is propagated through development and testing, and continues into deployment and support. The good news is that there are lots of resources. Microsoft has made many of its processes and tools available from its Security Development Lifecycle, a process that provides security and privacy throughout the development process. Likewise, SAFECode is synthesizing and distributing security best practices from some of the world's largest software makers.
The various secure development methodologies have some common themes, with the need for education perhaps being the biggest. The term "education" isn't exactly right -- what's really needed is re-education so developers understand that security isn't a natural outcome of traditional quality-assurance processes. Building secure software takes focused work and a different mind-set. In a few years, if all goes well, perhaps we'll be beyond worrying about security quality because security will be woven into software in the same way that reliability is. But until that happens, I still have a few more sodas to buy.