Software security is not security software. Attaining software security means applying a number of lightweight best practices throughout the software lifecycle more than it means the application of sundry security features like cryptography. Such best practices (or touchpoints) allow software professionals to build the emergent property of security into software.
A security problem is more likely to arise because of a problem in a standard-issue part of your system (say, the interface to the database module) than in some given security feature. This is an important reason why software security must be part of a full lifecycle approach.
Software security touchpoints are based on good software engineering and involve explicitly pondering security throughout the software lifecycle. This means knowing and understanding common risks (including language-based implementation bugs and architectural flaws), designing for security, and subjecting all software artifacts to thorough, objective risk analyses and testing. "Software Security Touchpoints" specifies one set of touchpoints and shows how software practitioners can apply them to the various software artifacts produced during software development. This means understanding how to work security engineering into requirements, architecture, design, coding, testing, validation, measurement and maintenance.
Seven Terrific Touchpoints
These lightweight best practices are applied to various software artifacts. The practices are numbered according to effectiveness and importance. Note that by referring only to software artifacts, we can avoid religious battles over any particular process.
1. All software projects produce at least one artifact: source code. At the code level, the focus is on implementation bugs, especially those discoverable by static analysis tools that scan source code for common vulnerabilities. Code review is a necessary practice for achieving secure software, but it is not sufficient. Security bugs (especially in C and C++) are a real problem, but architectural flaws wreak just as much havoc. Just as you can't test quality into software, you can't bolt security features onto code and expect it to become hack-proof. Security must be built in throughout the application development lifecycle.
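The canonical implementation bugs are C and C++ buffer overflows, but the same class of bug exists in every language. As a minimal sketch of what a source-scanning static analysis tool flags (the function names here are illustrative, not from any particular tool), consider untrusted input interpolated into a shell command:

```python
import shlex

# Vulnerable pattern: untrusted input concatenated into a shell command.
# Static analysis tools flag this as a command-injection risk, because
# an input like "x; rm -rf /" smuggles a second command along.
def build_cmd_unsafe(filename: str) -> str:
    return "cat " + filename

# Hardened version: quote the argument so shell metacharacters are inert.
def build_cmd_safe(filename: str) -> str:
    return "cat " + shlex.quote(filename)

# The ';' is neutralized inside the quoted argument:
# build_cmd_safe("report; rm -rf /") -> "cat 'report; rm -rf /'"
```

A tool that merely scans for this pattern says nothing about whether the surrounding architecture should be shelling out at all; that question belongs to the risk analysis in touchpoint 2.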
2. At the design and architecture level, a system must be coherent and present a unified security front. Designers, architects and analysts should clearly document assumptions and identify possible attacks. At both the specifications-based architecture stage and at the class-hierarchy design stage, risk analysis is a necessity. At this point, security analysts uncover and rank architectural flaws so that mitigation can begin. Disregarding risk analysis at this level leads to costly problems down the road. Note that risks crop up during all stages of the software lifecycle, so a constant risk analysis thread, with recurring risk tracking and monitoring activities, is highly recommended.
3. Penetration testing is also useful, especially if an architectural risk analysis is driving the tests. It provides a good understanding of fielded software in its real environment, but any such testing that doesn't take the software architecture into account probably won't uncover anything interesting about software risk. Software that fails during the kind of canned black-box testing practiced by prefab application security testing tools is truly bad. Thus, passing a low-octane penetration test reveals little about your actual security posture, but failing a canned penetration test indicates that you're in very deep trouble indeed.
4. Security testing must encompass two strategies: testing security functionality with standard functional testing techniques, and risk-based security testing based on attack patterns. A good security test plan does both. Security problems aren't always apparent, even when you probe a system directly, so standard-issue quality assurance is unlikely to uncover all critical security issues.
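The two strategies can sit side by side in one test module. In this sketch the validator under test and the attack strings are hypothetical, chosen only to illustrate the split between functional checks and attack-pattern-driven checks:

```python
import re

# Hypothetical validator under test: accepts only simple filenames
# (letters, digits, dot, underscore, hyphen) with no traversal sequence.
def is_safe_filename(name: str) -> bool:
    return re.fullmatch(r"[A-Za-z0-9._-]+", name) is not None and ".." not in name

# Strategy 1 -- functional testing: legitimate inputs are accepted.
assert is_safe_filename("report.txt")
assert is_safe_filename("2024-q3_summary.csv")

# Strategy 2 -- risk-based testing driven by attack patterns:
# path traversal, embedded NUL, and alternate path separators.
for attack in ["../../etc/passwd", "a/../b", "file\x00.txt", "c:\\boot.ini"]:
    assert not is_safe_filename(attack)
```

Note that the second half of the test plan only exists because someone asked "how would an attacker abuse this input?", which is exactly what the abuse cases in touchpoint 5 are for.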
5. Building abuse cases is a great way to get into the mind of the attacker. Similar to use cases, abuse cases describe the system's behavior under attack; building abuse cases requires explicit coverage of what should be protected, from whom, and for how long.
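One lightweight way to make that coverage explicit is to record each abuse case as structured data. The format below is a sketch of my own devising, not one prescribed here; its fields simply mirror the what, from whom, and for how long questions:

```python
from dataclasses import dataclass

@dataclass
class AbuseCase:
    asset: str               # what should be protected
    attacker: str            # from whom
    protection_window: str   # for how long
    attack_description: str  # the system's behavior under attack

# Example abuse case (illustrative content):
session_hijack = AbuseCase(
    asset="authenticated user session",
    attacker="network eavesdropper",
    protection_window="lifetime of the session",
    attack_description="replay a captured session token to impersonate the user",
)
```

Writing the cases down this way makes gaps obvious: an empty attacker field or an unbounded protection window is a question for the design team, not a detail to be discovered in the field.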
6. Security must reside explicitly at the requirements level. Good security requirements cover both overt functional security (say, the use of applied cryptography) and emergent characteristics (best captured by abuse cases and attack patterns).
7. Battle-scarred operations personnel carefully monitor fielded systems during use for security attacks. Attacks do occur, regardless of the strength of design and implementation, so monitoring software behavior is an essential defensive technique. Knowledge gained by understanding attacks and exploits should be cycled back into software development.
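As a toy illustration of what such monitoring looks like in code (the threshold, function name, and addresses are assumptions for the sketch, not a prescription), a fielded system might flag a source once its failed logins cross a threshold:

```python
from collections import Counter

THRESHOLD = 5                 # illustrative: failures before we flag a source
failures: Counter = Counter()  # failed-login count per source address

def record_failed_login(source_ip: str) -> bool:
    """Record one failed login; return True when the source should be
    flagged for investigation."""
    failures[source_ip] += 1
    return failures[source_ip] >= THRESHOLD

# Four failures pass quietly; the fifth trips the alarm.
for _ in range(4):
    assert not record_failed_login("203.0.113.9")
assert record_failed_login("203.0.113.9")
```

A real deployment would add time windows, alerting, and persistence, but even this skeleton captures the feedback loop: flagged sources become attack data that flows back into the abuse cases and risk analyses of earlier touchpoints.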
Bonus touchpoint. External analysis (outside the design team) is often a necessity when it comes to security. Software security touchpoints are best applied by people not involved in the original design and implementation of the system.
Unfortunately, software architects, developers and testers remain largely unaware of the software security problem. The good news? Technologists and commercial vendors all acknowledge that the software security problem exists. The bad news? We have barely begun to apply solutions. But if you apply the seven terrific touchpoints outlined here, you'll be making a solid start toward secure software.
Gary McGraw is CTO of Cigital and a world authority on software security. His real-world experience is grounded in years of consulting with major software producers. He serves on the technical advisory boards of Authentica, Counterpane and Fortify Software. Dr. McGraw is coauthor of Exploiting Software (Addison-Wesley, 2004), Building Secure Software (Addison-Wesley, 2001), Java Security (John Wiley & Sons, 1996) and four other books. Contact him at firstname.lastname@example.org.