One of the most well-known experts in software engineering, Capers Jones, has built up an extensive database of metrics covering more than 20,000 projects, many of them quite large. Armed with this data, he has written frequently about which activities and approaches work in practice, how much lift, if any, they actually provide, and what they cost. In this guest editorial, he takes an informal look at how some popular "laws" of programming and business hold up against the reality of software development. Ed.
Boehm's Second Law
Prototyping significantly reduces requirements and design errors, especially for user interfaces.
This law, formulated by Barry Boehm, is supported by the data. A caveat is that prototypes are typically about 10% of the size of the planned system. For an application of 1,000 function points, the prototype would be about 100 function points and easily built. For a massive application of 100,000 function points, however, the prototype would itself be a large system of 10,000 function points. This leads to the conclusion that large systems are best built using incremental development when possible.
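The sizing arithmetic above can be sketched in a few lines. The function name is hypothetical, and the 10% ratio is the rule of thumb stated here, not a fixed industry constant:

```python
def prototype_size(system_fp, ratio=0.10):
    """Estimate prototype size in function points.

    Assumes the rule of thumb above: a prototype is roughly
    10% of the size of the planned system.
    """
    return system_fp * ratio

# A 1,000-FP application needs only a ~100-FP prototype,
# but a 100,000-FP system implies a 10,000-FP prototype:
# a large project in its own right.
for size in (1_000, 100_000):
    print(f"{size:>7,} FP system -> ~{prototype_size(size):,.0f} FP prototype")
```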
Brooks's Law
Adding people to a late software project makes it later.
This law from Fred Brooks is one of the most famous in computing, and it's supported to a certain degree by the evidence. The complexity of communication channels grows with application size and team size, and the larger the application, the harder it is to recover from schedule delays by any means. For small projects with fewer than five team members, adding one more experienced person will not stretch the schedule, but adding a novice will. Large applications with more than 100 team members almost always run late due to poor quality control and poor change control, and adding people tends to slow them down further because of training overhead and complex communication channels.
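The growth in communication channels that drives this effect is the standard pairwise formula n(n-1)/2; a minimal sketch shows how quickly it explodes with team size:

```python
def communication_channels(team_size):
    # Pairwise channels among n people: n * (n - 1) / 2
    return team_size * (team_size - 1) // 2

# 5 people share 10 channels; 100 people share 4,950.
for n in (5, 10, 50, 100):
    print(f"{n:>3} people -> {communication_channels(n):>5,} channels")
```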
Conway's Law
Any piece of software reflects the organizational structure that produced it.
The data tends to support this law. A further caveat is that the size of each software component tends to be designed to match the size of the team assigned to work on it. Since many teams contain eight people, even very large systems may be decomposed into components sized for eight-person departments, which may not be optimal for the overall architecture of the application.
Cunningham's Law of Technical Debt
Shortcuts and carelessness during development to save money or time lead to downstream expenses called "technical debt" that may exceed the upstream savings.
Empirical data supports the basic concept that early shortcuts lead to expensive downstream repairs. Ward Cunningham's technical debt concept is a great metaphor, but not such a great metric. Technical debt omits projects canceled due to poor quality. Since about 35% of large systems are never finished, this is a serious omission: these failed projects have huge costs, but zero technical debt, because they are never delivered. Technical debt also omits the costs of litigation and damage payments for poor quality. I worked as an expert witness in a lawsuit over poor quality control in which the damage award to the plaintiff was more than 1,000 times larger than the technical debt of fixing the bug itself.
Hartree's Law
Once a software project starts, the schedule until it is completed is a constant.
Empirical data supports this law, attributed to Douglas Hartree, for average or inept projects that are poorly planned. For projects that use early risk analysis and have top teams combined with effective methods, the law is not valid. In other words, it applies to about 90% of projects, but not to the top 10%.
Jones's Law of Programming Language Utility #3
In every decade, less than 10% of the available programming languages are used to code over 90% of all software applications created during that decade.
I originally formulated this rule, and it's supported by a steady stream of empirical data from 1965 through 2014. The popularity of a programming language is transient, averaging less than 3.5 years from the burst of initial popularity until the language starts to fade. A few languages, such as Objective-C, used by Apple, persist for many years. Why programming languages come and go is not fully understood, nor is why a few of them persist.
Jones's Law of Software Defect Removal
Projects that are above 95% in defect removal efficiency are faster and cheaper than projects that are below 90% in defect removal efficiency.
Empirical data from about 20,000 projects supports this law. Projects that use only testing are normally below 90% in defect removal efficiency and usually run late due to stretching out the test interval. Similar projects that use inspections and static analysis before testing can top 99% in defect removal efficiency and also are usually on time or early, assuming rational schedule planning in the first place. Poor defect removal efficiency is the main reason for schedule slippage.
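Defect removal efficiency itself is a simple ratio: defects removed before release divided by total defects found. (Jones typically counts post-release defects over an initial production window, commonly the first 90 days; treat that convention as an assumption here.)

```python
def defect_removal_efficiency(pre_release, post_release):
    """Fraction of all known defects removed before release."""
    return pre_release / (pre_release + post_release)

# 950 defects found before release and 50 reported afterward
# puts the project at exactly the 95% threshold named above.
dre = defect_removal_efficiency(950, 50)
print(f"DRE = {dre:.0%}")
```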
Lehman/Belady Laws of Software Evolution
Software must be continuously updated or it becomes less and less useful.
Software entropy or complexity increases over time.
These laws by Dr. Meir Lehman and Dr. Laszlo Belady of IBM were derived from a long-range study of the company's OS/360 operating system, and I have independently confirmed them. The first law is intuitively obvious, but the second is not. Continuous modification of software to fix bugs and make small enhancements tends to increase cyclomatic complexity over time and, hence, the entropy or disorder of the software. This, in turn, slows maintenance work and may require additional maintenance personnel unless replacement or restructuring occurs. Software renovation and restructuring can reverse entropy.
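The rising complexity the second law describes is conventionally measured with McCabe's cyclomatic complexity, M = E - N + 2P, computed from the edges, nodes, and connected components of a routine's control-flow graph. The graph sizes below are invented for illustration:

```python
def cyclomatic_complexity(edges, nodes, connected_components=1):
    """McCabe's cyclomatic complexity: M = E - N + 2P."""
    return edges - nodes + 2 * connected_components

# A straight-line routine (no branches) has M = 1.
print(cyclomatic_complexity(edges=7, nodes=8))

# The same routine after years of patched-in if/else branches:
# more edges relative to nodes, so M (and entropy) climbs.
print(cyclomatic_complexity(edges=14, nodes=11))
```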
Senge's Law
Faster is slower.
Peter Senge observed for business in general that attempts to speed up delivery of a project often make it slower, and the phenomenon holds for software. Common mistakes made when trying to speed up projects include omitting inspections and truncating testing; these tend to stretch out development, not shorten it. Hasty collection and review of requirements, jumping into coding before design, and ignoring serious problems are all practices that backfire and make projects slower. To optimize development speed, quality control, including inspections and static analysis prior to testing, is valuable.
Wirth's Law
Software performance gets slower faster than hardware speed gets faster.
This law was formulated by Niklaus Wirth in the days of mainframes and seemed to hold for them. For networked microprocessors and parallel computing, however, the law no longer seems to hold.
Programming productivity doubles every six years.
My data shows that programming productivity resembles a drunkard's walk, in part because application sizes keep getting larger. However, if you strip out requirements and design and concentrate only on pure coding tasks, then the law is probably close to accurate. Certainly, modern languages such as Java, Ruby, Go, and C# support faster coding than older languages such as assembly and C. There is a caveat, however: raw coding speed is not the main factor. The main factor is that modern languages require less unique code for a given application, due in part to more reusable features.
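The "less unique code" point can be illustrated with lines-of-code-per-function-point ratios. The numbers below are rough illustrative values in the spirit of Jones's published "backfiring" tables; exact figures vary by source and language version:

```python
# Approximate LOC per function point; illustrative values only.
LOC_PER_FP = {"assembly": 320, "C": 128, "Java": 53, "C#": 54}

def loc_for_app(language, function_points):
    """Estimate source size for an application of a given FP count."""
    return LOC_PER_FP[language] * function_points

# The same 1,000-FP application needs far less code in Java
# than in assembly, regardless of raw typing speed.
for lang in LOC_PER_FP:
    print(f"{lang:>8}: ~{loc_for_app(lang, 1_000):,} LOC")
```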
Capers Jones has written more than a dozen books on software quality, best practices, estimation, and measurement. His past contributions to Dr. Dobb's include "Chronic Requirements Problems" and "Get Software Quality Right."