When sheet metal is formed into a car body, a massive stamping machine presses the metal into shape. A huge metal tool called a die makes contact with the sheet metal, pressing it into the shape of a fender, door or hood. Designing and cutting these dies to the proper shape accounts for half of the capital investment of a new car development program, and drives the critical path. If a mistake ruins a die, the entire program suffers a huge setback. So if there's one thing that automakers want to do right, it's die design and cutting.
The problem? As car development progresses, engineers keep making changes that find their way to the die design. No matter how hard the engineers try to freeze the design, they're not able to do so. In Detroit in the 1980s, the cost of design changes was 30-50 percent of the total die cost, while in Japan it was 10-20 percent. These numbers suggest that the Japanese companies must have been much better at preventing change after the die specs were released to the tool and die shop, but such was not the case.
The U.S. strategy for die making was to wait until the design specifications were frozen and then send the final design to the tool and die maker, which triggered the process of ordering the block of steel and cutting it. Any changes went through an arduous approval process—it took about two years from ordering the steel to the time that the die was used in production. In Japan, however, the tool and die makers ordered the steel blocks and started rough cutting at the same time the car design began. This is called concurrent development. How can it possibly work?
Japanese die engineers are expected to know a lot about what a die for a front door panel, for example, involves, and are in constant communication with the body engineer. They anticipate the final solution and are skilled in techniques to make minor changes late in development, such as leaving more material where changes are probable. Most of the time, die engineers are able to accommodate the engineering design as it evolves. In the rare case of a mistake, a new die can be cut much faster because the whole process is streamlined.
Japanese automakers don't freeze design points until late in the development process, allowing most changes to occur while the window for change is still open. In contrast to the early design freeze practices in the U.S. in the 1980s, Japanese die makers spent perhaps a third as much money on changes—and produced better die designs. Japanese dies generally required fewer stamping cycles per part, creating significant production savings.
This significant difference in time-to-market and the increasing market success of Japanese automakers prompted U.S. automotive companies to adopt concurrent development practices in the 1990s, and today, the product development performance gap has narrowed significantly.
Devil in the Details
Programming is a lot like die cutting. The stakes are often high, and mistakes can be costly, so sequential development (establishing requirements before designing and programming begin) is a common way to protect against serious errors. Sequential development forces designers to take a depth-first rather than breadth-first approach, making low-level dependent decisions before experiencing the consequences of high-level decisions. The most costly mistakes are made by forgetting to consider something important at the beginning, and the easiest way to make big mistakes is to drill down to detail too quickly. Once you start down the detailed path, you can't back up, and aren't likely to realize that you should. Instead, it's wiser to first survey the landscape and delay the detailed decisions.
Concurrent development of software encourages this breadth-first approach, enabling discovery of those big, costly problems before it's too late. Usually taking an iterative form, concurrent development is the preferred approach when stakes are high and understanding of the problem is evolving. Moving from sequential to concurrent development means programming the highest-value features as soon as a high-level design is determined, even while detailed requirements are still being investigated. This may sound counterintuitive, but consider it an exploratory approach that permits you to learn by trying a variety of options before you lock in on a direction that constrains implementation of less important features.
In addition to providing insurance against costly mistakes, concurrent development is the best way to deal with changing requirements, because both small and large decisions are deferred while you consider all the options. Because of this flexibility, when change is inevitable, concurrent development reduces delivery time and overall cost while improving the performance of the final product.
This may sound like magic—or hacking—and, indeed, merely programming earlier, without associated expertise and collaboration, is unlikely to lead to improved results. Certain critical skills must be in place for concurrent development to be truly effective.
In traditional, sequential development, U.S. automakers considered die engineers to be quite remote from automotive engineers. Similarly, programmers in a sequential development process often have little contact with the customers and users who have requirements and the analysts who collect requirements. Concurrent development in die cutting required U.S. automakers to make critical changes: The die engineer had to anticipate what the emerging design would need in the cut steel, and thus had to collaborate closely with the body engineer.
Similarly, concurrent software development requires developers with enough expertise in the domain to anticipate where the emerging design is likely to lead, and close collaboration with the customers and analysts to design the system.
The Last Responsible Moment
Concurrent software development means starting development when only partial requirements are known, and developing in short iterations that provide the feedback that causes the system to emerge. Concurrent development makes it possible to delay commitment until the last responsible moment, a term coined by the Lean Construction Institute: the moment at which failing to make a decision eliminates an important alternative. If commitments are delayed beyond the last responsible moment, decisions are made by default, which is generally not an effective approach to decision making.
Making decisions at the last responsible moment isn't procrastination; in fact, delaying decisions is hard work, requiring special tactics:
Share partially complete design information. The notion that a design must be complete before it's released is the biggest enemy of concurrent development, increasing the length of the feedback loop in the design process, causing irreversible decisions to be made far sooner than necessary. Good design is a discovery process, best accomplished with short, repeated exploratory cycles.
Organize for direct, worker-to-worker collaboration. Early release of incomplete information and refining the design as development proceeds require collaboration: The upstream workers who understand system details must communicate directly with downstream workers who understand code details.
Develop a sense of what's critically important in the domain. Forgetting some critical feature of the system until too late is the fear that drives sequential development. If security, response time or fail-safe operation is critically important in the domain, the issue should be considered from the start; ignoring it until too late will indeed be costly. However, don't swallow the assumption that sequential development is the best way to discover these critical features. In practice, early commitments are more likely than late commitments to overlook critical elements, because early commitments rapidly narrow the field of view.
Develop a sense of when decisions must be made. If you make decisions by default, you haven't truly delayed them. Certain architectural concepts such as usability design, layering and component packaging are best made early, to facilitate emergence in the rest of the design. Late commitment must not degenerate into no commitment. You need to develop a keen sense of timing that kicks in your decision-making mechanism at the appropriate moment.
Develop a quick response capability. The slower you respond, the earlier you must make decisions. Dell, for instance, can assemble computers in less than a week, so they can decide what to make less than a week before shipping. Most other computer manufacturers take a lot longer to assemble computers, so they must decide what to make much sooner. If you can change your software quickly, you can wait to make a change until customers know what they want.
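In code, these tactics often reduce to binding a decision as late as possible instead of hard-wiring it throughout the system. As a hedged sketch in Python (the registry, format names and functions here are invented for illustration, not drawn from the article), a report writer can defer the choice of output format until the moment a customer actually asks for it:

```python
# Hypothetical sketch of late binding: the output-format decision is made
# at request time, at a single point, rather than scattered through the code.

def as_csv(rows):
    return "\n".join(",".join(str(v) for v in row) for row in rows)

def as_tsv(rows):
    return "\n".join("\t".join(str(v) for v in row) for row in rows)

# The registry is the single point of commitment; supporting a new format
# later is a one-line change, so the decision can wait for real demand.
FORMATS = {"csv": as_csv, "tsv": as_tsv}

def export(rows, fmt):
    # The format is chosen here, at the last responsible moment.
    return FORMATS[fmt](rows)
```

Because the commitment lives in one dictionary, responding to a new customer request is quick, which in turn is what makes it safe to delay the decision.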
Software systems differ from most products in that they're upgraded on a regular basis. More than half of the development work on a software system takes place after it's first sold or placed into production. In addition to internal changes, software systems are subject to a changing environment: a new operating system, a change in the underlying database, a change in the client used by the GUI, a new application using the same database and so on. Most software is expected to change regularly over its lifetime; in fact, once upgrades stop, software is often nearing the end of its useful life. This presents a new category of waste: waste caused by software that is difficult to change.
In 1987, Barry Boehm wrote, "Finding and fixing a software problem after delivery costs 100 times more than finding and fixing the problem in early design phases." This observation became the rationale behind thorough up-front requirements analysis and design, even though Boehm himself encouraged an incremental approach over "single-shot, full product development." In 2001, Boehm noted that for small systems, the escalation factor is more like 5:1 than 100:1; and even in large systems, good architectural practices can significantly reduce the cost of change by confining features that are likely to change to small, well-encapsulated areas.
Product development previously reflected a similar, but more dramatic, cost escalation factor. It was once estimated that a change after production began could cost 1,000 times more than if the change had been made in the original design. This belief that the cost of change escalates as development proceeds contributed greatly to standardizing the American sequential development process. No one seemed to recognize that the sequential process could actually be causing the high escalation ratio. However, in the 1990s, as concurrent development replaced sequential development in the U.S., the cost-escalation discussion was forever altered. The discussion was no longer about how much a change might cost later in development; instead, it centered on how to reduce the need for change through concurrent engineering.
Not all change is equal: You need to get a few basic architectural decisions—such as choice of language, architectural layering and the decision to interact with an existing database—right at the outset of development, because they establish constraints for the rest of the system's lifespan. These kinds of decisions might have the 100:1 cost-escalation ratio, and because they're so crucial, you should take a breadth-first approach and try to minimize the number of these high-stakes constraints.
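One way to keep a high-stakes decision from spreading its 100:1 escalation through the code is to confine it to a single, well-encapsulated module. A minimal Python sketch of the idea, with all names my own invention: the rest of the system talks only to `CustomerStore`, so swapping the underlying storage later touches one class instead of every caller.

```python
import sqlite3

class CustomerStore:
    """The only module that knows a relational database is in use.
    Callers see add/find; the storage decision is confined here."""

    def __init__(self, path=":memory:"):
        self._db = sqlite3.connect(path)
        self._db.execute(
            "CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT)"
        )

    def add(self, cid, name):
        self._db.execute("INSERT INTO customers VALUES (?, ?)", (cid, name))

    def find(self, cid):
        row = self._db.execute(
            "SELECT name FROM customers WHERE id = ?", (cid,)
        ).fetchone()
        return row[0] if row else None
```

If the database decision later proves wrong, only this class changes; for everything else in the system, the cost-escalation factor of that change stays low.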
The bulk of the changes in a system need not have a high cost-escalation factor; it is the sequential approach that escalates the cost of most changes exponentially as development proceeds. Sequential development emphasizes making all decisions as early as possible, so the cost of all changes is the same: very high. Concurrent design defers decisions as late as possible. This has four effects:
- Reduces the number of high-stakes constraints.
- Allows a breadth-first approach to high-stakes decisions, making it more likely that they'll be made correctly.
- Defers the bulk of the decisions, significantly reducing the need for change.
- Dramatically decreases the cost-escalation factor for most changes.
Two Cost Curves
Let's return for a moment to the Toyota die-cutting example. The die engineer sees the conceptual design of the car and knows roughly the size of door panel necessary. With that information, a large enough steel block can be ordered. If the concept changes from a small, sporty car to a mid-size family sedan, the block of steel may be too small—a costly mistake. But the die engineer knows that once the overall concept is approved, it won't change, so the steel can be safely ordered long before the details of the door emerge. Concurrent design is a robust design process because the die adapts to whatever design emerges.
Lean software development delays freezing all design decisions as long as possible, because it's easier to change a decision that hasn't been made. Lean software development emphasizes developing a robust, change-tolerant design, one that accepts the inevitability of change and structures the system so that it can be readily adapted to the most likely kinds of changes.
Software changes throughout its lifecycle mainly because the business process in which it's used evolves over time. Some domains evolve faster than others, and some domains may be essentially stable. It's not possible to build in flexibility to accommodate arbitrary changes cheaply. The idea is to build tolerance for change into the system in the domain dimensions that are likely to change. Observing where changes occur during iterative development gives a good indication of the places the system will probably need flexibility in the future. The secret is to know enough about the domain to maintain flexibility, yet avoid excess complexity.
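The idea of building tolerance only along the dimensions likely to change can be sketched in code. Suppose, purely as a hypothetical example, that iterative development has shown discount rules changing every release while the basic order arithmetic never moves; then the volatile rule gets a seam and the stable logic stays plain:

```python
# Hypothetical sketch: the discount rule kept changing during iterations,
# so only that dimension is made pluggable; stable logic stays direct.

def no_discount(total):
    return total

def holiday_discount(total):
    # This rule is the part observed to change; it lives behind the seam.
    return round(total * 0.9, 2)

def order_total(prices, discount=no_discount):
    # Summing prices is stable, so no flexibility machinery is added here.
    return discount(sum(prices))
```

Adding flexibility everywhere would be the excess complexity the text warns against; the seam goes only where iteration has shown change actually occurs.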
If you build a system with a focus on getting everything right at the beginning, it's likely to be brittle and not accept changes readily. Worse, the chance of making a major mistake in key structural decisions is increased with this depth-first approach.
If, on the other hand, you allow the system's design to emerge through iterations, it will be robust, adapting more readily to changes that occur during development. More importantly, the ability to adapt will be built into the system, so that as more changes occur after its release, they can be readily incorporated. By applying the flexibility and delayed decision-making of the Japanese concurrent development approach to your own project, you can increase its chances of success.
Amateurs Strive; Experts Hide
In his essay "Delaying Commitment" (IEEE Software, May/June 1988), leading British computer scientist Harold Thimbleby observes that the difference between amateurs and experts is that experts know how to delay commitments and conceal their errors for as long as possible, repairing flaws before they cause problems. Amateurs try to get everything right the first time, thereby overloading their problem-solving capacity until they end up committing early to wrong decisions. The tactics Thimbleby recommends for delaying commitment in software development amount to a virtual endorsement of object-oriented design and component-based development.
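Thimbleby's advice maps naturally onto information hiding: a module that conceals its representation lets the implementer repair a poor early choice before any client notices. A small Python illustration (my example, not Thimbleby's): callers of `Roster` never learn whether names are kept in a list or a set, so that commitment can be revised late and invisibly.

```python
class Roster:
    """Clients see only add/__contains__/count. The representation
    (a list here, perhaps a set later) is a concealed, revisable choice."""

    def __init__(self):
        self._names = []          # an early commitment, hidden and reversible

    def add(self, name):
        if name not in self._names:
            self._names.append(name)

    def __contains__(self, name):
        return name in self._names

    def count(self):
        return len(self._names)
```

Because no client touches `_names` directly, switching the representation is a local repair rather than a system-wide change, which is exactly the expert's trick of concealing an error until it can be quietly fixed.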
Mary Poppendieck's 25 years' experience in IT includes supply chain management, manufacturing systems and digital media. As information systems manager in a videotape manufacturing plant, Poppendieck first encountered the Toyota Production System, which later became known as Lean Production. She implemented one of the first Just-in-Time systems at 3M, resulting in dramatic improvements in the plant's performance. This article is adapted from Lean Software Development (Addison-Wesley, in press) with permission.