Software Estimation: How Misperceptions Mean We Almost Always Get It Wrong


Software developers are among the smartest people on the planet and often boast advanced degrees in mathematics, engineering, or computer science. In some ways, they are like superheroes — capable of programming complex functions, juggling myriad technologies, and morphing customer ideas into working software, all without breaking a sweat. So how is it that despite such technical savvy and programming prowess, they are so woefully poor at project estimation? Study after study shows that fewer than one-third of projects are delivered on time or on budget. Couple this with the fact that close to half of the effort spent on software projects ends up being "rework" and the whole situation seems to defy logic. How can smart people produce dumb estimates?

When we look across a company at departments such as finance, which meets or exceeds tax deadlines, or manufacturing teams that adhere to tight production schedules, it seems reasonable to expect software development to follow suit.

The problem of software project estimation is not straightforward, however. A big part of the problem with software development is doing "estimates" for products (software) that have yet to be designed. I believe that there are five top misperceptions about software estimating:

  1. This estimate is money in the bank. An estimate is not a guarantee, but rather a best guess of how big, how much, and how long something will take to develop or deliver, given the data that is available at the time.
  2. Everything's coming up roses. Software estimates often assume overly optimistic conditions and don't take interruptions, requirement changes, and a host of other factors into consideration.
  3. Things will be different this time. Historical project totals (effort, duration, costs) are facts and are often a good indicator of future project timelines and costs.
  4. This project is a piece of cake. Projects are seldom smaller and easier than we anticipate. Although we pride ourselves on being good communicators and project managers, projects tend to grow (the rule of thumb is 1.5% per month of the project), and assumptions about requirements (typically not documented) are a major contributor to this.
  5. The team will do it perfectly the first time. Although research on completed projects shows that rework figures hover between 40–60%, this task is often not included in project estimates.

Let's examine each of these misperceptions separately.

This estimate is money in the bank. An estimate is not a guarantee, but rather a best guess of how big, how much and how long something will take to develop or deliver given the data that is available at the time.

Indeed, most estimates are really "guesstimates" based on incomplete data. Imagine going to a builder without a solid floor plan and saying, "I want a custom house with enough bedrooms for the kids I will have in three years (could be one to five bedrooms), and a big kitchen and living area because we really like to entertain. I'm really not sure if we want to live in Minneapolis or Tampa right now, so just factor that into the estimate too…because we're going to the bank this month to get a mortgage to cover the construction." This would be ludicrous in home building, but it's analogous to what we often deal with in software development. While construction estimates are based on floor plans and even sealed blueprints, software estimates are often based on little more than "napkin scrawls" and ideas formulated over a lunch hour. Yet, these software estimates often become the project budget — no wonder projects end up over-budget when the basis of the estimate is so amorphous and incomplete.

But there is a solution! A historical project database, such as those provided by estimation software, minimizes uncertainties by providing analogies and similar projects on which to base new estimates. Some of the estimation tools are supported by databases containing actual data on thousands of completed projects together with solid parametric modeling equations. While an estimate is purely a best guess of how big it will be (software product size), how much it will cost (based on assumptions of effort, costs, complexity, size, etc.), and how long it should take to do (again based on how big, how easy, and how well the project will go) — when you augment your estimating toolset with a solid estimating tool, the risk of overruns is reduced markedly.

With estimating software, it's possible to reduce the variability and uncertainty that usually goes along with estimates done prematurely (that is, without good, solid software requirements).
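One common way to express that an estimate is a best guess rather than a single "money in the bank" number is a three-point (PERT) estimate, which combines optimistic, most-likely, and pessimistic figures into an expected value with an uncertainty band. This is a general technique, not a feature of any particular estimation tool, and the effort figures below are purely hypothetical:

```python
# Three-point (PERT) estimate: turn optimistic / likely / pessimistic
# guesses into an expected value plus an uncertainty band.
def pert_estimate(optimistic, likely, pessimistic):
    """Weighted mean and standard deviation of a three-point estimate."""
    mean = (optimistic + 4 * likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return mean, std_dev

# Hypothetical effort figures, in person-months.
mean, sd = pert_estimate(6, 10, 20)
print(f"Expected effort: {mean:.1f} person-months (+/- {sd:.1f})")
```

Reporting the result as a range (here, roughly 11 ± 2.3 person-months) makes the uncertainty visible instead of burying it in a single number.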

Everything's coming up roses. Software estimates often assume overly optimistic conditions and don't take interruptions, requirement changes, and a host of other factors into consideration.

Software engineers and other technical professionals often overestimate their ability to create, deliver, and do things right the first time — even in the face of historical evidence to the contrary. There are two aspects to this situation:

  • Even experienced project teams can get caught up in thinking "this is how much effort the project should take," rather than how much time/effort it really takes; and
  • When a team does come up with a realistic estimate based on actual history, management can become incredulous and will reduce the estimate to a level they can live with.

As a result, project estimates end up being overly optimistic and often don't take into account real-life conditions.

Things will be different this time. Historical project totals (effort, duration, costs) are facts and are often a good indicator of future project timelines and costs.

This misperception is tied to the previous one; it's often a result of project teams being punished in the past for missing their deadlines (which were unrealistic to begin with) and spending too much. History is glossed over with niceties such as "well, we hadn't predicted that the hardware would be delivered so late" or "we didn't count on having to change project managers part way through." The fact is that history does repeat itself if nothing is done to change it.

You may have heard the maxim, "Insanity is doing the same thing over and over and expecting different results," yet we go into a project with the same team members, same skills, same tools, same users, and the same software development processes — and we expect different results.

Historical data provide a much better gauge on which to build estimates than theoretical models do. History is fact; theoretical models are wishes. We should learn from our history and use it, or we are apt to make the same mistakes again.
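Basing an estimate on history can be as simple as computing the productivity actually delivered on comparable past projects and applying it to the new one. The figures below are hypothetical, and real analogy-based estimation (as in the tools mentioned earlier) would also match on complexity, team, and technology, but the arithmetic is the same in spirit:

```python
# Analogy-based estimate: derive productivity from similar completed
# projects instead of from a theoretical model. All figures hypothetical.
from statistics import median

# (delivered size in function points, actual effort in person-months)
history = [(300, 24), (450, 40), (280, 25), (520, 50)]

# Productivity achieved on each past project (FP per person-month).
productivity = [size / effort for size, effort in history]

new_size = 400  # estimated size of the new project, in function points
estimated_effort = new_size / median(productivity)
print(f"Estimated effort: {estimated_effort:.1f} person-months")
```

Using the median rather than the best past result guards against the "everything's coming up roses" bias described above.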

This project is a piece of cake. Projects are seldom smaller and easier than we anticipate. Although we pride ourselves on being good communicators and project managers, projects tend to grow (the rule of thumb is 1.5% per month of the project), and assumptions about requirements (typically not documented) are a major contributor to this.

This misperception can be tied to enthusiasm and the optimism of tackling a new project. It's a part of team pride to look at a project as a problem to be solved and envision an easy (and doable) solution. In addition, project teams are rewarded by management when they have a "can do" attitude toward solving problems. Imagine how few project managers would keep their jobs if they said, "Wow, this project is going to be a huge challenge and take a ton of brainpower," instead of the usual, "Yes, we can definitely do this." Yet the former statement is more closely aligned to reality — projects seldom end up being easier or smaller than anticipated. Once the design gets going, there are always complexities and missing requirements that come up. Projects are rarely a piece of cake.
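The 1.5%-per-month rule of thumb compounds over a project's duration, so even a modest monthly creep adds up. A back-of-the-envelope calculation (illustrative arithmetic only, using a hypothetical starting size):

```python
# Compound the 1.5%-per-month scope-growth rule of thumb over a
# project's duration. Illustrative arithmetic only.
def grown_size(initial_size, months, monthly_growth=0.015):
    """Expected size after `months` of compounding requirements growth."""
    return initial_size * (1 + monthly_growth) ** months

size = grown_size(1000, 12)  # 1000 function points, 12-month project
print(f"Expected size at delivery: {size:.0f} function points")
```

Over a year, that seemingly small monthly rate grows the product by nearly 20% — effort and schedule that a "piece of cake" estimate never accounted for.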

The team will do it perfectly the first time. Although research on completed projects shows that rework figures hover between 40–60%, this task is often not included in project estimates.

Project teams pride themselves on doing the best work they can given their knowledge, skills, and capabilities, yet the tasks at hand are hardly straightforward. There are always missing pieces where requirements were not quite right (the users didn't put all the fields on the report layouts), the business changes (this is a given), designs have to be updated (data was not available as anticipated), mistakes are made (we're dealing with humans here), and new things come up (additional functionality is needed). While teams do their best to do the design, coding, testing, and other phases of a project correctly the first time, completed-project data show that the effort to actually do a project right before releasing the product involves approximately 50% rework.
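If rework really makes up 40–60% of total project effort, an estimate that covers only "first-time" work understates the total badly: the arithmetic divides, rather than merely adds. A quick sketch, with the rework fraction and effort figures as stated assumptions:

```python
# If rework is a given fraction of *total* effort, then
# total = first_time / (1 - rework_fraction).
def total_with_rework(first_time_effort, rework_fraction=0.5):
    """Total effort when rework makes up `rework_fraction` of the total."""
    return first_time_effort / (1 - rework_fraction)

print(total_with_rework(20))       # 20 person-months of first-time work -> 40.0
print(total_with_rework(20, 0.4))  # same work, with rework at 40% of total
```

At the 50% figure cited above, the honest estimate is double the first-time-only one — which is exactly the line item most project estimates omit.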

New approaches to software development can reduce the amount of rework by building in more reviews earlier in the project timeline and releasing small iterative parts of the software (Agile), but the very nature of human beings assures that mistakes will be made and communication will be less than perfect.

In sum, software projects can be, and often are, estimated well when the object of estimation (the software product) is well defined, but misperceptions abound about estimates that are really "guesstimates." The first step to changing misperceptions is to recognize them in your own organization and anchor your estimates more tightly to reality.


Carol Dekkers is a senior consultant and instructor for QSM Inc. She has been a member of the U.S. delegation to ISO software engineering standards since 1994 and is the author of several books on software metrics and quality.

