2010 IT Project Success Rates


In This Issue

  • 2010 IT Project Success Rates
  • Hot Links

2010 IT Project Success Rates

This month I address three age-old questions in the IT industry: How successful are IT projects in practice? How do development paradigms compare? And how does team size affect project success rates? To answer these questions I will share results from three recent surveys that I ran with the help of Dr. Dobb's. The short story is that we're doing a lot better than others would lead you to believe, but that we still have room for improvement.

Let's start with an overview of the three surveys:

  • The 2010 IT Project Success Survey. This survey ran from May 2010 to the end of June 2010 and received 203 responses from the Dr. Dobb's readership. The goal was to explore the success rates of IT projects by paradigm, in a manner that reflects the strategy taken by the Standish Group's Chaos Report.
  • The July 2010 State of the IT Union Survey. I ran this survey during July 2010 and had 233 responses from the Dr. Dobb's readership. It explored IT project success rates by team size and paradigm.
  • The 2010 Agile Project Success Survey. This survey ran in April 2010 and received 108 responses from the agile community. The purpose was to provide a comparison for the first survey, albeit one which only focused on agile project teams.

The 2010 IT Project Success survey looked at IT project teams following four different development paradigms:

  • Ad-hoc. On ad-hoc software development projects the team does not follow a defined process.
  • Iterative. On an iterative software development project the team follows a process which is organized into periods that are often referred to as iterations or time boxes. On any given day of the project team members may be gathering requirements, doing design, writing code, testing, and so on. Rational Unified Process (RUP) is an example of an iterative software process.
  • Agile. On an agile software development project the team follows an iterative process which is also lightweight, highly collaborative, self-organizing, and quality focused. Examples of agile processes include OpenUP, Scrum, and Extreme Programming (XP).
  • Traditional. On a traditional software development project the team follows a staged process where the requirements are first identified, then the architecture/design is defined, then the coding occurs, then testing, then deployment. Traditional processes are often referred to as "waterfall", "classical", or simply "serial" processes.

Our previous IT Project Success Surveys in 2007 and 2008 (there was no 2009 survey because the 2008 survey ran in December 2008) only considered whether projects were successes or failures. This year I took a different tack and looked at three categories -- successful, challenged, and failed -- in a manner similar to the Chaos Report. For this survey, a project is considered successful if a solution was delivered and it met its success criteria within a range acceptable to the organization; challenged if a solution was delivered but the team did not fully meet all of the project's success criteria within acceptable ranges (e.g. the quality was fine and the project was pretty much on time, but the ROI was too low); and a failure if the project team did not deliver a solution at all.

According to the 2010 IT Project Success Survey, our success rates are:

  • Ad-hoc projects: 49% are successful, 37% are challenged, and 14% are failures.
  • Iterative projects: 61% are successful, 28% are challenged, and 11% are failures.
  • Agile projects: 60% are successful, 28% are challenged, and 12% are failures.
  • Traditional projects: 47% are successful, 36% are challenged, and 17% are failures.

To calculate these rates I followed the same strategy for each paradigm. The survey asked respondents to estimate the percentage of successful, challenged, and failed projects via one question for each paradigm (for a total of twelve questions). For each question you could choose from 91-100%, 81-90%, … 0% or Don't Know. Don't Know answers weren't included in the success rate calculations, which were calculated as weighted averages (e.g. 95% to represent the range 91-100%). Because the resulting raw averages didn't perfectly add to 100%, something that I couldn't programmatically enforce in the survey, I had to normalize the results. The normalized averages were calculated as percentages of the total of the raw averages for each paradigm. For example, for Iterative, the raw averages were 67%, 31% and 12%, for a total of 110%. These figures were then normalized to 61% (67/110), 28% (31/110) and 11% (12/110) respectively. If you want to verify my calculations, or calculate the success rates using another formula, you can download the source data from www.ambysoft.com/surveys/ free of charge.
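To make the arithmetic concrete, here is a minimal sketch in Python of the normalization step described above. The figures are the iterative-paradigm raw averages quoted in the example; the function name and structure are my own illustration, not code from the survey.

# Minimal sketch of the normalization step described above. The raw
# averages for the iterative paradigm (67%, 31%, 12%) sum to 110%, so
# each category is rescaled to its share of that total.

def normalize(raw_averages):
    """Rescale raw category averages so that they sum to 100%."""
    total = sum(raw_averages.values())
    return {category: round(100.0 * value / total)
            for category, value in raw_averages.items()}

iterative_raw = {"successful": 67.0, "challenged": 31.0, "failed": 12.0}
print(normalize(iterative_raw))
# -> {'successful': 61, 'challenged': 28, 'failed': 11}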

These results are drastically different from what is published by the Standish Group in its Chaos Report. According to the Chaos Report, our average success rate is 32% (far below even ad-hoc projects), 44% of IT projects are challenged (far higher than any of the paradigms), and our failure rate is 24% (far higher yet again). Why the difference? The problem, in my opinion, is that the Chaos Report doesn't really investigate the success rates of IT projects; what it does is explore whether IT project teams are delivering reasonably close to the original schedule, the original budget, and the initial specification, with the implicit assumption that this is how IT project success is defined. This assumption is pretty much wrong. The 2010 IT Project Success Survey also explored how success is defined for IT projects, and found that only 10% of the respondents indicated that "on time, on budget, to specification" are the success criteria for their IT projects. More interestingly, none of the business stakeholders who responded to the survey indicated that's how they define IT project success. In short, by forcing a pre-defined definition of success on project teams, one which is applicable only in a small percentage of cases, the Chaos Report has negatively skewed our perceived success rates. In the surveys that I run I ask people to report success rates in terms of the success criteria for the given projects -- different projects have different success criteria, after all -- and as a result I get what I believe to be much more accurate estimates of IT project success rates. Does it really make sense that we only have a 32% success rate in this industry? I certainly don't think so.

This survey also investigated how we actually define IT project success. When it comes to time/schedule, 54% of respondents prefer to deliver on time according to the schedule and 44% prefer to deliver when the system is ready to be shipped. With respect to financial issues, 35% prefer to deliver within budget and 60% prefer to provide good return on investment (ROI). When it comes to functionality, 14% prefer to build the system to specification and 85% prefer to meet the actual needs of stakeholders. Finally, when it comes to quality, 40% prefer to deliver on time and on budget and 57% prefer to deliver high-quality, easy-to-maintain systems.

So how does the Dr. Dobb's community as a whole compare with the agile community? According to the 2010 Agile Project Success Survey, the agile community reports that 55% of agile projects are considered successful, 35% challenged, and 10% failures. These figures are in line with what the Dr. Dobb's survey found, albeit a bit more conservative. Conservative agilists -- there's a mind bender for you <smile>. Agilists might be more conservative when estimating their success rates because their success criteria are a bit more stringent:

  • Time/schedule: 62% of agilists prefer to deliver on time according to the schedule and 34% prefer to deliver when the system is ready to be shipped.
  • Financial: 28% of agilists prefer to deliver within budget and 60% prefer to provide good return on investment (ROI).
  • Functionality: 15% of agilists prefer to build the system to specification and 82% prefer to meet the actual needs of stakeholders.
  • Quality: 29% of agilists prefer to deliver on time and on budget and 66% prefer to deliver high-quality, easy-to-maintain systems.

The July 2010 State of the IT Union Survey, which explored the relationship between team size and project success rates by paradigm, considered only success and failure as possible project states for simplification purposes. It looked at the four paradigms and three different team sizes: small teams of 10 or fewer people, medium-sized teams of 11 to 25 people, and large teams of 26 or more people. Granted, your organization may define small, medium, and large differently, but based on the team size distributions seen in previous surveys (in particular the November 2009 State of the IT Union Survey, which explored scaling issues) I thought that this was a fair way to categorize team sizes. With the four paradigms and three team sizes I had 12 questions, but had I considered the three different success states (successful, challenged, and failed) I would have had 36 questions -- few people are willing to respond to a survey that large. As with the 2010 IT Project Success Survey, for each question you could choose from 91-100%, 81-90%, … 0% or Don't Know. Don't Know answers weren't included in the success rate calculations, which were calculated as weighted averages (91-100% => 95%); a sketch of that calculation follows the list below. The resulting success rates for each development paradigm, by team size, were:

  • Ad-hoc projects: 74% for small teams, 58% for medium-sized teams, and 40% for large teams.
  • Iterative projects: 80% for small teams, 68% for medium-sized teams, and 55% for large teams.
  • Agile projects: 83% for small teams, 70% for medium-sized teams, and 55% for large teams.
  • Traditional projects: 69% for small teams, 61% for medium-sized teams, and 50% for large teams.
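As promised above, here is a minimal sketch, again in Python, of how one banded question can be turned into an average rate: answer bands are mapped to their midpoints (91-100% => 95, as described above) and Don't Know answers are dropped. The exact band boundaries below 91-100% and the sample responses are my own assumptions for illustration, not data from the survey.

# Minimal sketch of turning banded survey answers into an average rate.
# Bands are mapped to their midpoints and Don't Know answers are dropped.
# The intermediate band boundaries and sample responses are assumptions.

BAND_MIDPOINTS = {
    "91-100%": 95, "81-90%": 85, "71-80%": 75, "61-70%": 65, "51-60%": 55,
    "41-50%": 45, "31-40%": 35, "21-30%": 25, "11-20%": 15, "1-10%": 5,
    "0%": 0,
}

def average_rate(responses):
    """Average the band midpoints, ignoring Don't Know answers."""
    values = [BAND_MIDPOINTS[answer] for answer in responses
              if answer != "Don't Know"]
    return sum(values) / len(values) if values else None

# Hypothetical answers to one question, e.g. success rate of small agile teams:
sample = ["91-100%", "81-90%", "Don't Know", "71-80%", "91-100%"]
print(average_rate(sample))  # -> 87.5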

I have a few interesting observations to share based on the results of this survey. First, the more disciplined approaches of agile and iterative were more effective for all team sizes than the less disciplined approaches of either traditional/classical or ad-hoc. Second, as with all previous IT project success surveys, the agile and iterative paradigms had statistically the same success rates. More on this in a future newsletter. Third, for small teams it's better to take an ad-hoc approach than a traditional approach, possibly because the process gets in the way. This may provide some insight into why many agilists are against defined processes: the majority of IT project teams are small, regardless of paradigm, so people whose project experience is mostly with small teams are likely to have seen traditional approaches fare worse than ad-hoc ones and to conclude that detailed processes are problematic. Fourth, for large teams traditional strategies appear to fare much better than ad-hoc strategies (although not as well as agile or iterative strategies). This may explain why some people still cling to traditional strategies: if their experience is primarily with large projects and they perceive agile or iterative strategies to be ad-hoc, then based on that experience it's reasonable for them to stick with their existing strategies. Fifth, none of the paradigms are all that effective for large project teams, a clear indicator that your organization should strive to avoid large IT projects if possible.

I would be remiss if I didn't discuss some of the known challenges with surveys. First and foremost, I only get responses from people willing to be surveyed. Second, surveys run the risk of getting responses from people with strong feelings about the topic, particularly when the title indicates the topic. For example, the IT success rate survey ran the risk of capturing the biases of people interested in this topic, whereas the July 2010 State of the IT Union Survey didn't reveal the topic being explored and possibly had less bias. Third, there's the danger of capturing opinions, not facts. This is clearly an issue with this subject matter because I asked people what they believed their average project success rates to be. I had to ask this way because few organizations actually capture these sorts of figures in a coherent manner, and fewer still share the data internally, let alone externally. However, the fact that the Dr. Dobb's readership is fairly senior leads me to believe that they are likely to have a reasonable understanding of their success rates. Furthermore, the fact that I asked the same questions of the same group of people, in the same way, for each development paradigm leads me to believe that at least the figures between the paradigms are comparable.

The primary reason I run these surveys is to try to determine what is actually going on in the IT industry and what actually works in practice. We're inundated on a daily basis with marketing rhetoric from consultants and product vendors, with unproven theories about what should work, and, worse yet, with religious dogma founded on unproven or even disproved theories from yesteryear. I believe that the only way we'll ever address this challenge is with hard data, minimally from surveys such as mine but better yet from ethnographic research (which is orders of magnitude more difficult to do). So, I'd like to end this newsletter with a plea: if you ever receive a request from someone (particularly from me or Dr. Dobb's) to fill out a survey, please invest a few minutes of your valuable time to do so. Together we can discover what actually works in practice.

