
The Non-Existent Software Crisis: Debunking the Chaos Report


The software development success rates published in the Standish Group's Chaos Report are among the most commonly cited research in the IT industry. Since 1995, the Standish Group has reported rather abysmal statistics, ranging from roughly one-in-six projects succeeding in 1995 to roughly one-in-three today. Ever since the Chaos Report was published, various Chicken Littles have run around warning about how this "software crisis" will lead to imminent disaster. However, this supposed "software crisis" is complete and utter hogwash; it always has been, and I suspect it always will be. Sadly, that doesn't stop people who should know better, or who should at least be able to think for themselves, from continuing to quote this nonsense.

Since 2006, I have organized an almost-annual survey for Dr. Dobb's that explores the actual success rates of software development projects. The most recent was conducted in November and December of 2013 and had 173 respondents. The original questions as asked, the source data, and my analysis can be downloaded for free at 2013 IT Project Success Rates Survey results. The survey was announced on the Dr. Dobb's site, on the Ambysoft announcements list, my Twitter feed, and several LinkedIn discussion forums. The results of this study are much more positive than what the Standish Group claims. They still leave significant room for improvement, but they certainly don't point to a crisis.

The success rates by development paradigm are summarized in Table 1. As you can see, all paradigms (even an Ad hoc approach) fare much better than a one-in-three success rate. In this study, we asked people to judge success based on criteria that were actually applicable to the team at the time. In this case, a project is considered:

  • Successful if a solution has been delivered and it met its success criteria within a range acceptable to the organization;
  • Challenged if a solution was delivered, but the team did not fully meet all of the project's success criteria within acceptable ranges (for example, the quality was fine, the project was pretty much on time, but ROI was too low);
  • Failed if the team did not deliver a solution.

Paradigm       Successful   Challenged   Failed
Lean               72%          21%         7%
Agile              64%          30%         6%
Iterative          65%          28%         7%
Ad hoc             50%          35%        15%
Traditional        49%          32%        18%

Table 1: Software development success rates by paradigm.

Each paradigm was well defined. Respondents were first asked if their organization had any project teams following a given paradigm, and then what percentage of projects were successful, challenged, or failed. A weighted average for each level of success was calculated. A selection of 91-100% was counted as 95%, 81-90% as 85%, and so on. A selection of 0 was counted as 0, and answers of "Don't Know" were not counted. I then had to normalize the values because the weighted averages didn't always add up to 100%. For example, the weighted averages may have been 60%, 30%, and 20% for a total of 110%. To normalize the values, I divided each by the total, to report 55% (60/110), 27% (30/110), and 18% (20/110).
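
The normalization step just described is simple enough to sketch in a few lines of Python. This is only an illustration of the arithmetic; the `normalize` helper is hypothetical, not part of the survey tooling, and the 60/30/20 inputs are the article's own worked example:

```python
# Sketch of the normalization step: the three weighted averages
# (successful, challenged, failed) are scaled so they sum to 100%.
# "Don't Know" answers have already been dropped upstream.

def normalize(successful, challenged, failed):
    """Scale the three weighted averages (in %) so they total 100%."""
    total = successful + challenged + failed
    return tuple(round(100 * v / total) for v in (successful, challenged, failed))

# The article's example: weighted averages of 60%, 30%, and 20% (total 110%)
print(normalize(60, 30, 20))  # -> (55, 27, 18)
```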

Defining the Paradigms

The paradigms in this survey were defined as follows:

  • Ad hoc. On an Ad hoc software development project, the team does not follow a defined process.
  • Iterative. On an iterative software development project, the team follows a process that is organized into periods often referred to as iterations or time boxes. On any given day of the project, team members may be gathering requirements, doing design, writing code, testing, and so on. An example of an iterative process is RUP. Agile projects, which are defined as iterative projects that are performed in a highly collaborative and lightweight manner, are addressed later.
  • Agile. On an Agile software development project, the team follows an iterative process that is also lightweight, highly collaborative, self-organizing, and quality focused. Examples of Agile processes include Scrum, XP, and Disciplined Agile Delivery (DAD).
  • Traditional. On a traditional software development project, the team follows a staged process where the requirements are first identified, then the architecture/design is defined, then the coding occurs, then testing, then deployment. Traditional processes are often referred to as "waterfall" or simply "serial" processes.
  • Lean. Lean is a label applied to a customer-value-focused mindset/philosophy. A Lean process continuously strives to optimize value to the end customer, while minimizing waste (which may be measured in terms of time, quality, and cost). Ultimately, the Lean journey is the development of a learning organization. Examples of Lean methods/processes include Kanban and Scrumban.

Ad hoc development (no defined process) and the Traditional approach had statistically the same levels of success in practice. Our previous studies have also shown this result. However, when you take team size into account, Ad hoc is superior to Traditional strategies for small teams, yet the opposite is true for large teams. Agile and Iterative strategies had similar results on average, which we've also found to be true in the past, regardless of team size. For the first time, Lean strategies for software development, supported by both Kanban and the DAD framework, were notably more successful than the other four paradigms we explored. Food for thought.

Time to Get Lean?

For each paradigm, we also looked at several potential success factors. The questions we asked were:

  • Time/Schedule. When it comes to time/schedule, what is your experience regarding the effectiveness of [paradigm] software development teams?
  • Budget/ROI. When it comes to effective use of return on investment (ROI), what is your experience regarding the effectiveness of [paradigm] software development teams?
  • Stakeholder Value. When it comes to ability to deliver a solution that meets the actual needs of its stakeholders, what is your experience regarding the effectiveness of [paradigm] software development teams?
  • Product Quality. When it comes to the quality of the system delivered, what is your experience regarding the effectiveness of [paradigm] software development teams?

For each success factor, respondents were given options of Very Effective, Effective, Neutral, Ineffective, Very Ineffective, and Not Applicable. For each question, we calculated a weighted average by multiplying the answers by 10, 5, 0, -5, and -10, respectively (answers of "Not Applicable" were ignored). By doing so, we're able to compare the relative effectiveness of each paradigm by success factor, which you can see in Table 2. It's interesting to note that Lean approaches were perceived to provide better results on average than the other paradigms. Also, the three development paradigms of Lean, Agile, and Iterative were significantly better across all four success factors than either Ad hoc or Traditional (waterfall). In the vast majority of cases, only cultural inertia can justify a Traditional approach to software development these days.
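
The scoring scheme can be sketched as follows. The weights come from the description above; the `effectiveness` helper and the sample responses are hypothetical, used only to show how one cell of Table 2 would be computed:

```python
# Sketch of the effectiveness score: each response is weighted
# (+10 Very Effective, +5 Effective, 0 Neutral, -5 Ineffective,
#  -10 Very Ineffective) and averaged; "Not Applicable" is ignored.

WEIGHTS = {
    "Very Effective": 10,
    "Effective": 5,
    "Neutral": 0,
    "Ineffective": -5,
    "Very Ineffective": -10,
}

def effectiveness(responses):
    """Weighted average over the counted (applicable) responses."""
    counted = [r for r in responses if r in WEIGHTS]
    return sum(WEIGHTS[r] for r in counted) / len(counted)

# Hypothetical responses for one paradigm/factor combination:
sample = ["Very Effective", "Effective", "Effective", "Neutral",
          "Ineffective", "Not Applicable"]
print(effectiveness(sample))  # -> 3.0, i.e. (10 + 5 + 5 + 0 - 5) / 5
```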

Paradigm       Time/Schedule   Budget/ROI   Stakeholder Value   Product Quality
Lean                 5.7           6.0             5.5               4.8
Agile                4.3           4.2             5.6               3.8
Iterative            4.9           4.2             5.6               3.8
Ad hoc               0.0          -0.4             1.9              -1.4
Traditional         -0.7           0.5             0.1               1.9

Table 2: The effectiveness of each software development paradigm.

How Is Success Defined?

The survey also explored how people define success, which I argue is exactly where the Chaos Report falls apart with respect to measuring project success rates. Similar to what I found in the 2010 study, there was a robust range of opinions when it came to defining success:

  • Time/schedule: 16% prefer to deliver on time according to the schedule, 39% prefer to deliver when the system is ready to be shipped, and 42% say both are equally important
  • Budget/ROI: 13% prefer to deliver within budget, 60% prefer to provide good return on investment (ROI), and 23% say both are equally important
  • Stakeholder value: 4% prefer to build the system to specification, 86% prefer to meet the actual needs of stakeholders, and 10% say both are equally important
  • Product quality: 10% prefer to deliver on time and on budget; 56% prefer to deliver high-quality, easy-to-maintain systems; and 34% say both are equally important



Comments:

ubm_techweb_disqus_sso_-891c1e741a13fbc847ad5e05718bf3fc
2014-08-22T10:22:37

No software crisis, huh? The education system in South Africa is one of the lowest-ranked in the world. In order to cover up the education crisis, the government drops the pass rates. Now everyone is passing, and no one in this ecosystem (the student, the teacher, the parent) will tell you anything other than a success story. This is human nature. In 1998 Thabo Mbeki handled the AIDS crisis by simply saying that HIV did not cause AIDS. Hmmm, wonder how that worked for them? Again, this is human nature. Our strongest emotion is fear, and it is easier to deny a problem exists than to face it.

The average developer today cannot even deal with the concept of a simple thread. I know; I have interviewed more than 100 so-called developers that work on "successful" projects, and they are simply clueless about the most basic constructs of computer science. (This is normal today ... there is Google.)

But all this is forgiven in our industry as we lean on the proverbial Agile crutch. We run an iteration, mess it up, get feedback and re re re re factor, and more re re re factor. We call this progress and accept it as normal, healthy Agile. Developers copy and paste code from Google searches, trying to get it to "WORK" so they can deliver at the demo meeting. Nine out of ten times, a developer cannot even explain the code in their application, as they copied and pasted it from somewhere else ... justified by ... it works!!!

Call it whatever you like. My brain is not in neutral: when I am sick I go to the doctor. I do not Google it, or try some experimental drug and come back in 2 weeks to try something else. Sounds crazy? Apparently this is the definition of healthy Agile. Don't laugh, this is our industry. We take on very expensive projects without plans and start laying bricks, only to knock it down next week. I wonder how much code is thrown away every day ... But no ... no crisis.

I run an Agile shop, and am highly successful using volatility-based decomposition. I do not build my systems based on requirements, but rather capture the essence of the problem domain in a service-oriented architecture. I design for the only constant, which is change. I don't write code against user requirements. I rather orchestrate the required run-time behavior as the business developments of the day call for it.

Yes, this is harder and requires more skill, but you simply cannot avoid the hard work up front to get out anything of value. This is why we do not have perpetual energy sources and why alchemy fails.

I see every project that requires a massive cost of ownership in terms of maintenance, is flawed in security, or fails under load as a failure. By definition, that leaves a lot of broken junk in the world, or at least a lot of applications that simply cannot scale out to the changing requirements of the business. And yes, I am very sure they worked yesterday.

But do not worry. The very people who bring you this say that everything is under control. No crisis. We are AGILE.

Just because you have never seen a successful project and a successful implementation does not mean it cannot be done; all it means is that you have never seen it before.


Permalink
ubm_techweb_disqus_sso_-bea75935c0d422b3d9dae4bd40a9edc3
2014-08-21T19:41:33

" we'll find out about this system as we go."

I pray to god you're never involved in any software that determines my life.


Permalink
ubm_techweb_disqus_sso_-bea75935c0d422b3d9dae4bd40a9edc3
2014-08-21T19:40:26

Software needs engineering the very same reason bridges need engineering. Just because it's hard doesn't mean we give up on engineering!

When software projects collapse, they can lead just as directly to people dying as a bridge collapsing can. "Well, we're sorry the laparoscopic camera had a software bug and we perforated your colon; you'll just need to wear a colostomy bag."


Permalink
ubm_techweb_disqus_sso_-bea75935c0d422b3d9dae4bd40a9edc3
2014-08-21T19:33:45

"What it measures is whether project teams are reasonably on time, on budget, and are building something to specification."

This is the definition of success.

Anything other than this is..... not success. You can pick your words to describe this state, but there is absolutely one word you cannot use: success.

Success is delivering a project that delivers business value meeting the stakeholder's needs on time and on budget.

Anything else is not.

"How many organizations have you seen with a less than one third success rate? Any? I get around, at one point in my career I was visiting 20+ organizations a year, and I've never run into one with such an abysmal track record."

Because they lie. They lie to themselves. They don't want to admit they fail 66% of the time, so they move the goal posts so they can sleep better at night. They move their goal posts so the shareholders don't riot and toss out the board of executives at every shareholders meeting.


Permalink
ubm_techweb_disqus_sso_-0e6d99d7c4f109fc4c558592ada80977
2014-04-29T13:50:36

Hogwash indeed! Sorry Scott – if the organization chooses to measure success as something other than “on time, on budget, with expected functionality” – then one can easily see how there is no ‘crisis’. Such organizations have opted to accept something less than what they ought to be striving for.

Analogy – President Kennedy challenges the USA to put a man on the moon, and return him safely to earth, by the end of the '60s. Drop the "return him safely" from the mission and then call getting the astronaut there, but having him perish because we couldn't get him back, a success! Well, it meets the criteria, but is it really a "success"?

Another (real life) analogy – my wife asks me to clean the bathroom. By whose criteria should we measure success? If it is mine, then this will take about 10 minutes. If it is my wife’s criteria, there may be a second (and maybe third) iteration involved! Should we really accept the fox as the keeper of the hen house?


Permalink
ubm_techweb_disqus_sso_-e6443efbdfa933721ac2f761bb3e581e
2014-04-18T19:32:07

I agree with Tlingit. Ambler allows his respondents to essentially redefine success as "Time/Cost/Functionality - pick two out of three." What other industry allows success to be defined this way?


Permalink
ubm_techweb_disqus_sso_-11e3cde700f2e2528d992144fdc25b57
2014-02-14T11:21:14

Yep, that's the problem. ;-)

Seriously though, it's a lot of work to do deep research as you suggest. This work is difficult to fund and even more difficult to get support for within target organizations to study.

I often ask my clients if they would be willing to share their metrics publicly, yet few are willing to do so. This is a fairly easy thing for companies to do, particularly compared to starting up a new study with a researcher, but they are always worried about losing some sort of competitive advantage by doing so.


Permalink
ubm_techweb_disqus_sso_-a7807945662b5657189c78dec0fd8094
2014-02-11T18:26:35

I agree entirely. The "Software Crisis" has been touted as critical for much of the time I've been in software (which would be 41 years). The Standish Group's Chaos Report is routinely quoted as a source of this and a measure of the depth of the crisis. But how do they collect data?

Molokken and Jorgensen gave an excellent critique of the Standish Group's collection and analysis process (whatever that might be) as well as their conclusions [see http://citeseerx.ist.psu.edu/v...]

Having done this a lot over the years, collection of reasonably indicative data is very difficult. Projects which run badly rarely collect data on how badly they run. Projects which utterly "fail" do not, as a rule, stand up to be counted. Quite the opposite: usually they bury the dead bodies and pretend it never happened. So there is certainly some survivor bias in the numbers. But the real criterion for "success" is the delivery of value over cost, not whether a project made a particular target budget or date.

It is clear to anyone who has engaged in any large-scale systems development that being agile is much better than not being agile. It's also clear that the early commitment of large amounts of money when little is known about the system being built or what might be encountered while building it is really risky. This was true back in the sad bad days of waterfall projects and it's true today.

The best rejoinder to the criticism that software costs too much came from Tom DeMarco (in, well, "Why Does Software Cost So Much?") when he observed: "...compared to what?..." If the Software Crisis is really so bad and has continued so long, wouldn't the marketplace have replaced it with something more efficient?


Permalink
ubm_techweb_disqus_sso_-9275afb2fb019653617d1d4ae249bd71
2014-02-09T17:16:57

Isn't this a bit of a rehash of Robert L Glass's 2005 article in the IEEE Software magazine?

http://www.computer.org/csdl/m...

The Time, Scope, Budget measures of success are the 'Gold Standard' for project management and are, unfortunately, still held in high-esteem by many project managers.

There are literally dozens of research papers on the subject of software project success/failure and the need to have the success criteria agreed upfront. This is, of course, the basis of Tom Gilb's Evo method.

Collecting anecdotal evidence from arbitrary participants seems kind of random, no matter how you crunch the numbers. As many others have pointed out success is in the eye of the beholder.

Like Scott, I visit many different organisations every year. There are only two reasons they call on me. Either they have (real or perceived) problems with delivery or they just want some training. Usually it's the former.

You can bet your bottom dollar that if the Project Managers call me in and tell me they haven't delivered a project successfully (if at all) in the last three years, the Engineers will tell me every project they do is successful.

Conversely, if the Engineers call me in and tell me there are project problems, the Project Managers will vehemently deny it.

It may be the case that I only see the problem organisations because that's the sort of work I do but my experience tells me there are lots of problems out there.

Maybe we're suffering from too much of this:
http://www.amazon.co.uk/Mistak...


Permalink
ubm_techweb_disqus_sso_-cf088f2bee34b08ba50571f37f310732
2014-02-08T05:37:59

If you think that there is no software crisis, explain why the NASA contractor tried to build a $1B logistics system and nothing came out of it. Or tell me why, in IT help desks, there are no responses to basic problems and no standard practices. Tell me why hardware logic verification is so complicated and needs very smart people, not kids trying to have fun making some script to test a program. Tell me how to know what happened in the unintended acceleration case against Toyota in the US onboard computer, when nobody, not even NASA, knows a **** about how it could happen... If you can explain all of that to me, then your column is right.


Permalink
Tlingit
2014-02-06T17:53:00

I wish! In fact, I was kind of hoping you and/or DrDobbs.com would want to take that on. :-)


Permalink
robertvbinder
2014-02-06T17:40:46

Hi Scott

I've long suspected the Standish report and its claims, as the survey methodology is opaque, and of course, the hype is self-serving.

As far as I can tell, the participants in your studies are self-selected and you do not attempt or claim broad representativeness. So these results cannot be viewed as anything but a collection of profile case studies from persons who are inclined to tell their story. Typically, people are not inclined to self-report failures, so it seems likely to me that projects that failed (by any reasonable definition) are probably under-represented. Many, if not most, enterprise IT and ISV organizations stand on a large legacy codebase, typically the result of thousands of person-years of development using what you call "traditional paradigm." That would count as a success in your terms, but I suspect little of that is reflected in your survey responses.

I don't say this to diminish your results - they are interesting and instructive in their own right. I've done several such studies and have concluded there is no cheap way to achieve reliable representativeness, but it is important not to read too much into these snapshots.

NIST did a plausible macro-economic analysis about ten years ago, and concluded that known-bad IT practices add about $50B of avoidable cost to the US economy annually. There are many other credible studies. I don't know of any that have found evidence of a "crisis."

Of course, there are plenty of horror stories. Here's a case study of how bad software development led to the collapse of a successful ISV: http://robertvbinder.com/how-a...

All this indirectly raises the question of success. I like Tom DeMarco's reflection on all of this: "Consistency and predictability are still desirable, but they haven’t ever been the most important things. For the past 40 years, for example, we've tortured ourselves over our inability to finish a software project on time and on budget. But as I hinted earlier, this never should have been the supreme goal. The more important goal is transformation, creating software that changes the world or that transforms a company or how it does business." IEEE Software, July/August 2009.


Permalink
ubm_techweb_disqus_sso_-11e3cde700f2e2528d992144fdc25b57
2014-02-06T12:49:18

I agree with what you're saying. Can you point to a study that has done those sorts of things that you write about?


Permalink
ubm_techweb_disqus_sso_-c1319b5c2a8b278ec6190de7d1d46918
2014-02-06T11:12:30

There are some things that are very useful from engineering that can be used in software development. I agree. But there are many things that are different about software development for which engineering is not suited. Maybe I can recommend reading some writings of David Parnas, one of the most disciplined of software developers ever.

Engineering is much more planned - you have to get it right before it goes into production. Software is by nature much more experimental - we'll find out about this system as we go.


Permalink
ubm_techweb_disqus_sso_-c1319b5c2a8b278ec6190de7d1d46918
2014-02-06T11:07:12

Well, I suggest you read Koch and Godden's book. They are management consultants. But actually, the function of management has not disappeared; what has disappeared is the need for a separate class to do it.


Permalink
Tlingit
2014-02-06T03:07:16

Thanks for responding, Scott. I guess I already did check my gut because, like I wrote before, "I don’t know that I totally believe the Chaos Report."

To me, if someone wanted to do a useful study of software success and failure, they would need to select specific projects that represented the diversity of projects that are being built... varying scopes, technologies, industries, countries, etc. Then they would need to survey three groups: those who built them, those who use them, and those who paid for them. Only then could they compare findings to try to understand what has been happening out there.

I bet Microsoft felt pretty good about Windows 8.0, and Apple felt good about iWork 2014. It wasn't until those products hit the street that they got their reality check.

Consequently, if someone is trying to come up with software industry success statistics by surveying software people -- whether it is Standish Group, you or the NSA -- then their approach is fundamentally flawed.

Software people declaring their software projects to be successful is like men declaring themselves to be great lovers. I'm glad they feel like they're getting positive feedback and all, but they aren't the group that actually knows.

Success isn't meeting a checklist of criteria, success is content, repeat customers.

I'm currently finishing a software project that was supposed to be finished February 2. It will be actually finished on February 7. It is slightly under budget and the client loves it so much they've been demoing the work in progress to their clients these last 2 weeks.

Was this a successful project then?

I say we are deceiving ourselves if we think we know that today simply because we met certain criteria.

Everyone thinks their babies are smart, beautiful and perfect when they are born... it's when you are sitting in the principal's office for the second time in a week that you start to get real clarity on the job you think you've done.


Permalink
AndrewBinstock
2014-02-06T00:47:27

"Whether or not you like the article, one thing is true. We need to get away from the notion of 'software crisis'."
Hear, hear!


Permalink
ubm_techweb_disqus_sso_-71d59a8f651a78ede65d1a78a7977a03
2014-02-06T00:46:29

Maybe it's me but I see no correlation between how hard something is to change, and the use of the word "engineering."

Of course software should be engineered -- any complex outcome should be engineered, whether it is computer hardware, computer software, a movie screenplay, or date night. Flexibility is helpful, but it's not a replacement for engineering.


Permalink
ubm_techweb_disqus_sso_-11e3cde700f2e2528d992144fdc25b57
2014-02-06T00:41:52

Actually, the definition of challenged reflects what the Chaos Report uses for challenged.

The fundamental difference between the Chaos Report and my study is that the Chaos Report forces a definition of success on teams that is rarely applicable. My study levels the playing field by measuring success based on the criteria that were actually applicable for the given team.

Do a quick gut check though. How many organizations do you know with a one third success rate at software development projects? If one third is roughly correct, then roughly half of the organizations that you're familiar with should have less than a one third success rate and half would have more than that. So, is this a fair assessment of what you're able to actually observe yourself? Or, is a roughly two-thirds success rate a bit more realistic (based on your own observations)?


Permalink
ubm_techweb_disqus_sso_-71d59a8f651a78ede65d1a78a7977a03
2014-02-05T23:29:45

You had me until you got to "management is just a relic of the industrial revolution."

Actually, management seriously predates the industrial revolution, and is still necessary today --

Just not the way it is usually done today.

Have you heard of "radical transparency?" I know a software company that's building great stuff with an extremely flat (and minimal) management structure.

Check out this New York Times interview of Jeff Smith (of Qualtrics). I know Jeff, and his mom is our good friend and neighbor... impressive stuff.

http://www.nytimes.com/2013/03...


Permalink
ubm_techweb_disqus_sso_-c1319b5c2a8b278ec6190de7d1d46918
2014-02-05T23:11:12

Whether or not you like the article, one thing is true. We need to get away from the notion of 'software crisis'. This comes from the mistaken belief that software development should be like hardware engineering (hence the term software engineering should be banned as meaningless). Hardware is difficult to change - so we have software which is far more flexible - that is the cause of the problem and the 'crisis'. But it is the very nature of software to be like that. Use that nature for its strength, not its weakness.


Permalink
ubm_techweb_disqus_sso_-c1319b5c2a8b278ec6190de7d1d46918
2014-02-05T23:08:19

I'd say, no the developers are not out of touch - they are the ones in touch with the work. They should be a part of requirements gathering, but too often are kept away from that process by managers trying to maintain a power base - so they could be out of touch in that regard. But that is not their fault.

Managers who can't do the development work are the ones out of touch and this scares them - thus a whole lot of tactics are employed in the work place to maintain this power base.

But management is just a relic of the industrial revolution. Managing should be a part of every person's job, not a separate activity.

http://www.amazon.com/Managing...


Permalink
Tlingit
2014-02-05T22:31:42

Hm. The Chaos Report may well be flawed, but this article doesn't really debunk it.

First, when you set the bar for success so alarmingly low, obviously you will be able to report more success.

Ambler defines “Failed” as “team did not deliver a solution.” If I paid for a product and the team didn’t deliver a solution, I wouldn’t call that “failed,” I'd call it fraud.

And Ambler defines “Challenged” as “the team did not fully meet all of the project’s success criteria.” If my surgeon set out to do a double bypass heart surgery but ran out of money before he finished and quit there, that’s not “challenged,” that’s flat out failure.

Similarly, “acceptable range” is an awfully low bar for gauging success. I can’t imagine Lexus would have sold many cars with the slogan, “The relentless pursuit of an acceptable range.”

I agree that it is good that Ambler clarified his criteria for success with survey takers, but let’s get real: with the bar that low, even the Ambler Report isn’t that flattering to software developers.

Second, doesn't it really depend on who you are asking/polling? After all, there is a serious conflict of interest inescapably embedded in the idea of asking software engineers if they are usually failing or succeeding at their jobs.

Mobile app stores are overrun with apps that were acceptable to an organization when they were released, yet are not considered successful because the app doesn’t entirely please customers.

In building software, success isn’t meeting a range of criteria, success is pleasing the person paying for it — and our failure to get that is exactly the catalyst that sparked both the agile and lean movements.

The chef can insist all he wants that he is a great cook because he exactly followed the recipe in the book, but like it or not, the proof is in the pudding, so ultimately success will be defined by the consumer.

Third, and please forgive my frankness, but it is hard to take the article seriously when its tone is so vitriolic. "Various Chicken Littles?" Really?! Name-calling and belittling those with contrary views has made a mess of American politics, so why would someone think it an appropriate way to present professional, data-driven findings? Why not just call the Standish Group "infidels!" and be done with it?

All that said, I don’t know that I totally believe the Chaos Report, but I do know I find the Ambler Report even more of a stretch.

What would be useful would be to identify 200 or so software projects, then survey those who built them and those who paid for them and compare findings.


Permalink
ubm_techweb_disqus_sso_-ee02416db8f4b6755655ea42620959cf
2014-02-05T09:47:12

As already mentioned in other posts, the definition of "success" is key. Also the contrary.
There's another factor: which CIO, dev manager, main sponsor, or stakeholder will accept that a project under their supervision was not successful? How many projects are declared a 100% success, yet are a pain in the back for final users for the rest of their working lives?


Permalink
ubm_techweb_disqus_sso_-69089370f2ecad3a8f3e7eaff94e9133
2014-02-05T07:54:36

A big part of the problem is defining success. Many times in the waterfall model the time needed to make the project a success is underestimated, and no one can make up their minds about what they want during the process. In the end, projects are often over budget and don't really solve the problem because no one really understood the problem well enough. The project is then declared a complete failure when the people involved are laid off and a new CEO comes in and demands a new solution.

Agile has the advantage of allowing things to change during development, but puts the responsibility back in the hands of the stakeholders. This has the effect of helping the stakeholders understand the real time required to get done what they feel is important, and makes them put more effort into deciding what they really need. Success can be a project that does not deliver every bell and whistle but does what is most important. Additional features can then be added in a follow-up project.

In my career spanning more than 20 years, I have been a part of two major failures. The first was a technical mess from the start, with faked demos used at every deadline to fool management until it all came crashing down. The second was going well on the software side of the project, but management overspent advertising a product that did not exist except in a very small market, due to resistance from their competition. The resistance should have been expected from the beginning. When new management came in, the project was killed and everything was considered a failure, even though the software side had met its goals.

Today the company I worked for does not have the solution they were working on, but so many others do because they approached the problem in a smarter way. However, while working for the same company as described in failure 2, I participated in dozens of successful smaller projects.

Statistics are always malleable to proving any point you want to make if you pick your data points the right way.


Permalink
