

Destructors Considered Harmful

Edsger Dijkstra's famous 1968 letter to the CACM, best known as "Go To Statement Considered Harmful," argued that statements such as goto, despite their power, had hazards that programmers would do better to avoid. This letter started a programming revolution, to the point that today, transfers of control are almost always part of more restrictive control structures such as if or while statements.

C++ destructors have their own hazards, whose nature and causes are analogous to those of the goto statement. This article argues that programmers should seriously consider restricting the use of destructors to data structures that are designed to avoid those hazards.

Dijkstra and Structured Programming

Edsger Dijkstra's 1968 letter to the Communications of the ACM (CACM) was originally entitled "A Case Against the GO TO Statement." The CACM retitled it "Go To Statement Considered Harmful." The document is generally credited with starting the Structured Programming revolution, which so affected how developers program that languages today such as Python and Java do not even have goto statements.

When asked why goto statements are harmful, most programmers will say something like "because they make programs hard to understand." Press harder, and you may well hear something like "I don't really know, but that's what I was taught." For that reason, I'd like to summarize Dijkstra's arguments.

People understand static relationships better than dynamic relationships because the latter requires their understanding to change constantly. Therefore, it is easier to understand how a program works if we can express that understanding in terms of properties of the program, rather than of the program's execution. This former understanding stays the same until someone rewrites the program, but the latter might have to change each time we run the program.

To understand a program's properties means to be able to point to a place in the program and think: "Whenever the program gets to this point, certain conditions are true." For example:

     if (n < 0)
           n = 0;

Assuming that n is a variable of a built-in numeric type, we know that after this code, n is nonnegative.

Suppose we rewrite this fragment:

	if (n >= 0) goto nonneg;
	n = 0;
	nonneg: ;

In theory, this rewrite should have the same effect as the original. However, rewriting has changed something important: It has opened the possibility of transferring control to nonneg from anywhere else in the program. In other words, even though rewriting the fragment may not have changed its behavior, we no longer know that when control reaches the statement after this fragment, n is nonnegative.

Another way in which goto statements make it hard to understand programs statically is by making it hard to describe how much progress a program has made during its execution. If a program includes only limited control structures such as if and while, then we can talk about which path each if has taken and how many times each while has executed, and build up from this information a complete picture of the program's execution history. Such a picture lets us understand programs by making claims such as "The number of iterations of this loop is the same as the number of input records so far." goto statements complicate keeping track of the program's execution, because there is no limit to how much executing a single goto can change the future path through the program. As a result, the program's execution history becomes much harder to describe.

Both of these hazards of goto statements affect our ability to think about programs, particularly our ability to understand a program's dynamic behavior by making static claims about it.

Allocating Resources Is Safer Than Freeing Them

Now let's look at pointers. C++ programmers learn that pointers are dangerous because they might not point to valid memory — but they don't learn why pointers wind up not pointing to valid memory.

There are two fundamental reasons that pointers might be invalid. First, the pointer might never have been initialized. This problem is not unique to pointers: Any variable of a built-in type can have a garbage value if the programmer forgets to initialize it. The unique hazard of pointers happens this way:

  1. A program allocates dynamic memory and uses its address to initialize a pointer.
  2. The program frees the memory, leaving the pointer with its previous — now invalid — value.
  3. The program tries to use the — now unallocated — memory to which the pointer points, and crashes.

It is important to realize that (1) is safe! It is not until (2) that there is any possibility of a problem. The hazard comes not in allocating memory, but in freeing it. Once a pointer points at correctly allocated memory, it will continue to do so unless that memory is freed. Of course, unless we have more memory than we need, we must free it eventually; when we do so, we must be sure that there are no other pointers to that memory.

This kind of assurance is similar to the assurance that we lost when we added a goto statement to our earlier program. In effect, freeing memory is like transferring control, and pointers are like labels: When a program frees memory (transfers control), we must make certain that every pointer (label) that might inappropriately point to that memory (be the target of the transfer) does not do so in this particular case.

Comments



I'd like to pose the opposite statement: 'Lack of destructors considered harmful'. You can't do RAII without destructors.
Without RAII, resource management becomes transitive to composition. With resource management transitive to composition, any object composed from a resource that needs manual resource management 'becomes' a resource that needs manual resource management. Add polymorphic interfaces to the mix, and any polymorphic class hierarchy where one 'future' implementation 'might' use 'deep' resources also 'becomes' a resource.

When you state: "it is possible to encapsulate them in classes that do nothing but allocate and free resources", you are absolutely correct, but these classes are the RAII classes you speak about, or, for memory resources, scoped_ptr. Not shared_ptr: shared_ptr is meant for the special case where the lifetime of an object cannot be bound to one specific scope. If this special case happens a lot in your program, most likely you have a major design problem.


The example Andrew uses in his article is part of any good C++ book. It is "the" classical pitfall example. As many others have already complained, the title is really misleading and does not match the content. Comparing the goto problem with destructors is not appropriate. Any serious C++ programmer knows that destructors are executed when objects reach the end of their lifetime. A C++ programmer may not know from where control flow passes to a label via goto, but he knows (or should know) the rules about the lifetime of objects. The example in the article only says something about the importance of proper class design and best practices like "make classes un-copyable by default", i.e. by using boost::noncopyable. And if you need copy behaviour, implement it carefully. That was already written by Scott Meyers (Effective C++) and others years ago, and again, any serious C++ programmer knows it. However, every programmer who uses a well-designed class gets rid of nasty problems like resource lifetime control, AND THAT IS ONLY POSSIBLE BECAUSE WE HAVE DESTRUCTORS!

However, the introductory part of the article about the "goto" discussion was very informative for me, as I didn't know the historical background.


I disagree with this article. If we all go to shared pointers only to forget correct resource management, we will run into another problem: what does a copy mean? Is it a deep copy or a shallow copy?
All the Java students I have met crash on this at some point: they do not really know what they are doing when they write a = b.


The title is counterproductive at best. It puts readers in a wrong mindset to read the article and get the intended message. Whatever that message is...

"Allocating dynamic resources may seem hazardous — but freeing them is the real hazard."
No. A crash from deallocation is merely a symptom of resource mismanagement; the problem is already there, and may have manifested itself in many ways before you free it and see the symptom.

The best perspective on the problem comes from invariants.

An invariant is the common subset of all the individual member, friend, and destructor functions' preconditions about the state of the object (including its members and relatives). If a class cannot have a consistent invariant, for example because of mutually exclusive requirements, there is a design issue; such a class will never be safe to use. At any time within the object's lifetime, its state must follow the constraints of the invariant. If an invariant is "broken", meaning the object does not comply with those constraints, the object is not safe to use (best case, it would require a specific function calling order not to crash; now imagine an exception in the mix), or to destruct. Non-compliance is OK during a nothrow sequence of changes that ends in compliance; not all changes can be done atomically, after all.

Destructors obviously put requirements on invariants. The destructor in the page 2 example requires the invariant to contain "I'm the sole owner of the pointee"; which it's up to the rest of the class to maintain. From an invariant perspective this article scapegoats the destructor for being where the problem becomes visible (shoot the messenger style), rather than where the invariant was broken. In particular, there's no way to change (only) the destructor to make it safe to call under those conditions.

All "crashes long after the bug" errors are practically by definition a broken invariant.

The only way I can fit the goto analogy in this: like every goto marker is a potential entry point into a section of code, each function is an entry point into an object. Reading between the lines, you can come to this conclusion: The number of related functions whose pre-conditions make up the invariant, and whose post-conditions have to comply with the invariant, can grow unmanageable. This may have been what the author intended to get at.

The page 3 solution is, in a nutshell, outsourcing maintenance of that part of the invariant to another class, thus simplifying its own workload. I can get behind that.


I wish I could like this many times. Garbage collection is a solution for a single type of dynamically acquired resource, which makes managing all the others much harder.


C++ is utter junk. Articles like this just emphasize this; not that emphasis is necessary, of course. Computers are so fast these days that we should not be fumbling around on our hands and knees writing stupid memory management code. Same goes for Objective-C for iOS. My vote is to move to Ruby: it's wonderful. Unfortunately, we do not have a Ruby compiler. Until we do I, like so many others, will have to put up with C++, something I've been doing for way too long: it wastes my precious time and it bores me rigid. Boo-hoo :-))


This isn't about destructors. It's about programming practices.

In a previous position it was mandatory that all classes used a NOCOPYDEF(Classname) macro that created unimplemented copy constructors and assignment operators. This macro was only to be removed when implementing them later or accepting the defaults as suitable.

I don't agree that shared_ptr is a solution great enough to warrant this article, but it CAN be the right solution. The destructor is not a "problem"; it has caveats, and it's company culture that should determine how the downsides and risks are handled.

If the article would have discussed considerations involved with using weak_ptr as well I think my take may have been different.

Oh, thanks for the Koenig look-up.


I appreciate the value of smart pointers.
Your article, though, doesn't seem to promote the benefits of their use so much as denigrate the use of destructors.

I'm sure you're aware that destructors are useful for handling the cleanup of other kinds of resources too. Not just memory.

The impression I got as I read your article, is that you're arguing that smart pointers can be used to compensate for poor coding practices. You provided a list of bad things to put in your code, then presented smart pointers as the solution.

It's my experience that folks who have poor programming habits to begin with can find ways to break smart pointers too.

I like smart pointers. They are extremely useful in cases where you have multiple collections of data, sharing objects with no clear single point of ownership. I would never recommend they be used as a band-aid over bad code.


I think the more appropriate title would be "Proper Object Design". I appreciate the issue that the article points out. I do not agree that a standard solution solves the issue, just as removing "goto" did not solve the path-tracing issue.

I have looked through too many programmers' code and found:
- functions with multiple return statements
- the use of break and continue in a for or while loop
- and switch statements that do not have break statements between every case.

All of these are variations on the classic goto command and cause problems when tracing code execution. In some cases formatting and commenting can resolve the issue. But how many programmers take the appropriate time when commenting their code?

I come back to the main lesson I have learned through my years of programming:
"A programming language does not bestow better design, fewer bugs, or legibility. In the end, there is no substitute for proper design, design documentation, and proper commenting. These can only be achieved through understanding the language and how the language and runtime libraries work."


The real problem is the reuse of 'th' after 'foo(th)' returns. That this reuse is destruction in your example is only coincidental. It is much simpler to just zero-out the pointer being freed, in the destructor: '~TheThing(){ if(p) free(p); p=NULL;}', in order to guard against superfluous destructions. Unfortunately this means that the object will become unusable after the destruction of its _first_ copy. _This_ is the problem which the use of 'shared_ptr<>' is called to correct, making sure that the destruction really happens only for the _last_ copy in use.


Is that it!? This is a non-article, with nothing to say, written to fill copy-inches. Naked pointers are already well known to be the exception rather than the rule in C++, and by extension so are other resource handle types. Disappointing.


Having just spent many days fixing massive leaks of unmanaged resources in a large C# code base, I couldn't let this pass without comment. Garbage collection, at least as it's implemented in Microsoft's CLR, trades a simple problem (understanding resource ownership) for a much larger and more intractable one (the inability to encapsulate control of resource lifetimes). As in many aspects of life, magic is a poor substitute for understanding.


I think it's not so much destructors that are harmful as raw pointers. It's a long time since I had to use a raw pointer in C++, since it's almost always more appropriate to use a collection class to manage storage, combined with iterators for access. If I do need to use something more like a pointer, one of the standard smart pointers (such as Andrew advocates) is a much better choice than a raw pointer managed by the constructor/destructor.


In the example, the problem is not with the destructor but with the constructors. The default constructor behaved as if it had sole ownership of the resource, and the copy constructor behaved as if the resource was unowned and shared (so two instances of the same class could be fundamentally different). Either you want value semantics or you want reference semantics; you can't have both. Destructors only force us to confront this double nature.


You might as well say "most of C++ considered harmful" ;-) I mean, you can shoot yourself in the foot with shared pointers too, and of course even initialisation can go awry, as then you have all of the problems of exception safety. It sounds to me like attempting to make an unsafe language safe is hiding the subtlety in deeper and deeper layers. That said, good article, thank you. If anything, the article makes the case for garbage collection enabled by default.


It is nice that you bring this topic up. Classes that do not handle copy and assignment, coupled with destructors, can lead to some nasty bugs. A rule of thumb that has often saved me in these matters is the rule of three.


That title had me worried for a moment, but I strongly agree with the article's content. It's when one tries to get too clever with resource management that things go awry. Trying as much as possible to get each member of a class to look after itself keeps things much cleaner, and allows the default methods (copy, move, dtor) to do the right thing.

So going back to the title: Dijkstra's article rightly put an end to gotos in most modern code, but as you point out, people often don't know why. I would hate to see people recall "destructors are harmful, I mustn't use one of those". If the result was an aversion to RAII in C++ we'd have really taken a step backwards.

Destructors in C++ help to solve a modern equivalent of the goto problem: giving predictable state to an object at any point within a body of code, even in the presence of exceptions. The guarantee that the destructor will be called avoids delegating the cleanup to the user of an object; look at all the try..catch..finally blocks one must write to get the same effect in Java.


I'm not telling people to avoid destructors; I'm telling people to control them.

Destructors are like labels: When you write one, you have to be sure that however you get there, you have established the preconditions that the program expects at that point. In the case of destructors, that means that you have to ensure that every constructor establishes the preconditions.

Programmers eventually learned to avoid the hazards of labels by using control structures that limited the number of ways in which control could pass through each part of the program. Similarly, I think that programmers can benefit from encapsulating the deallocation of resources that destructors generally accomplish into classes such as shared_ptr, which ensure that each destructor has the proper preconditions.


Hmm. "Destructors Considered Harmful" is a misleading headline, I think. I'd read the thrust of the article as "understand why things are considered dangerous, don't just use a rule of thumb". C++ is a complex and powerful language, and like all complex and powerful tools it needs to be understood by those who use it, or it'll bite.


I have to disagree with the premise of the article. You can easily write a simple test program to show that you can avoid all these issues by either writing your own copy constructor that does a proper deep copy, or just making sure you always pass by reference or pointer (I consider this lazy). This is stuff they teach every beginning student and I've never seen anyone have a problem with it. If you need to pass a copy of an object, copy the whole object. If you don't need a copy, pass it appropriately.

Telling people to avoid destructors in C++ is a step backwards. While I can kinda see what you're doing and feel it's valid on its own (and not as a substitute for a properly designed class), if you want to avoid writing copy constructors, use value-passing semantics, and manage pointers and the memory they share via reference counting, use Java or C# and take advantage of the benefits they offer. Forcing C++ to be those languages isn't beneficial to anyone.