
Walter Bright

Dr. Dobb's Bloggers

C++ Compilation Speed

August 17, 2010

I often hear a complaint that C++ code tends to be slow to compile, sometimes even taking overnight. Slow compiles were one of the motivations for exported templates, and are even listed as one of the reasons for the development of the Go language. It's a real problem, and since I'm in the C++ compiler business, I get asked about this. Why is C++ compilation slow? As it's reasonable to assume that C++ compiler implementers are pretty good at writing fast code, there must be something inherent in the language. C++ compilers do vary widely in their compilation speeds. But that isn't the whole story, since other languages routinely compile orders of magnitude faster, and it can't be true that the good compiler guys only implement other languages (!).

I've been working on C++ compilers since 1987. Back then, machines were extremely slow relative to today, and I paid enormous attention to trying to make the compiler fast. I've spent a lot of time doing performance profiling and tweaking the guts of the compiler to make it fast, and found what aspects of the language slow things down.

The reasons are:

  1. The 7 phases of translation [1]. Although some of these can be combined, there are still at least 3 passes over the source text; at least, I was never able to figure out how to reduce it below 3. A fast language design would have just one. C++0x exacerbates this by requiring that trigraph translation and \ line splicing be unwindable, to support raw string literals [2].

  2. Each phase is completely dependent on the previous one, meaning that there's no reliable way to look ahead and, for example, scan for #includes and fire off asynchronous reads for them in advance. The compiler cannot look ahead to see whether a raw string literal is coming and skip trigraph translation; it must do the trigraph replacement and keep some sort of undo list. I've never figured out a way to parallelize C++ compilation other than at the gross level that make provides with the -j switch.

  3. Because #includes are a textual insertion, rather than a symbolic one, the compiler is doomed to uselessly reprocess them when one file is #included multiple times, even if it is protected by #ifndef pairs. (Kenneth Boyd tells me that upon careful reading the Standard may allow a compiler to skip reprocessing #includes protected by #ifndef pairs. I don't know which compilers, if any, take advantage of this.)

  4. There's a tendency for source files to just #include everything, and when it's all accounted for by the compiler, there's often a truly epic amount of source text that has to be processed for every .cpp file. Just #including the Standard <iostream> results, on Ubuntu, in 74 files being read, totaling 37,687 lines (not counting lines from repeated #includes of the same file). Templates and the rise of generic programming have exacerbated this, and there's increasing pressure to put more and more of a program's code into header files, making this problem even worse.

  5. The meaning of every semantic and syntactic (not just lexical) construct depends on the totality of the source text that precedes it. Nothing is context independent. There's no way to correctly preparse, or even lex, a file without looking at the #include file contents. Headers can mean different things the second time they are #included (and in fact, there are headers that take advantage of this).

  6. Because of (5), the compiler cannot share results from compiling a #include from one Translation Unit (TU) [3] to the next. It must start all over again from scratch for each TU.

  7. Because different TUs don't know about each other, commonly used templates get instantiated all over again for each TU. The linker removes the duplicates, but there's a lot of wasted effort generating those instances.

Precompiled headers address some of these issues by making certain simplifying assumptions about C++ that are non-Standard, such as that a header will mean the same thing if #included twice, and you have to be careful not to violate them.

Trying to fix these issues while maintaining legacy compatibility would be challenging. I expect there to be some significant effort to solve this problem in the C++ standard following C++0x, but that's at least 10 years out.

In the meantime, there isn't much of a solution. Exported templates were deprecated, precompiled headers are non-Standard, imports were dropped from C++0x, often you don't have a choice about which compiler to use, etc. Effective use of the -j switch to make is the best solution out there at the moment.

I'll do a follow-on post about language design characteristics that make for high-speed compilation.


[1] Paraphrased from C++98 2.1, the seven phases are:
1. Trigraph and Universal character name conversion.
2. Backslash line splicing.
3. Conversion to preprocessing tokens. The Standard notes this is context dependent.
4. Preprocessing directives executed, macros expanded, #includes read and run through phases 1..4.
5. Conversion of source characters inside char and string literals to the execution character set.
6. String literal concatenation.
7. Conversion of preprocessing tokens to C++ tokens.

[2] The example in the C++0x Standard is at 2.14.5-4:

const char *p = R"(a\
b
c)";
assert(std::strcmp(p, "a\\\nb\nc") == 0);

[3] A TU, or Translation Unit, is typically one C++ source file, usually with a .cpp filename extension. Compiling one TU results in one object file. The compilation process compiles each TU independently of any other TUs, and then the linker combines the object files from those compilations into a single executable file.


Thanks to Andrei Alexandrescu, Jason House, Brad Roberts and Eric Niebler for their helpful comments on a draft of this.
