
Measuring C++ Program Efficiency

Derek Truong and Allen Chan, October 01, 2000



Derek is a member of IBM's VisualAge C++ Kernel Development group and can be contacted at [email protected]. Allen is a member of the Distributed Debugger group and can be reached at allen.chan@ca.ibm.com.


Software metrics are an important part of the software-development cycle because they provide feedback on the areas that need to be improved. For structured-programming languages like C and Pascal, metrics such as lines of code and cyclomatic complexity, which measure the size of a system and complexity of a function, respectively, are predominant.

With the adoption of object-oriented languages such as C++ and Java, however, a set of metrics is required that better captures the design quality of an object-oriented system. Many researchers have proposed different metrics to address this problem. For example, in Object-Oriented Software Metrics (Prentice Hall, 1994), M. Lorenz and J. Kidd describe more than 30 design metrics, while D. de Champeaux cites another 29 in Object-Oriented Development Process and Metrics (Prentice Hall, 1997). Many of these measures overlap, and trying to apply all of them can be overwhelming. Consequently, understanding which metrics to use, and what information each one conveys, is critical. A tool that provides an integrated environment for gathering and interpreting these measurements is clearly valuable, because you cannot assume that every programmer is an expert on all of them. The sheer number of proposed metrics demonstrates the value of integrating a metrics tool into a software-engineering environment.

In their paper "A Metrics Suite For Object-Oriented Design" (IEEE Transactions on Software Engineering, 1994), S.R. Chidamber and C.F. Kemerer propose a more manageable approach -- use a small suite of metrics that succinctly characterizes the system being measured. The suite they proposed consists of metrics satisfying very specific criteria: The metrics must have a theoretical foundation, they must be implementation-technology independent (that is, they can be used with C++, Smalltalk, Java, and so on), and they must be collectible using automated tools. In total, Chidamber and Kemerer propose six metrics: Weighted Methods Per Class, Depth Of Inheritance, Number of Children, Coupling Between Objects, Response for a Class, and Lack of Cohesion in Methods.

Using their work as a starting point, we designed and implemented an interactive metrics tool that integrates into a graphical IDE, making automated metrics gathering a natural part of the edit/compile/debug/browse cycle. In this article, we describe the design and use of this tool. The complete source code for the tool is available electronically (see "Resource Center," page 5), along with the source code for the example programs we use to exercise it. It is important to note that this metrics tool is not a stand-alone tool; it is designed to be integrated into an IDE.

Normally, creating a metrics tool requires, at a minimum, developing a complete parser and preprocessor that parse the subject program, a semantic analyzer to build up the symbol table, and a mechanism that allows access to this parsed information in a form that can be easily queried. For an OO language such as C++, which is notorious for its complicated syntax (templates, for example), the task of building or even modifying an existing parser is daunting.

At the outset, we decided that for the software metrics to be truly useful, the measuring and analysis of the metrics must be part of an immersive development experience. As a result, we developed the tool as an extension to IBM's VisualAge C++ Professional Version 5, through the use of the CodeStore API (see "The Architecture of Montana: An Open and Extensible Programming Environment with an Incremental C++ Compiler" [Proceedings of Foundations of Software Engineering, 1998], by Michael Karasick). The VAC++ V5 tools framework is extensive, flexible, and powerful. The CodeStore API enables many different types of tools to be built, ranging from those for querying symbol table information or customizing the compilation process, to extending the compiler functionality to handle different types of files (for instance, embedded SQL files). Using the flexibility of the IDE framework in conjunction with the CodeStore APIs, it is possible to build a metrics collector tool that queries the declarations and statements stored in the CodeStore (a persistent information repository residing within the development environment) and displays the results.

Tool Requirements

To ensure that the right tool is built, you must clearly understand the requirements that constitute a useful metrics tool. The six key requirements are:

  • Display statistical information for the entire project such as median, minimum, and maximum of the different software metrics. Developers must be able to quickly determine the characteristics of a system. Having the statistical information available enables immediate identification of anomalies and mean behavior for each metric.

  • Query metrics to identify a specific problem class or function. Developers must be able to ask questions about the system such as, "Which classes are the most likely to be high maintenance?" or "Which classes are the least likely to be reused?" The tool must return relevant information, ordered from most important to least, that can be used as the starting point for investigation.

  • Display detailed metrics for a given class or function. In addition to system-wide information, a metrics tool must be able to report specific metrics for a given class or function. These details must work seamlessly in conjunction with the querying capabilities to support fine-grained analysis of class or function characteristics.

  • Collect and display metrics automatically after each build. Metrics collection and display must be integrated into the edit/debug/compile/browse development cycle. There must not be a separate, cumbersome phase to collect this information.

  • Integrate with other tools inside the IDE. The metrics tool must be a good citizen within the IDE so that it exports information that other tools can use. This is critical to enable the results to be processed by other powerful IDE tools such as Find Uses.

  • Handle error scenarios elegantly. Compile errors are typical during the course of development. The tool must display metrics based on information available in the API at the time the compile error was detected. When the compile errors are fixed, the tool must automatically refresh itself with the most up-to-date information.

User Interface Design

Figure 1 shows the user interface of our metrics tool. The Query Metrics view is beside the Project Metrics view, with the Detail Metrics view at the bottom right. There is a fourth view called the "Details view" (provided by the VisualAge C++ IDE), which is used to pretty print the details of a class or function.

Both the Query Metrics and Project Metrics views have the CodeStore object as their input object. The Details and Detail Metrics views take as their input the object selected in the Query Metrics view.

The main motivation for this configuration is that the tool should let users get an overview of the project first, then allow for specific questions to be queried about anomalies that are noticed.

Users typically look at the Project Metrics view first to determine characteristics of the project; for instance, size is determined by looking at the total number of classes or lines of code. Next, looking at the statistical information (minimum, maximum, and so on) can reveal classes that are far from the average. From there, this information can be used in the Query Metrics view to ask which class has the maximum or minimum metric value. Finally, the results of the query flow into the lower two panes, where details can be examined.

Metrics Implemented

The tool implements a number of metrics. For each metric, we'll explain its purpose and possible interpretations, then provide a high-level description of how the metric information is gathered.

  • Number of classes. This metric is an indicator of system size. It also points to how object oriented a system is. For example, if a system has 100 KLOC, but only a few classes, it is a pretty good indication that the code is primarily function oriented.

  • Number of lines of code. This is the traditional metric to gauge a system's size. It is implemented by iterating through all source files in the system, and counting the preprocessed lines. Since this metric operates on preprocessed source files, comments are not included in the count.
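
The counting itself is simple. A minimal sketch of the idea, using ordinary files on disk (the real tool obtains the preprocessed source through the CodeStore API, so the file handling here is purely illustrative):

#include <fstream>
#include <string>
#include <vector>

// Count the lines in a set of (already preprocessed) source files.
// Illustrative only -- the metrics tool walks files via the CodeStore API.
long countLines(const std::vector<std::string> &files)
{
    long total = 0;
    for (std::vector<std::string>::const_iterator it = files.begin();
         it != files.end(); ++it) {
        std::ifstream in(it->c_str());
        std::string line;
        while (std::getline(in, line))
            ++total;
    }
    return total;
}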

  • Number of member functions. The number of member functions is an indication of the complexity of a class. Used as a system total, it is also an indirect measure of the system size and object orientedness. Statistical information such as minimum, maximum, and median is also computed for this metric. A maximum value that is very far from the median points to classes that contain lots of functionality and may be candidates for simplification.

  • Number of virtual functions. The number of virtual functions is an indication of the degree to which a class is used as an interface class. A low value for a given class suggests that it may not be a useful interface class and should be consolidated with another class.

  • Number of pure virtual functions. The number of pure virtual functions gives an indication of how abstract a class is. The presence of one or more pure virtual functions means that the class cannot be instantiated and can only serve as an abstract base class.
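
For example, a minimal illustration (hypothetical classes, not taken from the projects measured later):

class Shape {                        // abstract: one pure virtual function
public:
    virtual ~Shape() {}
    virtual double area() const = 0; // pure virtual; Shape cannot be instantiated
};

class Circle : public Shape {        // concrete subclass
public:
    Circle(double r) : radius(r) {}
    double area() const { return 3.14159 * radius * radius; }
private:
    double radius;
};

Here the tool would report one pure virtual function for Shape and none for Circle, flagging Shape as an abstract class.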

  • Depth of inheritance tree. The depth of an inheritance tree is a measure of how many ancestor classes can potentially affect this class. Essentially, the deeper the tree, the more complex the class due to the number of inherited functions. For example:

class A { /* ... */ };
class B : public A {};
class C : public B {};   // C has an inheritance depth of 2

  • Number of children. The number of children is the number of immediate subclasses subordinate to a class in the class hierarchy. The larger this number, the greater the reuse; potentially, however, it can also suggest improper abstraction of the parent class.
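
For example, with illustrative class names:

class Stream {};                          // base class
class FileStream   : public Stream {};
class SocketStream : public Stream {};    // Stream has a Number of Children value of 2
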
  • Coupling between object classes. Coupling between object classes counts the number of other classes that are used within the member functions of a given class. Excessive coupling indicates that the class will be difficult to reuse in other applications. The metric is implemented by iterating through a class's member functions, expanding the function bodies, walking the parse tree, and counting the number of other classes that are used. This is a very expensive metric to gather because of all the analysis involved. For example:

class B {};                 // some other class, assumed for illustration

class A {
public:
    void foo() { B b; }     // coupling with B
};

  • Number of nontemplate-dependent functions. This C++-specific metric counts the number of functions in a template that do not rely on a template parameter. The idea is to be able to identify these functions, and move them up to a base class that can be shared by other instantiations, thereby reducing template code bloat.
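
A sketch of the refactoring this metric is meant to drive (hypothetical class names, not output of the tool): reset() never mentions the template parameter T, so without refactoring every instantiation of Buffer receives its own identical copy of it.

// Before: reset() is stamped out once per instantiation of Buffer<T>,
// even though its body does not depend on T.
template <class T>
class Buffer {
public:
    Buffer() : count(0) {}
    void reset() { count = 0; }
    void push(const T &v) { data[count++] = v; }
private:
    T   data[64];
    int count;
};

// After: the nontemplate-dependent function moves to a shared, nontemplate
// base class, so all instantiations reuse a single copy of reset().
class BufferBase {
public:
    void reset() { count = 0; }
protected:
    BufferBase() : count(0) {}
    int count;
};

template <class T>
class SlimBuffer : public BufferBase {
public:
    void push(const T &v) { data[count++] = v; }
private:
    T data[64];
};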

Use Scenarios

At least in the CodeStore system, a metrics tool cannot be used until information about the program has been collected; consequently, a build (compilation) must first be done. The following scenarios show how to use the different metric information to infer knowledge about the complexity of a program.

Determine size of the system.

Query: What is the size of the system?

Strategy: Browse the Project Metrics view, which displays project totals including lines of code, number of classes, number of methods, number of virtuals, and number of pure virtuals.

Determine classes that are difficult to reuse.

Query: Which classes are candidates for redesign to improve reuse?

Strategy: Examine the statistical information in the Project Metrics view, looking specifically at the mean and maximum values for the Class Coupling metric; then, in the Query Metrics view, issue a query for Class Coupling. The tool returns the classes that rely most heavily on other classes (thereby making reuse difficult).

Determine classes that are hardest to maintain.

Query: Which classes are hardest to maintain? (One measure of difficulty is the complexity involved for maintainers in understanding the code. Generally, the more member functions and the greater the depth of inheritance trees, the more difficult because of the many potential execution paths.)

Strategy: In the Query Metrics view, issue a query for the number of member functions; copy the results into the clipboard; issue another query for depth of inheritance tree; compare the two lists. The top common classes are the ones to examine in further detail.

Determine template classes that contribute to code bloat.

Query: Which template classes can be rewritten to reduce code bloat?

Strategy: Identify member functions in template classes that are not template dependent so that they can be factored into a base class. This is done through the Query Metrics view: Issue a query for the number of nondependent template member functions; select the template class object from the Query Metrics view, and let this object be exported to the built-in IDE Find Uses view. (This view is part of the IDE. It accepts a declaration and returns all the places where it is used.) Look at the number of instantiations to determine if it is worthwhile to factor out nontemplate-dependent functions.

Examples

The following examples demonstrate the utility of the metrics tool through the collection and analysis of data for two C++ programs. In each analysis, the data is presented in two tables: one for the project metric totals, and the other for statistical information such as minimum, maximum, and median. These examples should give you an idea of how the data can be interpreted to let developers quickly assess a system.

The programs we chose to analyze were selected primarily to illustrate the use of the tool in the context of both a small application and a large one. (An application using more pure virtual functions and more inheritance would have been interesting to analyze, but we felt that the two we selected sufficiently illustrate the utility of the tool.) Further, gathering the data for the two example programs took less than 1 percent of the compile time. However, this was achieved only by disabling the Coupling Between Objects metric, which causes significant slowdowns due to the complexity of the computation; namely, for each class, every member function must be reparsed (the compiler does not store this information in memory; it is computed on demand) and analyzed. With this metric enabled, the collection process takes about 3 percent of the compile time.

gperf

gperf (http://ftp.cnr.it/gnu/gperf/) is a freely available GNU program that implements a perfect hash function generator: It transforms an n-element user-specified keyword set W into a perfect hash function F. It consists of about 5000 lines of highly portable C++ code that does not use templates or exception handling. Tables 1 and 2 show the results of the metrics gathering for the gperf project.

What do these metrics reveal about the gperf project? First, the size of the project can be quickly known: There are over 4700 lines of code, with 29 classes in total and 294 member functions. When looking at a project for the first time, this information is very useful to provide a rough estimation of the complexity of the code.

Next, notice that there are 18 virtual functions, but none of them are pure virtuals. The presence of virtuals indicates that dynamic binding is being utilized. Eighteen out of the 294 member functions are virtual, which is about 6 percent of the total. Had this number been much larger, it would have been an indicator of potential performance problems because calls through virtual functions are more expensive than regular function calls.

Certain anomalies are immediately obvious in Table 2. For example, there is a class that has 64 member functions in it while the median is 0 (that is, most of the classes are just structs without any member functions). This class is definitely a candidate for further investigation as it accounts for about 22 percent of all the member functions. There may be too much functionality packed into this class, making it hard to use. Identifying this class for further examination can easily be done by using the Query Metrics view to search for all classes with the number of member functions equal to 64.

Further, examining the depth of the inheritance tree metric reveals that the maximum depth is only 3. This indicates that the inheritance tree is shallow and consequently less complex. Combine this with the observation that the maximum number of children for a class is 3 with the median being 0. This indicates that there is not much reuse via inheritance. Once again the Query Metrics view can be used to identify the classes with specific metric values.

Jikes

Jikes is a large C++ program consisting of 150,000 lines of code (http://www.software.ibm.com/alphaworks). It implements a Java compiler and makes extensive use of templates and exception handling. Tables 3 and 4 show the results of the metrics gathering.

What does the data indicate about the character of the jikes project? It is immediately apparent that jikes is a much larger project than gperf: It has 150,000 lines of code and 262 classes, making it over an order of magnitude larger than gperf. There are 2679 member functions, 384 of which are virtual, but none of them pure virtual. This proportion is roughly the same as in gperf, indicating that dynamic binding is used but there are no abstract classes.

Examining the statistical metrics, there is one class with 243 member functions, overwhelmingly more than the median. It is highly likely that this is a central class and that a significant part of the project's functionality is encoded within its member functions.

Looking at the depth of the inheritance tree shows that this is a shallow hierarchy with the maximum being 3. On the other hand, there is a class that has 29 children. It is likely that this class is an interface class that defines common member functions for other classes.

Conclusion

When integrated into an IDE, the C++ metrics tool presented here gives developers the opportunity to infer knowledge from the metrics as part of the development experience. The different views let developers make quick assessments of the project by first examining metrics on a project-wide basis, then drilling down to analyze details of classes whose metric values are far from the median.

Acknowledgment

We are grateful for the many thoughtful comments and suggestions from Leigh Davidson, Frederick Gandolfi, Steve Hikida, Jean-Paul Hoffman, Derek Inglis, Michael Karasick, Sandor Mathe, Sean Perry, Jamie Schmeiser, and David Wortman.

DDJ

