Automated Metrics and Object-Oriented Development



Dr. Dobb's Journal December 1997: Automated Metrics and Object-Oriented Development

Using QMOOD++ for object-oriented metrics

Jagdish is a lecturer in the computer science department at the University of Alabama in Huntsville. Carl is chairman of the department. They can be contacted at jbansiya@cs.uah.edu and cdavis@cs.uah.edu, respectively.


Tools for object-oriented metrics are essential in real-world software development. For metrics to be actively used, however, they need to be automated, easy to use, and flexible enough to meet different requirements and goals. To this end, we have developed Quality Metrics for Object-Oriented Development (QMOOD++), an automated tool that supports a suite of over 30 object-oriented metrics. In addition to making it easy to collect metrics data, QMOOD++ has a repository in which the metric data of analyzed projects can be stored and retrieved later for comparisons. Figure 1 shows the key components of the QMOOD++ architecture. QMOOD++ is a comprehensive, multiuser, multithreaded, integrated Windows 95/NT-based tool. An executable of the program and sample files are freely available at http://indus.cs.uah.edu/ and from DDJ (see "Availability," page 3).

Typically, object-oriented software development is iterative, with overlapping phases in the development process. While the basic set of objects, operations, attributes, and relationships are identified in the analysis phase, the details of a class's methods, parameters, data declarations, relationships, and algorithms are resolved during design. The results are hierarchies of well-defined classes that represent a blueprint for implementation.

Using Metrics

Metrics assess the internal/external structure, relationships, and functionality of software components. The most basic components of object-oriented systems are classes. The interdependence of classes defines the external structure of the system. Relationships among classes define the paths of communication between objects of classes. The organization of class relationships, such as is-a or consists-of, allows for the sharing of functionality and attributes. Member functions of a class define the services a class supports (and its interactions with other objects), while the member data of a class defines the internal structure of the class's objects. An evaluation of a class's definition for its external relationship (inheritance type) with other classes, as well as an evaluation of its internal components, reveals significant information that objectively captures the structural and functional characteristics of a class and its objects.

Object-oriented development can be analyzed and monitored at both system and class levels. At the system level, the overall architecture (that is, the external structural characteristics of the system of classes) is analyzed. The components of the system that are evaluated at this level include classes, class hierarchies, and the relationships between classes. Tables 1 and 2 describe the metrics we frequently use to examine system-level characteristics of a project.

At the class level, internal/external characteristics of individual and small groups of classes are assessed. Components used in the assessment include methods, signatures of the methods, and number and types of data-attribute declarations in the class. Tables 3 and 4 list the metrics used to track the design and implementation of classes.

The set of metrics used in evaluation covers all constructs used in the creation of an object-oriented system. Unless otherwise stated, the measure of all metrics is an ordinal value greater than 0.0. Metrics marked with an asterisk (*) in Table 2 have a value in the range 0.0 to 1.0. The metrics defined in Tables 1, 3, and 4 are simple counting metrics, which count the number of occurrences of various constructs in classes and system descriptions. We call the metrics in Table 2 "derived" metrics because they combine results from the simple counting metrics using standard statistical parameters such as mean, variance, deviation, and distribution. For instance, the depth of inheritance (DOI) metric shown in Table 3 is a simple metric that measures the level of nesting of a class in an inheritance tree. A derived metric that is more meaningful at the system level is the average depth of inheritance (ADI), which is computed by dividing the sum of nesting levels of all classes by the number of classes. Derived metrics represent collective information about all classes in the system.

The metrics in Table 1 evaluate the characteristics of a design based on a tree (or a graph when multiple inheritance is used). These metrics evaluate the high-level architecture of an object-oriented system design. For instance, the number of classes (DSC) metric along with the average number of methods per class (NOM) metric gives a quick and rough estimate of the "size" and "complexity" of a system. The number of hierarchies (NOH) metric indicates separate and distinct concepts modeled and implemented in a system. The number of independent classes (NIC) metric, number of single (NSI) and multiple inheritance (NMI) metrics, number of internal (NNC) and leaf classes (NLC) metrics, along with the average depth of inheritance (ADI) metric, width of inheritance (AWI) metric, and number of ancestors (ANA) metric, all characterize the external structure of an object-oriented system.

The average depth of inheritance (ADI) metric indicates the degree of abstraction with which a system has been designed. Systems designed and developed for high reusability and extendibility (such as frameworks for specialized domains) generally have high values (ADI>2.0) for the metric. Systems with flat designs (low measures for NSI, NMI, NNC, ADI, and ANA) use inheritance sparingly; this characteristic can be expected of class libraries that may include functionally unrelated classes. The extent of internal reuse leveraged through inheritance is captured by the measure of functional abstraction (MFA) and measure of attribute abstraction (MAA) metrics. Since these metrics are ratios, their values are bounded between 0.0 and 1.0, with greater internal reuse yielding values closer to 1.0. From an encapsulation perspective, it is desirable that no data attributes of a class be directly accessible by users of its objects; therefore, a value closer to 1.0 is also desirable for the data access metric, DAM.

When analysis and development focus on individual classes, metrics in Tables 3 and 4 provide important information about the classes. The depth of inheritance (DOI), number of children (NOC), and number of ancestors (NOA) metrics (Table 3) aid in understanding the place of a class in the system of classes. These metrics also help you understand the extent of the ripple effect of changes to classes. Classes with low values for the DOI metric and large values of the NOC metric represent "key" classes in the system. Therefore, you should exercise caution when making changes to such classes. The metrics in Table 4 are class internal metrics used to assess the details of class characteristics. The number of methods (NOM), class interface size (CIS), and number of inline methods (NOI) metrics provide information about the importance, functionality, and method complexities of a class.

The difference between the NOM and CIS metrics is that, while the NOM metric is a measure of all methods defined in a class, the CIS metric only counts the methods defined in the public interface of a class. Classes with larger values for the NOM and CIS metrics are functionally important classes.

Sometimes, classes with a large number of methods represent poorly designed classes used as a general dumping ground for unrelated functionality. This type of problem class can be identified by comparing a class's method count with the system-wide average. It is a good heuristic to examine classes whose NOM values deviate exceptionally from the system-wide average NOM value.

Another useful metric is the class size in bytes (CSB) metric. It is important to consider for classes from which a large number of objects are instantiated at run time. In a project to develop a reusable graphics-rendering library, we used this metric to identify and correct a design flaw, reducing the run-time memory footprint of applications using the library by close to 50 percent.

In version 1.0 of this library, it was decided that there would be only one root for the entire system -- an abstract class MObject. All classes of the library, including a two-dimensional Point2D class, were derived from MObject. Point2D had only two integer data members holding the x- and y-coordinates, so an instance theoretically required only eight bytes. However, because the class was derived from the abstract class MObject, it carried an additional four bytes for the invisible inherited virtual-function-table pointer. The CSB metric calculated 12 bytes for Point2D objects, but because of eight-byte boundary alignment, the actual allocated size was 16 bytes. Applications using the rendering library commonly create thousands, or even millions, of Point2D objects. In version 2.0 of the library, the design was altered to make Point2D an independent class, bringing both the CSB-calculated size and the actual allocated size down to eight bytes per object. Since Point2D objects are by far the most numerous objects created in an application, this change reduced the application's run-time memory footprint by close to 50 percent and contributed greatly to improving its performance. Without such a metric, the grossly inefficient design of version 1.0 would not have been easily detected.

The direct class coupling (DCC) metric assesses the reusability of classes by measuring a class's dependency on other classes in the system. Dependencies are created by attribute declarations and by method parameters that are instances of other classes. Classes with large DCC values are harder to understand, reuse, and maintain than independent, cohesive classes. The DAC metric is a specialization of DCC that measures only attribute-based dependencies.

The class entropy complexity (CEC) metric is an information theory-based metric we developed and validated on several commercial projects. CEC measures the complexity of a class based on its information content, which is calculated by determining the frequencies of the different information tokens used in the class's definition. A higher metric value implies higher information content, and thus a class that is likely to be harder to understand. We used this metric to identify, during the design stage, which classes were likely to be highly complex. When identified early, complex classes can be redesigned, assigned to more experienced developers, or given a greater share of the testing effort.

It is important that use of metrics in evaluating and analyzing object-oriented systems be clearly understood. Metrics are used for rationalizing and charting the development of a system, rather than as a standard for evaluating the performance or judging the "overall" quality of a system. The values of the metrics are influenced by several interrelated factors, such as the domain of the problem and solution, the objectives and goals set for a product, tools and techniques employed, and the people who produce the product. Therefore, it is not easy to attribute the acceptability or unacceptability of metric measures to any single specific cause. Generally, different acceptable and unacceptable ranges for metric values are required based on the goals and objectives of the people using them and the domains of the systems.

QMOOD++: An Automated Metric Data Collection Tool

We decided to use C++ as the target language for which QMOOD++ parses, collects, and analyzes metric data because C++ is the programming language of choice for industrial and academic software development. QMOOD++ (see Figure 2) automates the process of source selection, metrics data collection, visualization of system structure, and display of results. The tool requires that class definitions (that is, methods with their parameters and attributes that, together, make an object) be represented using C++ syntax. A C++ parser does a syntactic analysis of classes to build an Abstract Syntax Tree (AST). The AST is then traversed to collect the data used in calculating the metrics. Metric measures are provided for classes, groups of classes (clusters), and the overall system architecture. The tool supports all of the object-oriented metrics in Tables 1 through 4, along with several additional metrics not described in this article. QMOOD++ allows calculated metric values to be compared automatically against other versions or systems that have been previously analyzed.

We have used QMOOD++ to collect metric data and analyze more than 50 large commercial and academic object-oriented systems from different sources and serving different objectives. Several systems analyzed had between 150 and 400 classes.

The ad hoc or indiscriminate use of metrics can lead to erroneous conclusions. Typically, product metrics are influenced by the domain for which the software is developed. Therefore, the values of the metrics can differ significantly for systems from different domains and, in some cases, for projects within a domain. Comparisons between computed metric values should be made only between projects that were developed for similar requirements and objectives or that have comparable solutions.

Microsoft's Foundation Classes (MFC) and Borland's ObjectWindows Library (OWL) are two Windows frameworks we have used as representatives of commercial object-oriented systems in several metrics-based studies. The periodically released versions of these commercial frameworks, developed over a period of many years, provided readily available projects that address similar requirements; therefore, their metric results can be compared.

Using the metrics in Tables 1 through 4, we used QMOOD++ to evaluate the five publicly released versions of MFC and three versions of OWL. Table 5 describes the metrics data we collected for the eight releases. The systems were analyzed using the publicly distributed header files (*.h) that contain the definitions of the classes that constitute the system.

Usually, the main reasons for releasing new versions of existing software are to add new features or fix bugs. The early versions of new software are generally "feature" releases, as the software is modified to enhance capabilities, add new features, or incorporate additional requirements. The initial structure of a system is also generally unstable and can undergo significant rework during the first releases.

The metric data in Table 5 contains significant results and trends for characterizing domain-specific reusable frameworks. For instance, significant changes are seen across MFC releases 1.0 to 4.0 and OWL releases 4.0 through 5.0 in the metrics that characterize functionality (such as the number of classes and the various method-count metrics) and in those that characterize structure. After the initial feature releases, software is expected to mature, having incorporated most required capabilities. New versions of mature software are generally released to deliver bug fixes and improve robustness and reliability; these releases may also attempt to reduce the complexity of the software. New releases of mature software are characterized by small, less dramatic changes in the metric values. This trend is reflected in the MFC metric values from versions 4.0 to 5.0.

Significant differences can be noted between the characteristics of MFC and OWL from the metrics data in Table 5. As a rule, MFC does not use multiple inheritance, whereas the OWL systems depend significantly on the use of multiple inheritance. The values of the ADI, AWI, ANA, NOH, and NIC metrics in Table 5 indicate that MFC has a narrow, deeply nested inheritance structure, whereas OWL has a wide and shallow inheritance structure. The smallest value of the average depth of inheritance metric for the MFC frameworks is 1.68 in version 1.0, which is greater than the largest value of 1.4 for the ADI metric in OWL 5.0. The values of the NOM, NOD, and CSB metrics characterize MFC overall as a system with larger classes; that is, classes with a large number of methods and data attributes. The classes of the OWL system are smaller. The average number of methods in the latest release of MFC 5.0 is 144 per class, whereas there are only 56 methods per class in OWL 5.0. Also, the size in bytes of an average MFC 5.0 object is 85 bytes, whereas that of an OWL 5.0 object is 46 bytes. The MFC classes make significant use of virtual functions (polymorphism), as indicated by the NOP metric value of 29 for MFC 5.0, compared to a value of 7 for OWL 5.0. Several other such comparisons can be drawn between the two systems based on the values of the metrics.

Conclusion

It is important to remember that a difference in metric values does not in itself make one system or product better than another. Metrics provide an objective means of tracking and determining the types and ranges of empirical values that can influence the development of reusable, flexible, or adaptable products. Armed with the empirical metric values of successful projects, future development can be guided by past experience, helping ensure products that consistently meet quality objectives and goals.

The metrics provided by QMOOD++ can be used from design through maintenance. You can periodically evaluate software using the tool and metrics to ensure the development of products with the desired quality attribute measures based upon internal product characteristics.

DDJ


Copyright © 1997, Dr. Dobb's Journal

