Examining Software Testing Tools


David is a graduate student and Peter is an assistant professor in the School of Computer Science at Florida International University. They can be contacted at [email protected] and [email protected], respectively.


In a world where automatic updates and the latest service packs are quick fixes for common problems, the need for higher quality control in the software-development process is evident, as is the need for better testing tools to simplify and automate the testing process. Why are so many developers seeking automated software testing tools? For one, an estimated 50 percent of the software-development budget for some projects is spent on testing [16]. Finding the right tool, however, can be a challenge. How is one tool measured against another? What testing features are important for a given software-development environment? Does a testing tool really live up to its claims?

In this article, we analyze several software testing tools—JUnit, Jtest, Panorama for Java, csUnit, HarnessIt, and Clover.Net. The first three tools were developed to work with Java programs, while the latter three work on C# programs. We chose tools that ranged from freeware to commercial, and from basic to sophisticated. Our focus in selecting the tools—as well as the tests we performed—was on class-based testing. Our intention here is not to provide a comprehensive comparison of a set of testing tools, but to summarize the work completed in an independent study in an academic environment.

Software Testing Overview

Binder defines software testing as the execution of code using combinations of input and state selected to reveal bugs. Its role is limited purely to identifying bugs [2], not diagnosing or correcting them (debugging). The test input normally comes from test cases, which specify the state of the code being tested and its environment, the test inputs or conditions, and the expected results. A test suite is a collection of test cases, typically related by a testing goal or implementation dependency. A test run is an execution of a test suite with its results. Coverage is the percentage of elements required by a test strategy that have been traversed by a given test suite; if a strategy calls for 20 branches to be exercised and a suite traverses 15 of them, for instance, branch coverage is 75 percent. Regression testing occurs when tests are rerun to ensure that the system does not regress after a change [3]. In other words, the system passes all the tests it did before the change.

Components usually need to interact with other components in a piece of software. As a result, it is common practice to create a partial component to mimic a required component. Two such instances of this are:

  • Test driver, a class or utility program that applies test cases to a component to be tested.
  • Test stub, a partial, temporary implementation of a component, which lets a dependent component be tested.

Additionally, a system of test drivers and other tools to support test execution is known as a "test harness," and an "oracle" is a mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test [8].
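To make these roles concrete, here is a minimal Java sketch. The PriceService, PriceServiceStub, Portfolio, and PortfolioDriver names are ours, invented for illustration; they do not come from any of the tools examined here.

// Hypothetical interface for a component the class under test depends on.
interface PriceService
{
    double quote(String symbol);
}
// Test stub: a partial, temporary implementation returning canned data,
// which lets the dependent Portfolio class be tested before the real
// service exists.
class PriceServiceStub implements PriceService
{
    public double quote(String symbol) { return 42.0; }
}
// The component under test.
class Portfolio
{
    private final PriceService prices;
    Portfolio(PriceService prices) { this.prices = prices; }
    double value(String symbol, int shares) { return shares * prices.quote(symbol); }
}
// Test driver: applies a test case to the component and compares the
// actual result with the outcome predicted by the oracle.
public class PortfolioDriver
{
    public static void main(String[] args)
    {
        Portfolio p = new Portfolio(new PriceServiceStub());
        double actual = p.value("ACME", 2);
        double expected = 84.0;  // the oracle's predicted outcome
        System.out.println(actual == expected ? "PASS" : "FAIL");
    }
}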

Class-based testing is the process of operating a class under specified conditions, observing or recording the results, and making an evaluation of some aspect of the class [4]. This definition is based on the IEEE/ANSI definition of software testing [5]. Class-based testing is comparable to unit testing used for procedural programs, where each class is tested individually against its specification. The two main types of class-based testing are specification based and implementation based.

Specification-based testing, also known as black box or functional testing, focuses on the input/output behavior or functionality of a component [3]. Some methods used for specification-based testing include equivalence, boundary, and state-based testing. In equivalence testing, values in the domain are partitioned into equivalence classes, in which any value tested should produce the same result as any other value in that class. Boundary testing focuses on testing the extreme input values, such as the minimum and maximum values along with other values within their proximity. Finally, state-based testing generates test cases from a finite state machine (a statechart, for instance), which models some functionality of the system [3, 7].
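As an illustration, here is a minimal boundary-test sketch written against the Stack class of Listing One, using the JUnit conventions described later in this article; the test class name and its placement in the same package as Stack are our assumptions.

public class StackBoundaryTests extends junit.framework.TestCase
{
    // Boundary: popping from an empty stack (minimum size).
    public void testPopFromEmptyStack()
    {
        Stack st = new Stack();
        assertNull(st.pop());
    }
    // Boundary: pushing onto a full stack (maximum size).
    public void testPushOntoFullStack()
    {
        Stack st = new Stack();
        for(int i = 0; i < Stack.STACK_MAX; i++)
            st.push("x" + i);
        st.push("overflow");  // a full stack should ignore this push
        assertEquals("x" + (Stack.STACK_MAX - 1), st.pop());
    }
}

An equivalence test, by contrast, would pick one representative value from each partition, such as a single push onto a partially filled stack standing in for all pushes onto non-full stacks.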

Implementation-based testing, also known as white box or structural testing, focuses purely on the internal structure of a coded component [3]. The most common implementation-based techniques are based on data flow and control flow analysis of code, which generate test cases based on some coverage criteria [2]. In data flow analysis, each variable definition is matched up with the places in code where the variable is used, constituting a def-use pair. These pairs are used for measuring coverage; for instance, "All-Uses" checks if there is a path from every def to each use of a variable [6]. Control flow testing analyzes coverage based on the order in which statements are executed in the code. For example, branch testing seeks to ensure every outcome of a conditional statement is executed during testing, and statement coverage verifies the number of statements executed.
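To illustrate, the comments below (our annotations, not tool output) mark the structural test targets in the pop method of the Stack class from Listing One:

Object pop()
{
    if(isEmpty())                            // branch: both outcomes must be exercised
        return null;                         // true outcome: reached with an empty stack
    else
        return stackelements[topelement--];  // false outcome: reached with a non-empty
                                             // stack; also a use (and a new def) of
                                             // topelement
}

The initialization topelement = STACK_EMPTY is a def of topelement; its uses include the comparison inside isEmpty() and the array index above. Each such def-use pair is a path that All-Uses coverage requires some test to traverse.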

While both specification-based and implementation-based testing have their advantages and disadvantages, it is generally accepted that some combination of the techniques is most effective [1].

Criteria for Comparison

Here are the criteria we used to evaluate the selected software testing tools:

Support for the testing process. How does a testing tool support the actual testing process? For each of the following items, we note whether they are supported by each of the testing tools. Possible result values for each criterion are: Y (for Yes), N (No), and P (Partial).

  1. Support for creation of stubs/drivers/harnesses.
    1. Does the testing tool provide a mechanism for generating stubs?
    2. Does the testing tool provide a mechanism for generating drivers?
    3. Does the testing tool provide a mechanism for generating test harnesses?
  2. Support for comparing test results with an oracle.
    1. Does the tool provide a mechanism for automatically comparing the results of the test against the expected results?
  3. Support for documenting test cases.
    1. Does the tool assist in keeping documentation of test cases?
  4. Support for regression testing.
    1. Does the tool provide the ability for automated regression testing?

Generalization of test information. What additional information can a testing tool supply to further support the testing process?

  1. Support for the creation of test cases.
    1. Does the tool provide help with creating test cases?
    2. Does the tool provide help with the generation of implementation-based test cases?
    3. Does the tool provide help with the generation of specification-based test cases?
  2. Support for structural coverage.
    1. Does the tool provide a measure of statement coverage?
    2. Does the tool provide a measure of branch coverage?
    3. Does the tool provide a measure of all-uses coverage?

Usability of testing tool. How easily can a given tool be installed, configured, and used? As these criteria are more subjective, these factors are rated for each of the selected tools. Each element listed here is given a ranking from 1 to 5, where 5 is always the positive or more desired result, 3 is an average or ordinary result, and 1 indicates a poor or very negative result. For example, "Learning Curve" is based on the time needed to become familiar with the tool before being able to use it in practice, and a shorter amount of time results in a higher score.

  1. Ease of Use
    1. How easy is it to install the tool?
    2. How user friendly is the interface?
    3. What is the quality of online help?
    4. How helpful are the error messages?
  2. Learning Curve
    1. How long should it take an average programmer to be able to use the tool efficiently in practice?
  3. Support
    1. How quickly can you receive technical support information?
    2. How helpful is the technical support provided?

Requirements for testing tool. What requirements and capabilities of the tools help determine which one(s) might be suitable for your circumstances?

  1. Programming Language(s).
    1. What programming language(s) will the tool work with?
  2. Commercial Licensing.
    1. What licensing options are available and at what costs?
  3. Type of testing environment.
    1. Is the testing environment command prompt, Windows GUI, or part of the development environment?
  4. System Requirements.
    1. What operating systems will the tool run on?
    2. What software requirements does the tool necessitate?
    3. What hardware requirements does the tool impose?

Testing Tools Analyzed

JUnit is a freely available regression testing framework for testing Java programs, written by Erich Gamma and Kent Beck (http://www.junit.org/) [9]. It allows automated unit testing of Java code and provides output to both text-based and Windows-based user interfaces. To set up test cases, you write a class that inherits from a provided test case class, and code test cases in the format specified. The test cases can then be added to test suites where they can be run in batches. Once the test cases and test suites have been configured, they can be run again and again (regression testing) with ease. JUnit runs on any platform, needing just the Java runtime environment and SDK installed.

Jtest is a commercially available automated error-prevention tool from Parasoft (http://www.parasoft.com/). It automates Java unit testing using the JUnit framework, as well as checks code against various coding standards [11]. After a Java project is imported into Jtest, Jtest analyzes the classes, then generates and executes JUnit test cases. The generated test cases attempt to maximize coverage, expose uncaught runtime exceptions, and verify requirements. Additional test cases can be added to those produced by Jtest for more specific testing criteria. Setting Jtest apart from other testing tools is its ability to check and help correct code against numerous coding standards.

Created by International Software Automation (ISA), Panorama for Java is a fully integrated, commercially available software engineering tool that assists you with software testing, quality assurance, maintenance, and reengineering [12] (http://www.softwareautomation.com/java/index.htm). Panorama provides testing analysis, including code branch test coverage analysis, method and branch execution frequency analysis, test case efficiency analysis, and test case minimization. Panorama provides a GUI capture/playback tool to let you execute test cases manually using the application's interface and then later play them back again. In addition to its testing capabilities, Panorama generates various charts, diagrams, and documents, which help measure software quality and make complex programs easier to understand.

csUnit is a freely available unit testing framework for .NET, which allows testing of any .NET language conforming to the CLR, including C#, VB.Net, J#, and managed C++ [13] (http://www.csunit.org/). Test cases are written using the functionality provided by the framework, and the test cases and test suite are identified by adding attributes to the corresponding functions and class. To run the test cases, you launch csUnitRunner and point it at the project's binary file; then the GUI displays the results of the test.

HarnessIt, developed by United Binary, is another commercial unit testing framework for .NET (http://www.unittesting.com/). Like csUnit, it supports any CLR-compliant .NET language [14], and provides documentation on testing legacy code, such as unmanaged C++ classes. Test cases are written in the .NET language of choice using the provided framework and marked with the appropriate attribute. A GUI runs the test cases and displays the results. The interface can be run outside of Visual Studio or integrated to run automatically every time the program is debugged. While HarnessIt provides similar functionality to csUnit and other unit-testing software, as a commercial tool it has superior documentation, features, and flexibility, and a full support staff for assistance.

Clover.Net is a code coverage analysis tool for .NET from Cenqua (http://www.cenqua.com/clover.net/). As such, it highlights code that isn't being adequately tested by unit test cases [15]. Clover.Net provides method, branch, and statement coverage for projects, namespaces, files, and classes, though it currently only supports the C# programming language. A Visual Studio .NET plug-in enables configuring, building, and displaying results of test coverage data. Clover can be used in conjunction with other unit testing tools, such as csUnit or HarnessIt, or with hand-written test code.

Comparative Study

The example class we selected to evaluate the testing tools was a stack, which was simple enough to make analyzing the tool output straightforward. At the same time, it had enough functionality to provide various places to "break" the code. We wrote the stack class in Java (Listing One) and C# (Listing Two) to accommodate testing tools of both languages. The code for the two languages was similar, differing only in the access modifiers needed, the casing of built-in operations, and the fact that the class was stored in a package in Java and a namespace in C#.

Each testing tool had its own unique features and functions for testing; nevertheless, all the tools had the test case in common. The test case is the most basic unit involved in testing, yet all other aspects of testing depend on it. Some tools require you to write the code for the test cases, while other tools generate test cases for you automatically.

JUnit. To set up test cases in JUnit, you write a test class, which inherits from the JUnit TestCase class. Each function in the class represents a test case in which the code creates one or more instances of the class to be tested and executes functions of the class with parameters as required by the test case. Listing Three is a test case that tests the pop function of Stack. Here, two stacks are created: one to test with (stTested) and the other to hold the expected result (stExpected). After the object being tested has been exercised appropriately, one or more assert functions are executed to check whether the expected results match the actual results. In this example, one assert tests that the correct value is being popped off, then another assert checks that the final object matches the expected object. The JUnit framework catches any exceptions that are thrown by the asserts and handles them accordingly.

Test cases are added to test suites where they can be executed in sets, or, if all test case names begin with "test," the test runner automatically creates a test suite with all the test cases and runs them. JUnit provides both text and GUI interfaces for the test runner. Configuration of the test cases takes time, but the core functionality of JUnit is regression testing, where a configured test case can be reexecuted any number of times.
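For example, a suite for the test class of Listing Three, which we assume here is named StackTest, might look like the following minimal sketch in the JUnit 3 style the article uses:

import junit.framework.Test;
import junit.framework.TestSuite;

public class AllStackTests
{
    public static Test suite()
    {
        TestSuite suite = new TestSuite();
        // Add every method of StackTest whose name begins with "test"...
        suite.addTestSuite(StackTest.class);
        // ...or add cases one at a time:
        // suite.addTest(new StackTest("testSimplePop"));
        return suite;
    }
    public static void main(String[] args)
    {
        junit.textui.TestRunner.run(suite());  // the text-based test runner
    }
}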

Jtest. Jtest automates the test case generation process by analyzing each class and creating test cases for each function of the class. These test cases attempt to maximize coverage, expose uncaught runtime exceptions, and verify requirements by using the JUnit framework. Listing Four is a sample test case generated by Jtest where a test stack is created from an empty object array and an object is pushed onto it. The test case then verifies that the resulting Stack is not null. There are many similar white box test cases generated by Jtest, which attempt to execute as much of the code as possible; however, black box testing is not necessarily as automated.

For Jtest to automate black box testing, the specifications must be embedded in the code using the Object Constraint Language (OCL), a specification language whose constraints can be written into code as comments, allowing a class's constraints to be formally specified [3]. Otherwise, you can either modify the automatically generated test cases to test the specification, or write JUnit test cases manually outside of the Jtest-generated cases. Regression testing is performed by rerunning the test cases, much as with JUnit.
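As a rough illustration, a constraint on the pop method of Listing One might be embedded along the following lines. This is our sketch of an OCL-style postcondition; the exact tag and expression syntax Jtest expects may differ, so treat it as illustrative only.

/**
 * @post $pre(boolean, isEmpty()) implies ($result == null)
 * (popping an empty stack must return null)
 */
Object pop() { ... }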

Jtest provides many additional features that enhance the testing process; for example, it automatically creates stub and driver code for running test cases. Also, Jtest measures code coverage to show the completeness of test cases. Finally, it checks code against numerous coding standards and can automatically correct many violations. A testing configuration is completely customizable to include or exclude any of the numerous features/subfeatures.

Panorama for Java. Creating unit test cases like those just described is not possible in Panorama. Instead, you create test cases in Panorama by manually executing them in the application's interface while they are recorded. These test cases can later be replayed, with Panorama reexecuting them through the application's interface, thereby performing regression testing. However, this tests the user interface, not the class itself.

csUnit. In csUnit, test code is marked with attributes that allow the framework to identify the various test components. Test cases are placed in a test class, which is preceded by a [TestFixture] attribute, while the test cases themselves are preceded by a [Test] attribute. An Assert class is provided by the framework to validate whether a test case passed. Listing Five is the testPopEmpty test case. Here, Stack st is instantiated as an empty stack. An Assert.True is then used to test that an attempted pop returns null.

Test cases are run by the csUnitRunner application, which can be started from the Program menu under csUnit or by choosing csUnitRunner from the Tools menu in Visual Studio. In csUnitRunner, test cases are loaded by choosing File|Assembly|Add and pointing to the binary file produced by building the solution to test. Then, clicking the Run icon or choosing Test|Run All starts csUnitRunner, executing each test case and displaying the subsequent results.

HarnessIt. HarnessIt test cases, like csUnit's, are identified with an attribute; only in this case, the attribute is [TestMethod], and the class containing it is marked with a [TestClass] attribute. The method of evaluating a test case, though, is different. Instead of using an Assert class to test validity, a TestMethodRecord object is passed to each test case function. After the test case has been set up, the RunTest method of the TestMethodRecord is called and passed a Boolean expression, which can include unit test commands, comparison operators, and expected results. Listing Six is the test case for testPushFull, which fills Stack s1 with the maximum number of values. Stack s2 is instantiated as a duplicate of Stack s1. Then the code attempts to push another value onto s1. The TestMethodRecord verifies that s1 still equals s2, because no more values can be pushed onto a full stack.

Test files are run from the HarnessIt application, which can be opened from the HarnessIt option in the Program menu. Inside HarnessIt, choosing File|Add Assembly and pointing to the project's binary loads the project for testing. The test cases can then be run by choosing File|Run from the menu or pressing F5. A window pops up with an indicator bar showing progress as the tests are being run. The indicator bar remains green as long as tests pass, and turns red when they fail. Once finished, a report is generated, displaying all test cases in a hierarchical view with their test execution status.

Clover.Net. Clover.Net, as a coverage analysis tool, does not provide a mechanism for creating test cases, but rather, assumes test cases have already been coded. Clover.Net analyzes the test cases when they are executed, to produce coverage results. Clover.Net provides a Visual Studio plug-in, which lets you configure testing options as well as view coverage results directly in Visual Studio. Clover can even highlight the code that hasn't been tested, making it easy to find which areas were missed. In the Clover View, functions are marked green if they have been fully exercised, yellow if they have been partially exercised, and red if they have not been exercised at all. Statistics are given for the percentage of methods, branches and statements covered, and these statistics can be broken down from the solution level to project, class, or even function level.

Test Results

Each tool was examined according to the criteria presented here. Tables 1 through 4 detail the results of these evaluations, which correspond to the four sets of criteria. Tables 1 and 2 deal with features that a tool may or may not have. The result cells for each tool contain a Y (Yes, it has this feature), N (No, it doesn't support this), or P (Partial functionality of the feature available). Table 3 presents more subjective criteria, for which a score of 1 to 5 is assessed, where 5 = Excellent, 4 = Good, 3 = Average, 2 = Subpar, and 1 = Poor. Finally, Table 4 provides additional information on requirements of the tools.

Support for the testing process. These criteria deal with the additional support a tool can provide to make the testing process easier (see Table 1). All of the tools provided a test harness for running the test cases and the ability to perform regression testing. Many of the tools scored a "P" in the Creation of Drivers category because, though they provided a framework for creating drivers, Jtest was the only tool that actually automated the process, along with the creation of stubs. All the unit testing tools provided the ability to check results against a user-supplied oracle. None of the tools provided help for documenting test cases, although some of the commercial tools may offer an additional module for purchase to assist with this.

Generalization of test information. The criteria in this section deal with how well a tool supports the actual testing process, including automatic generation of test cases and various coverage measurements (Table 2). Not surprisingly, the freeware tools did not offer help in these areas, leaving it to you to perform these tasks manually. Jtest again scored well, with the ability to automatically generate test cases and to provide branch and statement coverage results when running them. Panorama and Clover.Net both provided branch and statement coverage for test cases that were run.

Usability of testing tool. These criteria deal with ease of setup and use (Table 3). While the freeware tools scored well in these categories, bear in mind that they don't offer the functionality of the other tools. To substantiate the results and reduce bias, the tools were distributed to a group of computer science students at Florida International University, who ranked the tools on the preselected criteria.

Requirements of testing tool. Table 4 provides additional information on the requirements of each tool. Several of the products can be used with multiple programming languages, and others have similar products available for additional languages. The remaining requirements can help determine which tools would fit a particular environment, and whether hardware and software might need to be upgraded or purchased to use a given tool.

Analysis of Tools

JUnit provided a framework for creating, executing, and rerunning test cases. The documentation was easy to follow and gave thorough instructions on how to configure JUnit to work with existing code and set up test cases accordingly. The interface clearly showed which test cases failed and where, making it easy to find and correct problems. Once test cases had been set up, they could be executed over and over again for regression testing. Like other freely available tools, JUnit does not possess the automated features of tools such as Jtest and Panorama.

JUnit provides a good starting point for testing Java programs. As a free program, there is an obvious cost benefit over more sophisticated tools. Additionally, the straightforwardness of setting up test cases makes the learning curve minimal. Many programmers find this an adequate testing tool; those working on larger applications would most likely seek more advanced tools.

Jtest took the JUnit framework and added automation to it. Jtest was a bit more difficult to set up than JUnit, and we had a hard time getting it configured properly with the documentation provided. On the other hand, Parasoft's technical support was very responsive, usually replying to e-mail within a day. We were given a direct number to a support agent, who remotely logged into our test computer and showed us how to configure Jtest to work with our code.

Jtest automated the process of creating test cases and stubs for our sample code, and provided an environment for running tests. Automation of white box test cases went seamlessly; however, for black box test cases, a little more work was required up front to insert OCL statements into the code. One advantage of doing so is that it produced code documentation along the way. In addition to the testing functionality, the major selling point of this tool was its extensive ability to check code against numerous coding standards and help bring the code into conformance.

Jtest is a full-featured, advanced testing tool and, as such, comes with a price tag. It is intended for advanced users in a sophisticated software-development environment. It provides a great deal of functionality to assist testers and programmers in developing quality products.

Panorama for Java was simple to install and walk through using the sample program provided. However, we were unable to get Panorama to run test cases for our sample class because Panorama does not actually generate test cases or provide a framework for creating them. What it does provide is the ability to record test cases that are manually executed, so that they can be played again later. This, however, is really a means of testing the user interface, whereas we were looking to run test cases against the class itself.

Panorama does provide sophisticated reports and graphs for measuring both the testing process and the code itself. Perhaps in the future, Panorama will incorporate more unit-testing capabilities into its software suite.

csUnit provided a unit testing framework for .NET, in our case, a C# testing solution. The documentation was simple and easy to follow, though not overly extensive. The csUnitRunner interface was well designed with a hierarchical view of the test cases available to navigate, and the standard color-coded results indicating if a test passed or failed. Regression testing was achieved in the same manner as the other unit testing tools.

As a freeware tool, csUnit is appealing for budget-conscious and novice testers. Its simplicity also makes it a good starting point for novices, as well as an acceptable test tool for time-constrained developers.

HarnessIt offered a testing product with functionality similar to csUnit's at a nominal price. It supplied a full help menu that covered the basics of how to test code with the provided framework, as well as more advanced topics, such as using HarnessIt to test legacy code. HarnessIt provided the standard unit-testing functionality and displayed test results in an easy-to-follow manner. HarnessIt can be integrated into an application such that whenever you run the application in debug mode, it automatically launches and runs the test cases.

Overall, we found that HarnessIt did not provide much more functionality than csUnit; however, this does not necessarily mean it is not worth the price. What you are paying for is organized, complete help documentation, as well as a knowledgeable support staff to help you get the tool working with your code.

We were able to integrate the Clover.Net tool into Visual Studio without much hassle by following the instructions in the Clover.Net manual. The manual was well laid out and contained all the necessary documentation for working with Clover. The Clover View provided coverage results within Visual Studio, making it simple to find which areas of the code needed further testing. Clover.Net does not include a framework for creating test cases, but works in conjunction with unit testing tools such as those mentioned previously.

As a commercial tool, Clover.Net offers significant additional functionality over the available freeware tools by providing coverage analysis. Additionally, its price tag isn't much higher than HarnessIt's. This tool would be ideal for a small-to-medium development environment that does not have the budget for, or technical need of, a tool on Parasoft's level.

Conclusion

We have examined here just a few of the available testing tools. As the functionality of software expands and the underlying code becomes more complex, the often neglected area of software testing will be called on more than ever to ensure reliability.

Acknowledgments

Thanks to the students in the Software Testing class at Florida International University, Fall 2004, for their participation in evaluating the tools. The members of the Software Testing Research Group also provided valuable comments that were incorporated in the article.

References

  [1] J.D. McGregor and D.A. Sykes. A Practical Guide To Testing Object-Oriented Software. Addison-Wesley, 2001.
  [2] R.V. Binder. Testing Object-Oriented Systems. Addison-Wesley, 2000.
  [3] B. Bruegge and A.H. Dutoit. Object-Oriented Software Engineering Using UML, Patterns, and Java. Pearson Prentice Hall, 2004.
  [4] P.J. Clarke. "A Taxonomy of Classes to Support Integration Testing and the Mapping of Implementation-Based Testing Techniques to Classes." Ph.D. Thesis, Clemson University, August 2003.
  [5] IEEE/ANSI Standards Committee. Std 610.12-1990, 1990.
  [6] M.J. Harrold and G. Rothermel. "Performing Data Flow Testing on Classes," Proceedings of the 2nd ACM SIGSOFT Symposium on Foundations of Software Engineering, pages 154-163. ACM, December 1994.
  [7] D. Kung, Y. Lu, N. Venugopalan, P. Hsia, Y.Y. Toyoshima, C. Chen, and J. Gao. "Object State Testing and Fault Analysis for Reliable Software Systems," Proceedings of the 7th International Symposium on Reliability Engineering, pages 239-242. IEEE, August 1996.
  [8] BCS SIGIST. "Glossary of Terms Used in Software Testing," http://www.testingstandards.co.uk/Gloss6_2.htm, June 2004.
  [9] E. Gamma and K. Beck. JUnit. http://www.junit.org, June 2004.
  [10] Parasoft. Automating and Improving Java Unit Testing: Using Jtest with JUnit. http://www.ddj.com/documents/s=1566/ddj1027024311022/parasoft.pdf.
  [11] Parasoft. Jtest. http://www.parasoft.com, July 2004.
  [12] International Software Automation. Panorama for Java. http://www.softwareautomation.com/java/index.htm, July 2004.
  [13] Manfred Lange. csUnit. http://www.csunit.org, August 2004.
  [14] United Binary, LLC. HarnessIt. http://www.unittesting.com/, August 2004.
  [15] Cenqua. Clover.Net. http://www.cenqua.com/clover.net/, August 2004.
  [16] I. Sommerville. Software Engineering. Addison-Wesley, 2004.

DDJ



Listing One

//Simple stack implementation - Java
public class Stack
{
    protected static final int STACK_EMPTY = -1;
    protected static final int STACK_MAX = 1000;
    Object[] stackelements;
    int topelement = STACK_EMPTY;
    
    //create empty stack
    Stack()
    {
        stackelements = new Object[STACK_MAX];
    }
    //create stack with the objects stored in the array from bottom up
    Stack(Object[] o)
    {
        stackelements = new Object[STACK_MAX];
        for(int i = 0; i < o.length; i++)
        {
            push(o[i]);
        }
    }
    //create stack as a duplicate of given stack
    Stack(Stack s)
    {
        stackelements = s.stackelements;
        topelement = s.topelement;
    }
    //push object onto the stack
    void push(Object o)
    {
        if(!isFull())
        {
            stackelements[++topelement] = o;
        }
    }
    //pop element off the stack
    Object pop()
    {
        if(isEmpty())
            return null;
       else return stackelements[topelement--];    
    }
    //return true if stack is empty
    boolean isEmpty()
    {
        if(topelement == STACK_EMPTY)
            return true;
        else return false;
    }
    //return true if stack is full
    boolean isFull()
    {
        if(topelement == STACK_MAX-1)
            return true;
        else return false;
    }
}


Listing Two
//Simple stack implementation - C#
public class Stack
{
    public const int STACK_EMPTY = -1;
    public const int STACK_MAX = 100;
    private Object[] stackelements;
    private int topelement = STACK_EMPTY;

    //create empty stack
    public Stack()
    {
        stackelements = new Object[STACK_MAX];
    }
    //create stack with the objects stored in the array from bottom up
    public Stack(Object[] o)
    {
        stackelements = new Object[STACK_MAX];
        topelement = o.Length - 1;
        for(int i = 0; i < o.Length; i++)
        {
            stackelements[i] = o[i];
        }
    }
    //create stack as a duplicate of given stack
    public Stack(Stack s)
    {
        stackelements = s.stackelements;
        topelement = s.topelement;
    }
    //push object onto the stack
    public void push(Object o)
    {
        if(!isFull())
        {
            stackelements[++topelement] = o;
        }
    }
    //pop element off the stack
    public Object pop()
    {
        if(isEmpty())
            return null;
        else return stackelements[topelement--];    
    }
    //return true if stack is empty
    private bool isEmpty()
    {
        if(topelement == STACK_EMPTY)
            return true;
        else return false;
    }
    //return true if stack is full
    private bool isFull()
    {
        if(topelement == STACK_MAX-1)
            return true;
        else return false;
    }   
}


Listing Three
public void testSimplePop()
{
    String strA = "a";
    String[] st = new String[1];
    st[0] = strA;
    Stack stTested = new Stack(st);
    Stack stExpected = new Stack();
    Assert.assertTrue(strA.equals(stTested.pop()));
    Assert.assertTrue(stExpected.equals(stTested));
}


Listing Four
/** Test for method: push(Object)
 * @see Stack#push(Object)
 * @author Jtest
 */
public void testPush2() {
    Object t0 = new Object();
    Object[] var0 = new Object[] {
    };
    Stack THIS = new Stack(var0);
    // jtest_tested_method
    THIS.push(t0);
    boolean var1 = THIS.equals((Stack) null);
    assertEquals(false, var1); // jtest_unverified
}   


Listing Five
[Test]
public void testPopEmpty() 
{
    Stack st = new Stack();
    Assert.True(st.pop()==null);
}


Listing Six
[TestMethod]
public void testPushFull(TestMethodRecord tmr)
{
    String strA = "a";
    Stack s1 = new Stack();
    for(int i = 0; i < Stack.STACK_MAX; i++)
    {
        s1.push(strA + Convert.ToString(i));
    }
    Stack s2 = new Stack(s1);
    s1.push(strA);

    tmr.RunTest(s1.equals(s2), "Testing push to a full Stack");
}

