Why Aren't There Better Testing Tools?


Today, we announce the winners of this year's Jolt Awards in the Testing Tools category. As in years past, I was impressed by the quality of implementation of most of these products (although, frankly, distressed to see how many of them are Windows-only solutions). Quality is doubly important in test tools (as it is, for example, in compilers and debuggers) because you don't want defects in those products to send testers and developers on wild-goose chases. In this regard, I have no qualms with the products that were nominated, nor with this year's winners. They are all excellent implementations of useful products.


What does vex me, though, is that the year-over-year advances are by and large unoriginal. They simply take existing concepts and apply them to new platforms. This year, we're talking about more mobile and more cloud. Terrific! Those platforms need testing tools. But maybe they need testing tools beyond this familiar list:

  • Code inspection
  • Unit testing
  • UI testing
  • Browser testing
  • Load and performance testing

How about something really original, such as automated black box testing? Fuzz testing? Configuration testing? Security and penetration testing? Disaster recovery testing? Personally, I'd like to see new categories such as privacy testing become part of the vocabulary. (When you shut down an app, does it remove from RAM any elements it decrypted during operation? Right now, that's an incredibly difficult thing to test, and none of the tools lets you do it unless you write a special script manually and have them run it, which essentially turns them into expensive testing harnesses.)
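
To make the privacy-testing idea concrete, here is a minimal sketch, in C, of the kind of check such a tool might generate on its own. It is a vastly simplified version of the real problem (it examines only a buffer the test can still see, not freed heap pages or swap), and session_t and session_close() are hypothetical stand-ins for whatever the application under test actually does:

/* Hypothetical sketch: does cleanup scrub decrypted data from RAM?
 * session_t and session_close() are stand-ins for the app under test. */
#include <assert.h>
#include <string.h>

typedef struct {
    unsigned char plaintext[64];    /* holds decrypted material in use */
} session_t;

/* The cleanup routine under test, which is supposed to scrub its
 * buffer. (Production code should prefer memset_s() or
 * explicit_bzero(), since an unread memset() can be optimized away;
 * here the assert loop below reads the buffer, so the store survives.) */
static void session_close(session_t *s) {
    memset(s->plaintext, 0, sizeof s->plaintext);
}

int main(void) {
    session_t s;
    memcpy(s.plaintext, "decrypted secret material", 25);

    session_close(&s);

    /* The generated privacy test: no plaintext bytes may remain. */
    for (size_t i = 0; i < sizeof s.plaintext; i++)
        assert(s.plaintext[i] == 0);
    return 0;
}

Even a trivial check like this is something no current tool will write for you.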

Even more troubling than the missing forms of testing is the lack of true automation. Automation, as the testing-tools industry defines it, is not what it is elsewhere: it primarily means the ability to record and replay tests. That is not exactly a high bar to clear, as pretty much every product can serve as a harness and run tests when told to do so by a script or a continuous integration server. True automation comes on the front end, not the back end.

For example, when I'm testing a Web app, I should be able to run a module that automatically generates tests that check for security problems (and I mean the whole gamut: SQL injection, null-string attacks, CRLF attacks, oversized payloads, invalid REST commands, cookie tampering, multiple logins, buffer overflows, etc.). When I point the software at a function that takes an integer parameter, I'd like it to create tests that run the function with edge-case values (INT_MIN, INT_MAX, -1, 0, 1). If it takes strings, I want automated tests for null, the empty string, strings of 30,000 characters, strings containing only 0s, and strings with random values. If the app reads an input file, then I want tests that exercise how it handles a missing file, an empty file, a file bigger than 2GB, a file with only spaces in its name, and a file with random contents. If it's a text file, how does it handle foreign characters, LF vs. CRLF, and so on?
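
As a rough illustration of the integer case, here is a minimal C sketch of that kind of front-end generation. parse_age() is a made-up stand-in for the function under test; the point is that the tool, not the tester, supplies the edge-case values:

/* Minimal sketch of front-end test generation for an int parameter.
 * parse_age() is a hypothetical function under test. */
#include <limits.h>
#include <stdio.h>

static int parse_age(int n) {
    return (n >= 0 && n <= 150) ? n : -1;   /* reject absurd ages */
}

int main(void) {
    /* The tool, not the tester, supplies the edge cases. */
    const int edge_cases[] = { INT_MIN, INT_MAX, -1, 0, 1 };
    const size_t count = sizeof edge_cases / sizeof edge_cases[0];

    for (size_t i = 0; i < count; i++)
        printf("parse_age(%d) -> %d\n", edge_cases[i],
               parse_age(edge_cases[i]));
    return 0;
}

A real tool would derive the parameter types from the code itself and generate comparable batteries for strings, files, and the rest.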

I could do this forever. There are thousands upon thousands of obvious fault conditions whose tests the tools should generate automatically, and none of the current herd does so. I'd certainly also want what the herd presently does well with these tests: capturing the exception, measuring execution time when unexpected input is used, and reporting the results in a nice interface with drill-down options. But the automation needs to happen up front, so that I'm not writing millions of scripts to cover all the variations myself. This is the perfect thing for software to automate, not least because it removes the human bias that inevitably informs hand-written test scripts.
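
That back-end behavior, at least, is easy enough to sketch. Here is a rough, POSIX-only harness that runs each case in a child process, so a crash is captured and reported rather than fatal, and times each run; function_under_test() is again a hypothetical stand-in:

/* Rough POSIX sketch: run each test case in a child process,
 * capture a crash instead of dying with it, and time the run. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

/* Stand-in for the code under test: typically faults on NULL input. */
static void function_under_test(const char *s) {
    volatile size_t len = strlen(s);
    (void)len;
}

static void run_case(const char *label, const char *input) {
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return; }
    if (pid == 0) {                 /* child: run the test */
        function_under_test(input);
        _exit(0);
    }
    int status;
    waitpid(pid, &status, 0);       /* parent: collect the result */
    clock_gettime(CLOCK_MONOTONIC, &end);

    double ms = (end.tv_sec - start.tv_sec) * 1e3
              + (end.tv_nsec - start.tv_nsec) / 1e6;
    if (WIFSIGNALED(status))
        printf("%-8s CRASH (signal %d)  %6.2f ms\n",
               label, WTERMSIG(status), ms);
    else
        printf("%-8s ok                 %6.2f ms\n", label, ms);
}

int main(void) {
    run_case("null", NULL);
    run_case("empty", "");
    run_case("plain", "hello");
    return 0;
}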

Software today is far more reliable than it was even 10 years ago, despite a steep run-up in complexity. However, it's still a long way from where it could be. Testing tools deserve part of the credit for the success so far, but also certainly part of the blame for not providing deeper and richer testing.

— Andrew Binstock
Editor in Chief
alb@drdobbs.com
Twitter: platypusguy

