Today, we announce the winners of this year's Jolt Awards in the Testing Tools category. As I am every year, I was impressed by the quality of implementation of most of these products (although, frankly, distressed to see how many of them are Windows-only solutions). Quality is doubly important in test tools (as it is, for example, in compilers and debuggers) because you don't want defects in those products to send testers and developers on wild-goose chases. In this regard, I have no qualms with the products that were nominated, nor with the winners this year. They're all excellent implementations of useful products.
What does vex me, though, is that the year-over-year advances are by and large unoriginal. They simply take existing concepts and apply them to new platforms. This year, we're talking about more mobile and more cloud. Terrific! Those platforms need testing tools. But maybe they need testing tools other than this familiar list of:
- Code inspection
- Unit testing
- UI testing
- Browser testing
- Load and performance testing
How about something really original, such as automated black box testing? Fuzz testing? Configuration testing? Security and penetration testing? Disaster recovery testing? Personally, I'd like to see new categories like privacy testing become part of the language. (When you turn off an app, does it remove from RAM any elements it decrypted during operation? Right now, that's an incredibly difficult thing to test, and none of the tools allow you to do it unless you write a special script manually and have them run it, which essentially turns them into expensive testing harnesses.)
Even more troubling than the missing support for important forms of testing is the lack of true automation. Automation, as defined in the testing tools industry, is not what it is elsewhere. It primarily means the ability to record and replay tests. This is not exactly a high bar to clear, as pretty much every product can serve as a harness and run tests when told to do so by a script or a continuous integration server. True automation comes on the front end, not the back end.
For example, when I'm testing a Web app, I should be able to run a module that automatically generates tests that check for security problems (and I mean the whole gamut: SQL injection, null-string attacks, CRLF attacks, oversized payloads, invalid REST commands, cookie tampering, multiple logins, buffer overflows, etc.). When I point the software at a function that takes an integer parameter, I'd like it to create tests that run the function with edge-case values (such as 0, 1, -1, and the maximum and minimum integers). If it takes strings, I want automated tests that test for null, empty strings, strings of 30,000 characters, strings containing only 0s, and strings with random values. If the app reads an input file, then I want tests that exercise how it handles a missing file, an empty file, a file bigger than 2GB, a file with only spaces in its name, and a file with random contents. If it's a text file, how does it handle foreign characters, LF vs. CRLF, etc.?
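The front-end generation described above can be sketched in a few lines of Python. Everything here is invented for illustration: parse_percentage stands in for any function under test, and the edge-case lists are the sort of values a generator might emit for integer and string parameters.

```python
def parse_percentage(n):
    """Hypothetical function under test: accepts integers 0..100 only."""
    if not isinstance(n, int) or not 0 <= n <= 100:
        raise ValueError("out of range: %r" % (n,))
    return n

# Edge-case values a generator might emit for a 32-bit integer parameter.
INT_EDGE_CASES = [0, 1, -1, 2**31 - 1, 2**31, -(2**31), -(2**31) - 1]

# Edge cases for a string parameter, per the wish list above.
STR_EDGE_CASES = [None, "", "0" * 100, "x" * 30000]

def run_generated_tests(func, cases):
    """Run func against each case, capturing the result or the exception
    so a reporting front end could drill into each outcome."""
    results = []
    for value in cases:
        try:
            results.append((value, "ok", func(value)))
        except Exception as exc:
            results.append((value, "raised", type(exc).__name__))
    return results

report = run_generated_tests(parse_percentage, INT_EDGE_CASES)
```

The point of the sketch is that the tester supplies only the function; the values, the execution, and the exception capture are the tool's job.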
I could do this forever. There are literally thousands of obvious fault conditions whose tests should be automated by test tools, and none of them are part of the current herd's offerings. I'd certainly also want what the herd presently does well with regard to these tests: capturing the exception, measuring execution time when unexpected input is used, and reporting the results in a nice interface with drill-down options. But the automation needs to happen up front so that I'm not writing millions of scripts to test all the different variations myself. This is the perfect thing for software to automate, removing the human bias that inevitably informs test scripts.
Software today is far more reliable than it was even 10 years ago, despite a steep run-up in complexity. However, it's still a long way from where it could be. Testing tools deserve part of the credit for the success so far, but also certainly part of the blame for not providing deeper and richer testing.