I'm writing again about one of my biggest concerns right now: how to effectively complement automated testing with manual tests.
Automated testing first
As I mentioned in a previous post, we have followed a supposedly uncommon path while developing our test system for Plastic SCM. We started by creating a wide set of automated tests based on the following technologies:
- AutomatedQA's TestComplete: which we use extensively for GUI test automation on Windows (actually on a combination of Windows systems, thanks to VMware). We wrote a whitepaper about functional testing with agile.
- NUnit, the .NET xUnit implementation. We extended it with PNUnit (P for parallel, although maybe distributed would fit better), which has been published together with NUnit 2.5 and is the basis of our core testing system. While we do regular unit testing of classes and methods with NUnit, PNUnit lets us test our software by automating the command line: starting up different servers on different machines, making the clients wait for the servers to be started, creating more complex scenarios by synchronizing clients (on different machines), building load-test scenarios distributed across hundreds of machines, and so on. PNUnit also lets you split a huge NUnit (or PNUnit) test suite into chunks and run the tests in parallel on different machines (or take advantage of multi-core CPUs), reducing the overall test execution time.
- VMware: the foundation that keeps the whole release-testing mechanism under control. We run the test suite under Windows 2000, 2003, XP, XP SP2, Vista, 2008, and soon Windows 7. Then we run it again using SQL Server and MySQL as the database backend (Firebird is the standard one). And again using LDAP authentication, authentication outside a Windows domain (Active Directory) and inside one, plus Sun ONE LDAP. And finally the Linux boxes: Fedora, Ubuntu, SuSE... The only three that run outside VMware are Mac OS on PPC hardware, Mac OS on Intel x86, and Solaris 10 x86. Simply put: it wouldn't be possible without VMware.
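The chunk-splitting idea behind PNUnit can be sketched in a few lines. This is not PNUnit's actual API (its real agents, launchers, and barriers are .NET components); it is a minimal, hypothetical Python illustration of the technique: deal a suite into N chunks and run the chunks concurrently so total wall-clock time drops.

```python
from concurrent.futures import ThreadPoolExecutor

def run_test(name):
    # Stand-in for launching a real test; here every test "passes".
    return (name, "pass")

def split_into_chunks(tests, n):
    """Deal tests round-robin into n chunks, one per worker (or machine)."""
    chunks = [[] for _ in range(n)]
    for i, test in enumerate(tests):
        chunks[i % n].append(test)
    return chunks

def run_chunk(chunk):
    # One worker runs its whole chunk sequentially.
    return [run_test(test) for test in chunk]

if __name__ == "__main__":
    suite = [f"test_{i}" for i in range(10)]
    chunks = split_into_chunks(suite, 3)
    # The three chunks execute concurrently, like PNUnit's distributed runners.
    with ThreadPoolExecutor(max_workers=3) as pool:
        results = [r for part in pool.map(run_chunk, chunks) for r in part]
    print(len(results))  # all 10 tests ran, in roughly a third of the time
```

In the real system the "workers" are separate machines (or VMware guests), and the pay-off grows with the size of the suite.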
Automated is not enough
Well, maybe a 100% complete test suite would do, but I bet you'll never have the time to cover all the possible combinations of a medium-sized software package (needless to say, larger ones will be even harder).
So, even after all the time spent on the combinations and automated tests described above, there are a number of issues that are better detected manually. The reasons include:
- Some errors are better detected by humans: checking whether a label overlaps a text box under a certain OS or screen resolution takes a human tester a few seconds, but it's hard to automate.
- Usability issues: when you develop software you tend to assume the user will know as much about your package as you do, but this is not true. So things you consider obvious will sometimes be hard for the average user to understand. A human tester (not contaminated by the dev team) will find these issues. Usability testing is a whole different world, so I'm not going to talk about it here.
- Negative conditions, random usage, degenerate scenarios: what if you enter a totally wrong number, or click at an unexpected location? Certainly these can be automated too, but most of the time automated tests (in our case, I'd say especially the GUI ones) check how the system should work when the right input is entered, but fail to check alternative or incorrect usage paths. What if you click too fast while a view is still being loaded? These cases can definitely be automated, but if you don't have them yet, go manual in the meantime.
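To make the "negative conditions" point concrete, here is a small hypothetical example (the `parse_port` function is invented for illustration, not part of any real suite): the happy-path assertion is the kind of check automated suites usually have, while the loop below it covers the incorrect-input paths that tend to be skipped.

```python
# Hypothetical input parser standing in for any routine that receives user input.
def parse_port(text: str) -> int:
    value = int(text)  # raises ValueError on non-numeric input
    if not 1 <= value <= 65535:
        raise ValueError(f"port out of range: {value}")
    return value

def rejects(bad_input: str) -> bool:
    """True if parse_port refuses the input instead of accepting garbage."""
    try:
        parse_port(bad_input)
        return False
    except ValueError:
        return True

# The happy path most automated suites already cover:
assert parse_port("8080") == 8080

# The negative paths that often go untested until a manual tester hits them:
for bad in ["not-a-number", "", "70000", "-1", "0"]:
    assert rejects(bad), f"accepted invalid input: {bad!r}"
```

Once a manual session uncovers a failure like one of these, it is cheap to freeze it into the automated suite so it never regresses.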
Let's test manually
Let me explain: most of the teams I know face the opposite challenge: they test manually and they'd like to automate. So, when I started looking for manual-testing documentation, I really wanted to avoid one thing: doing manually what could be automated. For most of the teams I found, automation was their future mainly because they lacked the resources, the tools, or the time to automate their test cases.
Right now we can test a release in less than five hours using our automated test suite (making extensive use of parallelization). Doing the same manually would simply be impossible: it would take weeks.
But at the end of the day, public release after public release, we used to spend weeks testing manually in order to find the potential issues not covered by the (hopefully ever-growing) test suite.
How could this be improved? I mean, I didn't need a technique or a set of tools to write test cases which could be fully automated; what I needed was a set of procedures/rules/best practices to make better use of the time we were going to spend on manual testing.
So I started reading Agile Testing: A Practical Guide for Testers and Agile Teams. It covers a wide range of interesting topics, especially focusing on how to integrate testing (previously considered a process-centric or rigid way of working) into agile teams.
And then, at some point, the book mentions exploratory testing.
Enter exploratory testing
What is exploratory testing all about? Simply put: traditional random testing, structured. Let me explain: it gives you some rules so you can organize a manual testing process which basically consists of letting the testers test the software freely instead of constraining them with a script (which could be automated). For me, exploratory testing is to testing what agile methods are to coding: it can look like code-and-fix, but there's some order that prevents chaos from taking over.
So exploratory testing seemed to be the perfect complement to our automated testing process: it doesn't constrain testers with a fixed set of steps but lets them apply their intelligence within a defined environment.
Exploratory testing explained
Exploratory testing is based on well-planned, time-constrained sessions. The team plans in advance what needs to be tested and writes a set of charters, or test goals. They define what needs to be tested but not how to test it (you can always offer some ideas to help, but you can't create a script).
There are two important topics here:
- Tests are planned: which means you get rid of chaotic, random testing.
- Test sessions are time-boxed: you limit the time to test a given functionality or part of your software, so you clearly state that results should come at a fast pace. Of course, really difficult scenarios won't be so easy to constrain, but normally you can expect results during the first minutes or hour of testing, which also helps keep people focused on what they're doing.
It reminds me of Scrum.
Each tester takes a charter and starts testing the assigned part of the software, creating a log called a testing session in which they record how long it took to set up the test, and how long they spent actually testing, writing documentation, and exploring unrelated but valuable parts of the software (you start with one thing and continue with something else you suspect might fail).
The testing session also records all the issues (minor problems) and bugs detected.
So, there's probably nothing special about exploratory testing (and my explanation is very simplistic), but it helps organize the whole testing process: it tells you what you should plan and how (constrained times, normally no more than two hours per session), and also what results to expect and how to record them.
We record all the test sessions on our internal wiki and link each issue/bug to our bug tracking system, setting a specific detection method so we can later determine how many issues and bugs were found through exploratory testing. Linking from the wiki to the issue tracker and back makes navigation easier.
We've only been using exploratory testing for the last three sprints, but so far the results are really promising. We needed a technique capable of getting the best out of the testers, not one that simply turns them into replacements for automated tests.
So far exploratory testing has been easy to adopt and get started with, and we're using it to test our upcoming Plastic SCM release. In the meantime, of course, the automated test suite keeps growing, but manual tests help us find issues (and even define new automated tests to cover the problems detected) and have brought order to a previously chaotic activity.