Recruiting Software Testers, Part 2



January 01, 2000

One of the most difficult functions any manager has is selecting good staff. Decisions made in the hiring process ultimately will make or break the mission of the group–and, in the long run, the company.

Last month’s article discussed several fundamental factors to consider when seeking potential software testers. After initially defining staffing needs, a manager must establish requirements for the job, examine the motivations of people wanting to get into software testing, and gather information about–and phone screen–job candidates.

Ultimately, though, staffing decisions usually come down to the results of a rigorous interview process. How does the candidate approach testing, and how deep is her knowledge of the field? Does he have project-management experience? How does she relate to her peers, supervisors and staff? Are his bug reports comprehensive and insightful, or terse and ungrammatical? How well does she perform on tests and puzzles specially designed for candidates? These are the key questions that will separate the qualified from the unqualified.

Testing Philosophy

Once I’ve done my homework on the résumé and ascertained the basics about the candidate’s education and past employment, I delve into his testing knowledge and philosophy. For supervisory or senior positions, I ask the following questions:

I’m not looking for the one right answer about how testing should be done. I simply want to know if the candidate has thought about these issues in depth, and whether his views are roughly compatible with the company’s.

These questions, for example, are designed for a company that focuses on testing with little regard for process standards. Therefore, the candidate’s answers should assure me that he would be comfortable working in a group that doesn’t follow process standards such as ISO 9000-3 or the Capability Maturity Model.

Technical Breadth

After covering philosophy and knowledge, I evaluate the candidate’s technical breadth. Though the actual questions depend on the particular company and application area, the following elicit the many facets of an interviewee’s experience:

The answer to "Should every business test its software the same way?" indicates a candidate’s open-mindedness and breadth of exposure to the field. I believe the correct answer is no, and I expect to hear that more rigorous testing and process management should be applied to life-critical applications, than the here-today, new-version-tomorrow web-based application.

A candidate also should believe that different application issues call for different approaches. For example, testing a financial application that is written in COBOL and works with a huge database would require different techniques than those used to test the interactive competence of a word processor. Also, an exceptional candidate should discuss the different paradigms of software testing, or how different people view core issues in the field.

Within the black-box world, for instance, James Bach identifies domain testing, stress testing, flow testing, user testing, regression testing, risk-based testing and claim-based testing as separate techniques (Tripos: A Model to Support Heuristic Software Testing, 1997, available at http://www.stlabs.com/newsletters/testnet/docs/tripover.htm). However, in my course on testing, I identify nine paradigms that aid testers in determining the different criteria that create effective test cases or suites: domain testing, stress testing, risk-based testing, random testing, specification-driven testing, function testing, scenario-driven testing, user testing and security testing. A candidate shouldn’t believe that there is one correct partitioning of paradigms but should recognize that different groups with different approaches to testing can both be right.
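
To make one of these paradigms concrete, here is a minimal Python sketch of domain (boundary-value) testing. The quantity field, its limits of 1 to 999 and the quantity_is_valid() check are hypothetical illustrations, not drawn from any particular product.

# Minimal sketch of domain (boundary-value) testing: probe just outside,
# on, and just inside each boundary of a valid input range.

def boundary_values(low, high):
    """Classic domain-testing points around the two boundaries."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def quantity_is_valid(quantity):
    """Hypothetical stand-in for the input check under test."""
    return 1 <= quantity <= 999

if __name__ == "__main__":
    for q in boundary_values(1, 999):
        expected = 1 <= q <= 999          # oracle: the documented limits
        actual = quantity_is_valid(q)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}  quantity={q}  expected={expected}  actual={actual}")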

When I interview senior candidates, I want to find out their opinions on common testing issues and hear a description and evaluation of the tools they’ve used. I’m not looking for agreement. Rather, I want to determine whether the candidate has a well-developed, sophisticated point of view. The data-oriented questions, for example, are excellent for probing a candidate’s sophistication in the testing of databases and in the test tools used for them. Of course, the questions need to be changed to match a candidate’s skill set and the class of application. There would be little value in asking a highly skilled tester or test manager of interactive applications, such as games or word processors, about databases and their test tools.

Project Management

As a matter of course, supervisory candidates must be queried on their personnel and project-management philosophy. However, I also do the same for potential mid-level or senior testers. At some point in seniority, a tester becomes largely self-managing, assigned to a large area of work and left alone to plan the size, type and sequence of tasks within it. Peter Drucker (The Effective Executive, HarperCollins, 1966) defines any knowledge worker who has to manage his or her own time and resources as an executive. I’ve personally found it a great insight to recognize the managerial nature of my mid-level contributors’ work.

Here, then, are some questions for supervisors or self-managers:

Staff Relations

This series is, again, primarily for supervisory staff. However, I also ask managerial candidates the test-group manager questions, and their answers are quite enlightening.

If a candidate’s picture of the ideal manager is dramatically different from my impression of him or from his image of himself, I would need to determine whether this difference is a red flag or a reflection of genuine humility. On the other hand, I would not immediately assume that a candidate whose description exactly matches his perception and presentation of himself is pathologically egotistical. It’s possible he’s just trying to put himself in a good light during the interview. This is fine, as long as he doesn’t exaggerate or lie. Finally, if a candidate’s description of the ideal manager differs fundamentally from the expectations of the company, then I wonder whether this person could fit in with the company culture.

Tests and Puzzles

Some managers use logic or numeric puzzles as informal aptitude tests. I don’t object to this practice, but I don’t believe such tests are as informative as they are thought to be. First, there are huge practice effects with logic and numeric puzzles. I had my daughter work logic puzzles when she was in her early teens, and she became quite good at solving them.

Her success didn’t mean she was getting smarter, however. She was simply better at solving puzzles. In fact, practice effects are long-lasting and more pronounced in speeded, nonverbal and performance tests (Jensen, A.R., Bias in Mental Testing, The Free Press, 1980). Second, speed tests select for mental rabbits, those who demonstrate quick–but not necessarily thorough–thinking. Tortoises sometimes design better products or strategies for testing products.

A Simple Testing Puzzle

An old favorite among commonly used speed tests is G.J. Myers’ self-assessment (The Art of Software Testing, John Wiley & Sons, 1979). The candidate is given an extremely simple program and asked to generate a list of interesting test cases. The specific program involves an abstraction (a triangle): it reads three numbers representing the lengths of a triangle’s sides and reports whether the triangle is scalene, isosceles or equilateral. I prefer this puzzle because it tests something testers will actually do: analyze a program and figure out ways to test it. However, there will still be practice effects. Average testers who have worked through Myers before will probably get better results than strong testers who have never seen the puzzle. Additionally, I suspect that cultural differences will also produce different levels of success, even among skilled testers. Someone who is used to dealing with abstractions, such as geometric figures or the logical relationships among numbers, has an advantage over someone who tests user interfaces or a product’s compatibility with devices.
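
For readers who haven’t worked through Myers, the following minimal Python sketch gives the flavor of the exercise: a triangle classifier plus a handful of the test cases a strong candidate might list. The classifier and the specific inputs are illustrative assumptions, not Myers’ own program or answer key.

# Sketch of a triangle classifier in the spirit of Myers' exercise, plus a
# few of the kinds of test cases a strong candidate might propose.

def classify_triangle(a, b, c):
    sides = sorted((a, b, c))
    if sides[0] <= 0:
        return "invalid"                 # non-positive side length
    if sides[0] + sides[1] <= sides[2]:
        return "not a triangle"          # fails the triangle inequality
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

if __name__ == "__main__":
    interesting_cases = [
        (3, 4, 5),      # ordinary scalene
        (2, 2, 3),      # isosceles
        (5, 5, 5),      # equilateral
        (1, 2, 3),      # degenerate: two sides sum exactly to the third
        (1, 1, 50),     # violates the triangle inequality outright
        (0, 4, 5),      # zero-length side
        (-3, 4, 5),     # negative side
    ]
    for case in interesting_cases:
        print(case, "->", classify_triangle(*case))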

Bug Reports

Writing a bug report is one of the most basic and important parts of a tester’s job. Nonetheless, there is a lot of variation in the quality of bug reports, even among those written by testers who have several years’ experience.
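
As a rough yardstick, an excellent report typically has a structure along the following lines; the product name, build number and steps are hypothetical illustrations, not a real defect.

    Summary:          Crash when saving a file whose name contains only spaces
    Product/build:    ReportWriter 2.1, build 418 (hypothetical)
    Severity:         High; unsaved changes are lost
    Steps to reproduce:
      1. Open any document and make a change.
      2. Choose File > Save As.
      3. Type three spaces as the file name and press Enter.
    Expected result:  An "invalid file name" message and a chance to retype.
    Actual result:    The program terminates and the unsaved changes are lost.
    Notes:            Reproduced five times out of five; does not occur if the
                      name contains any non-space character.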

To test this ability, find a reasonably straightforward bug in part of your software that is fairly easy to understand and have the candidate write a report. If none of your product’s bugs fit the bill, www.bugnet.com can provide one. It’s easy to recognize an excellent bug report; however, having sample reports from your staff can help you determine the quality of the attempt.

There are many other good puzzles and tests in use, and all are based on a common premise: If you can find a way to present a portion of the job that the tester will actually do, you can see how well the tester does it. You have to make the test fair by designing it so that someone who doesn’t know your product can still do well. That’s challenging. But if you can come up with a fair test, the behavior that you elicit will be very informative.

Having comprehensively questioned and tested all your promising candidates, you’ll have ample data with which to make your decision–and choose a winner.

Another Simple Testing Test

Draw a simple Open File dialog box on a whiteboard, explaining: “This is an Open File dialog. You can type in the file name (where it says File4 at the bottom), or you can click the Open button to open it.” Hand the marker to the candidate and ask her how she would test the dialog. Make it clear that she can have as much time as she wants and can make notes on the whiteboard or on paper, and that many candidates take several minutes to think before they say anything. When the candidate begins presenting her thoughts, listen. Ask questions to clarify, but don’t criticize or challenge her. When the tester pauses, let her be silent. She can answer when she’s ready.

This is a remarkable test in the extent to which answers can vary. One candidate might stay at the surface, pointing out every flaw in the design of the dialog box: There is no cancel button or dialog title, no obvious way to switch between directories, and so on. Another candidate might skip the user-interface issues altogether and try testing the opening of large and small files, corrupt files, files with inappropriate extensions or remote files (specified by paths that she types into the file name box, such as d:\user\remote\File4).
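
To give a concrete flavor of that second, data-oriented angle, here is a minimal pytest-style Python sketch. The open_file() function is a hypothetical stand-in for whatever actually opens files in the product, and the paths merely illustrate the kinds of inputs worth trying.

# Minimal sketch of data-oriented tests behind an Open File dialog.
# open_file() is a hypothetical stand-in for the product's file-opening code;
# the paths below only illustrate the kinds of inputs worth trying.
import pytest

def open_file(path):
    """Hypothetical stand-in: try to read the file, reporting success or a
    graceful rejection. A real harness would drive the product instead."""
    try:
        with open(path, "rb") as f:
            f.read(1)
        return "opened"
    except OSError:
        return "rejected"

@pytest.mark.parametrize("path", [
    "File4",                      # the name shown in the dialog
    "a" * 255 + ".txt",           # very long file name
    "empty.txt",                  # zero-byte file
    "huge_2gb.dat",               # very large file
    "corrupt_header.doc",         # corrupt file of a supported type
    "picture.exe",                # misleading extension
    r"d:\user\remote\File4",      # remote path typed into the name box
])
def test_open_file_fails_gracefully(path):
    # Weak oracle: the open must either succeed or be rejected cleanly,
    # never crash or raise an unhandled error.
    assert open_file(path) in ("opened", "rejected")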

Because of the variation in responses, over time I’ve changed how I use this test. Now, I present the dialog as before, giving the candidate a marker and whatever time she needs. I compliment the analysis after she has finished her comments, regardless of her performance. Then, I show her the types of tests that she missed. I explain that no one ever finds all the tests and that sometimes people miss issues because they are nervous or think they have been rushed. I spend most of the time showing and discussing different types of tests and the kind of bugs they can find. Finally, I erase the whiteboard, draw a Save File dialog that is just as poorly designed, and ask the tester to try again.

Because the differential practice effects are minimized by the initial practice test and coaching, the real test is the second one. The candidate receives feedback and is reassured that she isn’t a dolt. In fact, most testers are substantially less nervous the second time through.

The second test allows me to find out if this candidate will be responsive to my style of training. Did the candidate understand my explanations and do a substantially better job in her next attempt? If the answer is yes, I have a reasonable candidate (as measured by this test). If the candidate’s second analysis wasn’t much better than the first, she is unlikely to be hired. She might be a bright, well-intentioned, interesting person, but if she doesn’t learn when I teach, she needs a different teacher.

Occasionally, I have dispensed with the second test because the candidate did impossibly poorly during the first test or was extremely defensive or argumentative during my explanation of alternative tests. This usually means that I’m finished with the candidate. I’ll spend a little time looking for a polite way to wrap up the interview, but I won’t hire him.
