One of the most difficult functions any manager has is selecting good staff. Decisions made in the hiring process ultimately will make or break the mission of the group and, in the long run, the company.
Last month's article discussed several fundamental factors to consider when seeking potential software testers. After initially defining staffing needs, a manager must establish requirements for the job, examine the motivations of people wanting to get into software testing, and gather information about (and phone screen) job candidates.
Ultimately, though, staffing decisions usually come down to the results of a rigorous interview process. How does the candidate approach testing, and how deep is her knowledge of the field? Does he have project-management experience? How does she relate to her peers, supervisors and staff? Are his bug reports comprehensive and insightful, or terse and ungrammatical? How well does she perform on tests and puzzles specially designed for candidates? These are the key questions that will separate the qualified from the unqualified.
Once I've done my homework on the résumé and ascertained the basics about the candidate's education and past employment, I delve into his testing knowledge and philosophy. For supervisory or senior positions, I ask the following questions:
- What is software quality assurance?
- What is the value of a testing group? How do you justify your work and budget?
- What is the role of the test group vis-à-vis documentation, tech support, and so forth?
- How much interaction with users should testers have, and why?
- How should you learn about problems discovered in the field, and what should you learn from those problems?
- What are the roles of glass-box and black-box testing tools?
- What issues come up in test automation, and how do you manage them?
- What development model should programmers and the test group use?
- How do you get programmers to build testability support into their code?
- What is the role of a bug tracking system?
I'm not looking for the one right answer about how testing should be done. I simply want to know if the candidate has thought about these issues in depth, and whether his views are roughly compatible with the company's.
These questions, for example, are designed for a company that focuses on testing with little regard for process standards. Therefore, the candidate's answers should assure me that he would be comfortable working in a group that doesn't follow process standards such as ISO 9000-3 or the Capability Maturity Model.
After covering philosophy and knowledge, I evaluate the candidates technical breadth. Though the actual questions depend on the particular company and application area, the following elicit the many facets of an interviewees experience:
- What are the key challenges of testing?
- Have you ever completely tested any part of a product? How?
- Have you done exploratory or specification-driven testing?
- Should every business test its software the same way?
- Discuss the economics of automation and the role of metrics in testing.
- Describe components of a typical test plan, such as tools for interactive products and for database products, as well as cause-and-effect graphs and data-flow diagrams.
- When have you had to focus on data integrity?
- What are some of the typical bugs you encountered in your last assignment?
The answer to "Should every business test its software the same way?" indicates a candidate's open-mindedness and breadth of exposure to the field. I believe the correct answer is no, and I expect to hear that more rigorous testing and process management should be applied to life-critical applications than to a here-today, new-version-tomorrow Web-based application.
A candidate also should believe that different application issues call for different approaches. For example, testing a financial application that is written in COBOL and works with a huge database would require different techniques than those used to test the interactive competence of a word processor. Also, an exceptional candidate should discuss the different paradigms of software testing, or how different people view core issues in the field.
Within the black-box world, for instance, James Bach identifies domain testing, stress testing, flow testing, user testing, regression testing, risk-based testing and claim-based testing as separate techniques ("Tripos: A Model to Support Heuristic Software Testing," 1997, available at http://www.stlabs.com/newsletters/testnet/docs/tripover.htm). However, in my course on testing, I identify nine paradigms that aid testers in determining the different criteria that create effective test cases or suites: domain testing, stress testing, risk-based testing, random testing, specification-driven testing, function testing, scenario-driven testing, user testing and security testing. A candidate shouldn't believe that there is one correct partitioning of paradigms but should recognize that different groups with different approaches to testing can all be right.
When I interview senior candidates, I want to find out their opinions on common testing issues and hear a description and evaluation of the tools they've used. I'm not looking for agreement. Rather, I want to determine whether the candidate has a well-developed, sophisticated point of view. The data-oriented questions, for example, would be excellent for probing a candidate's sophistication in database testing and in the test tools used there. Of course, the questions need to be changed to match a candidate's skill set and the class of application. There would be little value in asking a highly skilled tester or test manager of interactive applications, such as games or word processors, about databases and their test tools.
As a matter of course, supervisory candidates must be queried on their personnel and project management philosophy. However, I also do the same for potential mid-level or senior testers. At some point in seniority, a tester becomes largely self-managing: assigned to a large area of work and left alone to plan the size, type and sequence of tasks within it. Peter Drucker (The Effective Executive, HarperCollins, 1966) defines any knowledge worker who has to manage his or her own time and resources as an executive. Recognizing the managerial nature of my mid-level contributors' work has been a genuinely useful insight for me.
Here, then, are some questions for supervisors or self-managers:
- How do you prioritize testing tasks within a project?
- How do you develop a test plan and schedule? Describe bottom-up and top-down approaches.
- When should you begin test planning?
- When should you begin testing?
- Do you know of metrics that help you estimate the size of the testing effort?
- How do you scope out the size of the testing effort?
- How many hours a week should a tester work?
- How should your staff be managed? How about your overtime?
- How do you estimate staff requirements?
- What do you do (with the project tasks) when the schedule fails?
- How do you handle conflict with programmers?
- How do you know when the product is tested well enough?
This series is, again, primarily for supervisory staff. The following questions, however, are ones I ask of candidates for test-group manager, and their answers are quite enlightening.
- What characteristics would you seek in a candidate for test-group manager?
- What do you think the role of test-group manager should be? Relative to senior management? Relative to other technical groups in the company? Relative to your staff?
- How do your characteristics compare to the profile of the ideal manager that you just described?
- How does your preferred work style work with the ideal test-manager role that you just described? What is different between the way you work and the role you described?
- Who should you hire in a testing group and why?
- What is the role of metrics in comparing staff performance in human resources management?
- How do you estimate staff requirements?
- What do you do (with the project staff) when the schedule fails?
- Describe some staff conflicts youve handled.
If a candidate's picture of the ideal manager is dramatically different from my impression of him or from his image of himself, I would need to determine whether this difference is a red flag or a reflection of genuine humility. On the other hand, I would not immediately assume that a candidate whose description exactly matches his perception and presentation of himself is pathologically egotistical. It's possible he's just trying to put himself in a good light during the interview. This is fine, as long as he doesn't exaggerate or lie. Finally, if a candidate's description of the ideal manager differs fundamentally from the expectations of the company, then I wonder whether this person could fit in with the company culture.
Tests and Puzzles
Some managers use logic or numeric puzzles as informal aptitude tests. I don't object to this practice, but I don't believe such tests are as informative as they are thought to be. First, there are huge practice effects with logic and numeric puzzles. I had my daughter work logic puzzles when she was in her early teens, and she became quite good at solving them.
Her success didn't mean she was getting smarter, however. She was simply better at solving puzzles. In fact, practice effects are long lasting and more pronounced in speeded, nonverbal and performance tests (Jensen, A.R., Bias in Mental Testing, The Free Press, 1980). Second, speed tests select for mental rabbits: those who demonstrate quick (but not necessarily thorough) thinking. Tortoises sometimes design better products or strategies for testing products.
A Simple Testing Puzzle
An old favorite among commonly used speed tests is G.J. Myers' self-assessment (The Art of Software Testing, John Wiley & Sons, 1979). The candidate is given an extremely simple program and asked to generate a list of interesting test cases. The specific program involves an abstraction (a triangle). I prefer this puzzle because it tests something testers will actually do: analyze a program and figure out ways to test it. However, there will still be practice effects. Average testers who have worked through Myers before will probably get better results than strong testers who have never seen the puzzle. Additionally, I suspect that cultural differences will also produce different levels of success, even among skilled testers. Someone who deals with abstractions, such as geometric figures or the logical relationships among numbers, has an advantage over someone who tests user interfaces or a product's compatibility with devices.
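To make the puzzle concrete, the program behind it amounts to something like the sketch below. This is my own illustration, not Myers' specification: the function name, the classification rules and the sample cases are all assumptions. The interview question is simply, "What inputs would you try against this?"

```python
def classify_triangle(a, b, c):
    """Classify a triangle from three side lengths (illustrative sketch,
    not Myers' original program)."""
    sides = sorted([a, b, c])
    # Reject non-positive sides and degenerate triangles (triangle inequality).
    if sides[0] <= 0 or sides[0] + sides[1] <= sides[2]:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# A few of the cases a strong candidate tends to enumerate:
cases = [
    ((3, 3, 3), "equilateral"),
    ((3, 3, 5), "isosceles"),   # and its permutations (3, 5, 3), (5, 3, 3)
    ((3, 4, 5), "scalene"),
    ((1, 2, 3), "invalid"),     # degenerate: two sides sum exactly to the third
    ((0, 4, 5), "invalid"),     # zero-length side
    ((-3, 4, 5), "invalid"),    # negative side
]
for args, expected in cases:
    assert classify_triangle(*args) == expected
```

A weak answer stops at a valid equilateral and scalene case; a strong one probes the boundaries: degenerate triangles, zero and negative sides, permutations of the same values, and (beyond this sketch) non-numeric input.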
Writing a bug report is one of the most basic and important parts of a tester's job. Nonetheless, there is a lot of variation in the quality of bug reports, even among those written by testers who have several years' experience.
To test this ability, find a reasonably straightforward bug in part of your software that is fairly easy to understand and have the candidate write a report. If none of your product's bugs fit the bill, www.bugnet.com can provide one. It's easy to recognize an excellent bug report; however, having sample reports from your staff can help you determine the quality of the attempt. There are many other good puzzles and tests in use, and all are based on a common premise: If you can find a way to present a portion of the job that the tester will actually do, you can see how well the tester does it. You have to make the test fair by designing it so that someone who doesn't know your product can still do well. That's challenging. But if you can come up with a fair test, the behavior that you elicit will be very informative.
Having comprehensively questioned and tested all your promising candidates, you'll have ample data with which to make your decision and choose a winner.
Another Simple Testing Test
Draw a simple Open File dialog box on a whiteboard, explaining "This is an Open File dialog. You can type in the file name (where it says File4 at the bottom), or you can click on the Open button to open it." Hand the marker to the candidate and ask her how she would test the dialog. Make it clear that she can have as much time as she wants, can make notes on the whiteboard or on paper, and that many candidates take several minutes to think before they say anything. When the candidate begins presenting her thoughts, listen. Ask questions to clarify, but don't criticize or challenge her. When the tester pauses, let her be silent. She can answer when she's ready.
This is a remarkable test in the extent to which answers can vary. One candidate might stay at the surface, pointing out every flaw in the design of the dialog box: There is no cancel button or dialog title, no obvious way to switch between directories, and so on. Another candidate might skip the user-interface issues altogether and try testing the opening of large and small files, corrupt files, files with inappropriate extensions or remote files (specified by paths that she types into the file name box, such as d:\user\remote\File4).
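When debriefing, it can help to have a reference list of the kinds of inputs candidates propose for the file-name field. The sketch below is illustrative only: the 255-character limit, the helper function and the specific names are my assumptions, not part of the exercise.

```python
MAX_NAME = 255  # assumed length limit for the sketch

def boundary_names(limit=MAX_NAME):
    """File names at and around an assumed length limit (boundary analysis)."""
    return ["a" * n for n in (1, limit - 1, limit, limit + 1)]

# Illustrative test ideas for the dialog's file-name field:
filename_ideas = [
    "File4",                  # the name shown in the dialog
    "",                       # empty name: what does Open do?
    "no_such_file.txt",       # nonexistent file
    "bad|name?.txt",          # characters many file systems reject
    "notes.exe",              # extension inappropriate for the application
    r"d:\user\remote\File4",  # full remote path typed into the name box
] + boundary_names()
```

A list like this covers only the data-driven side; a complete debrief would also note the user-interface flaws (missing Cancel button, missing title, no directory navigation) that the strongest candidates catch as well.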
Because of the variation in responses, over time I've changed how I use this test. Now, I present the dialog as before, giving the candidate a marker and whatever time she needs. I compliment the analysis after she has finished her comments, regardless of her performance. Then, I show her the types of tests that she missed. I explain that no one ever finds all the tests and that sometimes people miss issues because they are nervous or think they have been rushed. I spend most of the time showing and discussing different types of tests and the kind of bugs they can find. Finally, I erase the whiteboard, draw a Save File dialog that is just as poorly designed, and ask the tester to try again.
Because the differential practice effects are minimized by the initial practice test and coaching, the real test is the second one. The candidate receives feedback and is reassured that she isn't a dolt. In fact, most testers are substantially less nervous the second time through.
The second test allows me to find out if this candidate will be responsive to my style of training. Did the candidate understand my explanations and do a substantially better job in her next attempt? If the answer is yes, I have a reasonable candidate (as measured by this test). If the candidate's second analysis wasn't much better than the first, she is unlikely to be hired. She might be a bright, well-intentioned, interesting person, but if she doesn't learn when I teach, she needs a different teacher.
Occasionally, I have dispensed with the second test because the candidate performed extremely poorly during the first test or was extremely defensive or argumentative during my explanation of alternative tests. This usually means that I'm finished with the candidate. I'll spend a little time looking for a polite way to wrap up the interview, but I won't hire him.