Five Questions With Scott Barber
Scott Barber has done a lot of stuff for a lot of people, especially people who test for a living, and even more so for people who test performance for a living. Books, directorates, co-founderships, trainings, and the-application-doesn't-work testing - Scott has done it all. Back in the day of Windows NT 4.0, he took the time and trouble and tests to become a Microsoft Certified Systems Engineer. These days, Scott has effectively certified himself as a World Renowned Performance Testing Expert via his plethora of books, articles, and classes on the topic, all of which arose from his general passion for making performance testing more understandable and more fun for testers everywhere. Maybe even you!
Here is what Scott has to say:
DDJ: What was your first introduction to testing? What did that leave you thinking about the act and/or concept of testing?
SB: One Sunday evening I got a call from my manager...
"Scott, be at the Marriott Hotel Conference Room at 8am tomorrow. Our CEO has an announcement to make."
Being a good employee, I did as I was told and went to the meeting. As it turns out, our software development company was "merging" with a huge media company. We were going to become their "development branch". As a result, we were giving up our bonuses and overtime, and our base pay was being reduced, but we were going to get stock options.
As one might imagine, I was less than enthusiastic about this. I called my friend of 10 years before I even left the parking lot of the hotel. I told him the circumstances and asked him if he'd help me update my resume, to which he responded:
"Dude, that sucks. Send me your resume and I'll take a look, but we need performance engineers. You'd be a perfect fit."
I said "Performance engineer? What's that?!?"
He replied "Don't worry, you'll like it."
He was right. I've been a tester ever since, and continue to love it.
During my first few projects, I learned just how difficult it is to develop really good software, how challenging it is to find and focus on defects that matter, and how much I enjoyed trying to solve the technical, political, social, schedule, resource, and business problems embedded in all testing efforts. I didn't know it at the time, but I was quite lucky, because as a performance testing consultant I was almost always treated as a peer to the lead architect. In retrospect I realize that was because I was a generalist who understood how systems, not just components, worked from the bottom up as well as from the UI down, and who viewed my role as a service provider to the development effort, not as an approval gate on the path toward release.
DDJ: What has most surprised you as you have learned about testing/in your experiences with testing?
SB: Over the years I've been shocked to find out that most testers and development groups view testing as a type of QA/QC function. I was shocked to find out that there were testers (as it turns out, lots of them) who were expected to assess "Pass/Fail", to sign off that a piece of software was "ready to ship", and to generally make business decisions about software and applications, often without even having access to the business reasons for developing the software in the first place. To me, the notion that a tester/test manager should be gating the ship/no-ship decision is baffling. We (testers) are information providers; we are not product managers, account managers, or Vice Presidents of Software Development.
Asking testers to make these "ship/no-ship" decisions feels to me like asking the CSIs investigating a case to also serve as the judge and jury for that case without so much as a hearing. It seems to me that testers should be providing their results and analysis to the team, the stakeholders, and the decision makers so that they have the data they need to make informed technical and business decisions.
DDJ: What do you think is the most important thing for a tester to know? To do? For developers to know and do about testing?
SB: I think that the single most important thing for a tester to know is where they fit in the team, the project, and the company, and what they can do to best serve their stakeholders. I think that the vast majority of testing is done for one or more of four reasons, but that most testers are not made aware of what their test results are primarily being used for. I think that understanding which of the following are the primary uses for their test results can dramatically improve the effectiveness of a tester or test team:
- To provide information to stakeholders and decision makers so they can make informed business decisions.
- To provide information to developers that help them build better applications.
- To figure out what problems/challenges/concerns end users will have with the software and communicate that back to the team.
- To try to assess compliance of the software or application with legally binding contracts and/or regulations.
I'd like to tell all developers that testers, at least testers in my community, are there to help them make better software.
I'd like to tell all developers that most of the time, these testers are no more interested in defects getting reported to management than the developers are. Defect reporting is a process "thing", not a tester "thing". I've met very few testers who wouldn't be happier to skip the bug reporting altogether and just sit with developers and help them "get the bugs out" before the bugs ever get on management's radar.
DDJ: How would you describe your testing philosophy?
SB: As a tester, I am a service provider. The service I provide is quality-related information. The people I serve are developers, stakeholders, end-users, and compliance & regulatory agencies (in that order when given my preference). If I do a good job helping developers build quality software that meets business, end-user, and compliance needs, I've done my job well. When I don't get to work directly with developers, if I can identify issues that threaten business goals of the software early enough to do something about them, I've done my job well. If an application goes into production and the end-users don't experience any issues that I haven't previously identified, I've done my job well. When legal regulations and/or contracts are involved, if I can identify areas that are out of "specification" early enough to get them in "specification", I've done my job well.
DDJ: What do you see as the biggest challenge for testers/the test discipline for the next five years?
SB: I think there are a huge number of challenges facing both individual testers, and the test discipline over the next 5 (and probably more) years. That said, I think that most of those challenges fall into three broad areas.
- Attracting (and Retaining) Good Testers – I recently had the opportunity to meet and work with some software testers in several Latin American countries. Every one of the testers I met was a well-educated, well-trained, well-respected member of the team who went to school to become a tester and is proud to be one. This is simply not the norm. Outside of Latin America, most testers I meet are poorly trained, under-respected, of mixed educational background, and frequently frustrated with their career. This is a problem. How can we expect to attract folks into testing if so many of our current testers are discontented? I believe that we, testers, need to start viewing ourselves differently if we hope to attract and retain high-quality testers. I believe that if we start viewing ourselves as, and acting like, information providers and trusted advisors to software projects, we will earn greater respect, our pride in being testers will increase, and ultimately we will attract and retain increasing numbers of high-quality testers.
- Educating Testers – Most testers receive all of their training on the job. When they do get external training, it's generally delivered in short one-to-three-day courses with no tests and no homework, a format well known to be a reasonable way to expose someone to new ideas but lousy at actually teaching anything. The truth is that testing is a wildly diverse discipline that requires applying knowledge from a wide variety of fields, and that most of the best software testers have taught themselves much of what makes them very good testers. We, testers, need to pay more attention to which skills are common across very good testers and then figure out how to teach those skills.
- Evaluating Testers – I believe we need to fundamentally change how we evaluate the quality and value of testers, especially during the hiring process. Recruiters, hiring managers, and testers widely agree that today's assessments based on certifications, vendor-delivered training, question-based interviews, and laundry lists of technical skills are fundamentally lousy indicators of whether someone is a good tester and whether they will be a good fit within a particular company or team. We, testers, need to get involved in devising better methods. Mismatches between the hire and the role hurt everyone, and currently the most effective method we have to get a good fit is the old adage "It takes one to know one."