Five Questions With Ethan John
Ethan John professes not to enjoy writing code. Testing, on the other hand, he enjoys immensely. That makes him my kind of Computer Science graduate: the kind who codes only because it gives him an excuse to test. Ethan currently works for Isilon Systems, which I am sure is happy to have the advantage of his love for testing.
Here is what Ethan has to say:
DDJ: What was your first introduction to testing?
EJ: I was in school, and got a job as a research assistant on a project called UrbanSim. It was an Agile house, minus pair programming, so they were doing test-driven, iterative development in Java. I had only heard about TDD a few months prior, and my initial experiences with it had been positive. Unit-tested code tended to work more consistently out of the gate than otherwise, and I was sold after just a few weeks on the project.
DDJ: What did that leave you thinking about the act and/or concept of testing?
EJ: I had never thought about testing as something unto itself before. Previously in school, testing wasn't something you did consciously so much as it was the necessary evil of debugging something that should otherwise work. Getting down and dirty with JUnit allowed me to see the value in testing as a practice, distinct from just writing code.
It also set me on the path toward testing as a profession. I enjoyed the process of writing unit tests far more than I enjoyed writing the actual code. Thinking through the various kinds of test cases and writing them was the fun part; by comparison, writing the code that did the heavy lifting always left me bored.
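The test-first habit Ethan describes might look something like the sketch below. The discount function is a made-up example (not anything from UrbanSim), and plain assertions stand in for JUnit so the snippet runs on its own:

```java
// A minimal sketch of writing the test cases first: enumerate the
// typical, boundary, and invalid cases, then write just enough code
// to satisfy them. applyDiscount is a hypothetical method under test.
public class PriceCalculatorTest {

    // Hypothetical code under test: apply a percentage discount.
    static double applyDiscount(double price, double percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent out of range: " + percent);
        }
        return price * (100 - percent) / 100.0;
    }

    // Tiny stand-in for a JUnit assertion.
    static void check(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }

    public static void main(String[] args) {
        // Typical case.
        check(applyDiscount(200.0, 25.0) == 150.0, "25% off 200 should be 150");
        // Boundary cases.
        check(applyDiscount(200.0, 0.0) == 200.0, "0% leaves the price unchanged");
        check(applyDiscount(200.0, 100.0) == 0.0, "100% discount is free");
        // Invalid input should be rejected, not silently accepted.
        boolean threw = false;
        try {
            applyDiscount(200.0, -5.0);
        } catch (IllegalArgumentException e) {
            threw = true;
        }
        check(threw, "negative discount must be rejected");
        System.out.println("all tests passed");
    }
}
```

In real JUnit each `check` would be its own `@Test` method, but the shape of the work — cases first, code second — is the same.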
It took me a while to learn that testing was a field unto itself. That seems so obvious now, but the University of Washington teaches a very theoretical CS program, so the various fields available to us as CS majors weren't exactly written down. Even though that first experience with Agile turned me on to testing, I had no idea that it was something I could do for a living.
DDJ: What do you think is the most important thing for a tester to know? To do? For developers to know and do about testing?
EJ: Well, I started out as a developer (or rather, as an aspiring developer), and a lot of the testers I've worked with don't appreciate the value of system design knowledge. Knowing how a system works as a whole, and having deep technical knowledge of the code, are extremely useful for everyone from folks writing white box system acceptance tests to those running through black box Web UI scenarios.
Having a large-system view is important for a lot of reasons. It's useful when debugging ancillary problems that come up along the way. It's useful for evaluating overall product usability, and for recognizing oddities that developers might dismiss as necessary evils. At Isilon, our systems are extremely complex, and early in testing it's common for item X to malfunction while you're looking at item A; a general view of the overall system helps tremendously when investigating those issues.
Some of this system familiarity simply comes with exposure to a product, but there is no substitute for knowing what's going on at a slightly deeper level. Knowing what pieces of the system use which libraries, for example, can be invaluable when trying to figure out an elusive repro, or when trying to write a test plan for a fundamental system design change. Testers in general need to be interested and motivated to learn as much as they can, in as much detail as they can, about the systems that they are working on.
This speaks to something that developers need to be good at -- explaining to testers the changes they make and the code they write. I know it's cliche at this point, but testers and developers need to have a very close relationship, and part of that relationship is a common understanding of the scope of code changes and how they might affect the system as a whole.
DDJ: How would you describe your testing philosophy?
EJ: I view testing as science. Hypothesis, test, conclusion, and repeat. This implies a general view of good testers and how they work. Biologists can't simply pawn off their experiments to people completely inexperienced in the ways of the lab -- there is a baseline of understanding necessary to do anything in a biology lab. Likewise, there is a baseline of knowledge and a scientific mindset necessary to test.
Additionally, testing should be approached as a science. Test cases should be framed in terms of hypothesis/test/conclusion. Sometimes these things are implicit -- if you're testing a Web UI, many of the hypotheses are so common that they needn't be stated. But in the world of system acceptance and other more complicated areas, these hypotheses are often not so well defined, and without them, you often end up testing something that answers no question and provides no data.
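One way to make that hypothesis/test/conclusion framing concrete is to label those phases explicitly in the test itself. The bounded cache below is a hypothetical system under test, invented for illustration, not anything from UrbanSim or Isilon:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A sketch of one test case written as an experiment, with the
// hypothesis, test, and conclusion phases called out explicitly.
public class CacheEvictionExperiment {

    // Hypothetical system under test: a cache that holds at most
    // `capacity` entries, evicting the eldest when full.
    static Map<String, String> newBoundedCache(int capacity) {
        return new LinkedHashMap<>(16, 0.75f, false) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > capacity;
            }
        };
    }

    public static void main(String[] args) {
        // Hypothesis: a cache capped at 2 entries evicts its eldest
        // entry when a third is added.
        Map<String, String> cache = newBoundedCache(2);

        // Test: insert three entries, one past capacity.
        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3");

        // Conclusion: the eldest entry ("a") is gone; the newer two remain.
        if (cache.containsKey("a") || !cache.containsKey("b") || !cache.containsKey("c")) {
            throw new AssertionError("hypothesis falsified: eldest entry not evicted");
        }
        System.out.println("hypothesis supported: eldest entry evicted");
    }
}
```

Written this way, a failing run doesn't just report a broken step -- it falsifies a stated hypothesis, which tells you something about the system either way.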
Finally, science is constantly informed by history, and testing is no different. Historical results are key to determining how to test in the future. If it provided no benefit to test certain things in the previous cycle, how much benefit is it really likely to provide this time around? Historical data informs future results in the test lab just like in the labs of all other hard sciences.
DDJ: What has most surprised you as you have learned about testing/in your experiences with testing?
EJ: Testing-as-science seems obvious to me. The common test methodologies are rigorous and extremely scientific. I'm surprised by the number of testers who don't see it this way. Many people seem to view testing as something rote (perform these steps, record this result, move on) or as something random ("cowboy" testing, extreme reliance on ad hoc testing, fuzzing as the whole solution, etc.) rather than as an iterative process that builds a wider view of the stability and usability of a system.
DDJ: What is the most interesting bug you have seen?
EJ: Bug 5112 in our system here at Isilon is about the fans in our nodes being prone to failure upon introduction of "foreign matter." A tester with rather long hair had been standing in such a way that his hair was pulled into the node, causing the rear fans to stop working, and the node to overheat. It was closed WORKSFORME by a developer with no hair at all.
[See my Table Of Contents post for more details about this interview series.]