Five Questions With Mike Zintel
Mike Zintel's life goal is to be a train-driving photographer. While he hasn't achieved that yet, he has achieved many other interesting things. Like being told he couldn't plug his fifth-grade science project back in after it caught fire. Like writing a program to manage underground storage tank inventory. Like helping invent Universal Plug and Play.
These days Mike is Director of Development for a not-yet-announced product at Microsoft. Here is what Mike has to say:
DDJ: What is the most interesting bug you have seen?
MZ: I’ll answer a different question. I’ll tell you about the most painful bug. I got a contract to write a billing and workflow system for a law office. The platform was a multi-user MP/M II system. About twice a week, the business data would become corrupt. I held disaster at bay for about two months with a patch-up utility. I ultimately lost the contract, and the hardware company I was working for failed. Sadly, the same week the customer pulled the plug, I found and fixed the interleaved update to a file control block. Even more sadly, I wrote the same style of bug into an embedded data collector that queued data through a local hard drive two years later. Concurrency is hard.
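[The bug Mike describes is the classic "lost update" from interleaved read-modify-write on shared state. A minimal sketch of that pattern, with hypothetical names and values rather than anything from the actual MP/M II system:]

```python
# Illustrative sketch of an interleaved-update bug: two writers each do a
# read-modify-write against shared state, and the interleaving silently
# drops one update. (Names and values here are hypothetical.)

record = {"balance": 100}  # stands in for the shared on-disk record

def read():
    return record["balance"]

def write(value):
    record["balance"] = value

# Interleaved schedule: both writers read before either writes back.
a = read()        # writer A sees 100
b = read()        # writer B also sees 100
write(a + 10)     # A writes 110
write(b + 25)     # B writes 125 -- A's update is silently lost
lost_update_result = record["balance"]   # 125, not the expected 135

# Serialized schedule (what a lock around each read-modify-write enforces):
record["balance"] = 100
a = read()
write(a + 10)     # A's update commits first
b = read()        # B now sees 110
write(b + 25)
serialized_result = record["balance"]    # 135: both updates survive
```

[The explicit schedule makes the failure deterministic; in a real multi-user system the same interleaving happens only occasionally, which is why the corruption showed up "about twice a week" rather than on every run.]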
DDJ: How would you describe your philosophy?
MZ: I’ll assume we are not talking about life-critical software. I don’t buy that developers can’t test their own code. I do buy that tests, and developers (and frankly customers), train to correct paths, and that diversity will uncover new bugs. I think developers should write correct code, and correct tests, and then run them. Then an independent engineering organization with a quality mandate should add mostly automated diversity to the mix. The combination of automation, tests, and dashboard/reporting should be tuned over time to catch more and more bugs. When the code looks settled and usable in practice, a beta should be run: not so soon that the feedback is noise due to ongoing code churn, and not so late that feedback can’t be acted on. Concurrent with the correctness efforts, automation and tests that validate stability and performance under load should be developed and applied. When the product looks solid, it should be shipped, with telemetry to validate the real-world experience. Telemetry data should be collected and processed with automation. The entire machinery should be continuously improved.
DDJ: What do you see as the biggest challenge for testers/the test discipline for the next five years?
MZ: Systems are becoming more distributed and the business cycle is shortening. I don’t think my philosophy above works in this environment. Perhaps some kind of automated “sparse testing” is the answer.
[See my Table Of Contents post for more details about this interview series.]