Five Questions With Michael Corning
Michael Corning is a man on a mission. No small mission, either: his goal is to remake testing at Microsoft, and then throughout the rest of the world. Michael joined Microsoft's Engineering Excellence group in part to pursue this mission, which involves patterns applied at every level, concepts ranging from robotics to Aristotle, and innovative testing approaches from across Microsoft and the industry. Dunno whether he will be successful in this endeavor; regardless, I imagine he and his compatriots are in for a fun ride. (Full Disclosure: I am a founding compatriot.)
If this seems an audacious goal, well, Michael is an audacious guy. He is so passionate about politics both in general and in specifics that he is Team Lead for the Microsoft Political Action Committee. As a volunteer planning commissioner for his town he is leading a comprehensive for-the-next-century replanning effort which involves patterns applied at every level, concepts ranging from robotics to Aristotle, and innovative technologies from across Microsoft and the industry. As before, while I don't know whether he will be successful in this endeavor, I imagine he and his compatriots are in for a fun ride.
(Can you guess what he talks about in the testing classes he teaches? Yep: patterns applied at every level, concepts ranging from robotics to Aristotle, and innovative technologies and testing approaches from across Microsoft and the industry.)
Here is what Michael has to say:
DDJ: What do you think is the most important thing for a tester to know? To do? For developers to know and do about testing?
MC: The first thing a tester needs to know about testing is that you cannot test what you do not understand. It follows that anything that enhances a tester's understanding of that which they are testing is paramount. When we refer to testable software, an overlooked attribute is understandability. We've made great strides toward working better with our developer colleagues, to the extent that devs are at least writing unit tests (even more progress is possible with devs who adopt a Test Driven Development approach). Another way testers can enhance their understanding of any tested artifact is to transcribe the model of the artifact they carry in their head into, ideally, a machine-readable format. If you can't model it, you can't understand it; if you can't understand it, you can't test it. Yet another extremely powerful tool for testers to master is design patterns.
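Michael doesn't prescribe a particular machine-readable format here, but a minimal sketch of the idea might look like the following: a hypothetical logon workflow, transcribed from a tester's head into a finite state machine, from which test sequences can then be derived mechanically. The states, actions, and workflow are illustrative assumptions, not from any real product.

```python
# A hypothetical logon workflow expressed as a machine-readable model:
# each state maps actions to the state they lead to.
MODEL = {
    "LoggedOut": {"submit_valid": "LoggedIn", "submit_invalid": "LoggedOut"},
    "LoggedIn":  {"log_out": "LoggedOut", "view_profile": "LoggedIn"},
}

def walk(state, depth):
    """Enumerate every action sequence of the given depth from `state`."""
    if depth == 0:
        return [[]]
    paths = []
    for action, next_state in MODEL[state].items():
        for tail in walk(next_state, depth - 1):
            paths.append([action] + tail)
    return paths

# Each path is a candidate test case derived from the model rather than
# hand-written -- e.g. ["submit_invalid", "submit_valid"].
tests = walk("LoggedOut", 2)
```

Once the model exists in this form, making it deeper or richer immediately yields more test sequences, which is the sense in which modeling enhances both understanding and testability.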
DDJ: How would you describe your testing philosophy?
MC: Funny you should ask. I just submitted a paper to the International Symposium on Software Reliability Engineering entitled, "A Tester's Guide to Aristotle's Theory of Virtue - A Model of a Happy Tester." So I guess my answer to you is the same as the answer I gave to Forbes magazine over a decade ago: I'm an Aristotelian tester. Actually, I'm part Neo-Platonic, too. I am convinced that ideas predate awareness, rather like Plato's "Forms." I call them memes. At the moment, my favorite meme is the pattern language. Actually, I'm beginning to rethink my notion of memes: at first, I saw design patterns as memes; now I'm beginning to think memes are patterns, a subset being design patterns. Put a different way: my work on memetic engineering is refocusing as pattern language engineering.
I am a Certified Public Accountant by training. In fact, my Master's thesis in 1980 studied whether financial auditors were auditing around or through the computer. This research found that a majority of auditors considered financial accounting software to be a black box; only a small minority used white box auditing/testing techniques. This background continues to inform my view of software testing. In my mind, the tester is an auditor. The function of the auditor is to attest. In financial auditing, the auditor attests with his or her signature that the financial statements fairly represent the results of business operations. From this attestation, other decision makers make their decisions. Both the auditor and the tester provide information to the ecosystem: investors buy or sell based on audited results; product managers ship or hold product based on tested results.
I'm also a scientific tester. This role is closely related to my audit philosophy. I share the view of Philip Armour that the test org is ideally chartered with finding a class of information that neither dev nor pm/ux [user experience] colleagues are tuned to. The tester's job is to identify questions (and subsequently answers) about the application under test that have never been considered. We call them UnkUnks (unknown unknowns). Yes, Secretary Rumsfeld took some heat when he tried to explain the role of UnkUnks in the War in Iraq, but political flak notwithstanding, UnkUnks are, by their very nature, un- or underappreciated. The ironic thing about UnkUnks, though, is that they are pervasive; everyone encounters them or their consequences, and frequently. So common are the UnkUnks that we have a phrase for them: The Law of Unintended Consequences. Virtually everything else I think about testing comes back to the UnkUnks. Design patterns play a role in understanding the UnkUnks' potential or presence, modeling squeezes them out in the process of model development, and the philosophical approach I take toward the craft of testing also helps find out things about software that designers and developers didn't or couldn't consider.
DDJ: Is there something typically emphasized as important regarding testing that you think can be ignored or is unimportant?
MC: Though I wouldn't go so far as to denigrate the following as unimportant, I do believe they are generally misunderstood: test case count and code coverage.
You can see from my previous remarks that I spend precious little time thinking about testing in terms of the number of test cases I have. I do my best to use the machine to generate test cases for me. As George Gilder said, "You make money by wasting what's plentiful." Well, my human bandwidth is far from plentiful. My imagination and intuition, my passion for discovery are limitless. This is why I focus so much of my attention on finding ways for testers to use technology to test technology.
DDJ: What do you see as the biggest challenge for testers/the test discipline for the next five years?
MC: Two things keep me up at night: where is our catalog of the hard problems in test? How can we leverage technology sufficiently to catch up with the complexity of technology?
The catalog of Hard Problems is our collective record of second order ignorance (viz., we have the questions, not the answers). One of our bigger problems in test is that we're the team in charge of coming out of third order ignorance (UnkUnks), and you can't catalog what you don't know you don't know. You can work on describing the process that you can use to identify unknown questions, but the test community hasn't even done a good job with second order ignorance. As a scientific discipline, we lag behind on two fronts: almost all other disciplines exceed ours both in cataloging their second order ignorance and in the use of modeling.
One of the things that helps me sleep at night when I start to worry about the second thing that keeps me up at night is that many of my colleagues, the best and the brightest, are doing work in test at a deliberately abstract level. Modeling and patterns are the two best examples. As our programming languages continue to help us write test code and infrastructure that take advantage of more and more design patterns, we are making progress. As modeling tools become more ubiquitous, we will make even more progress.
One of the most intriguing things I'm watching is the advent of software that enables robotics. In my view, the Microsoft Robotics Studio promises to bring a whole new level of productivity to testing. But I'm a little ahead of the curve here, so additional comments and observations will have to wait for another time....
DDJ: What has most surprised you as you have learned about testing/in your experiences with testing?
MC: Legend has it that as Charles Simonyi (Father of Office Word and Intentional Programming) was leaving Microsoft, someone asked him what surprised him most about the company. His answer: "How conservative the Microsoft Developer is." I'd say the same thing applies to testers and to test management. For example, it took almost a decade for model-based testing to take deep root in the test community. There are many reasons for this, but being very conservative is one of them. The other thing that constantly surprises me is just how daunting the task of testing is. I heard that someone analyzed all the dependencies in Windows alone, and the graph required paper forty feet in length. I've seen how far a test tool like Pex can get into the CLR simply by unit testing a resource provider component, and I'm blown away. Put differently, I've been to no other place where it's possible to feel like you're the smartest guy in the room and the dumbest guy in the room - at the same time. Camus was right: the work is absurdly difficult; and therein we find the nobility of it.
[See my Table Of Contents post for more details about this interview series.]