For various technical reasons, a lot of Rockmelt's Chromium customization is done at the
#ifdef level and by adding new files to existing Chromium static libraries. This leads to an unusual testing situation: there are no nicely isolated components whose dependencies you can simply mock. The bottom line is that the native client side of Rockmelt has virtually no automated testing. An off-shore QA team manually tests the product every night (or rather, various parts of it, since it is too big to test entirely), and reported defects are then assigned to developers. Due to the complexity of Chromium itself and the depth of Rockmelt's integration, it is often difficult to figure out what's wrong from the QA team's black-box description ("When I drag a URL to a friend icon, sometimes it doesn't show the share dialog"). A lot of time is spent reproducing bugs and tracking down root causes. Rockmelt's technical leadership identified this situation as a serious quality and productivity problem.
They brought in Google's testing guru Miško Hevery to talk about testing. Shortly thereafter, I started a pilot project to make Rockmelt's client code testable. I discovered early on that the key was refactoring Rockmelt's code outside of Chromium's libraries. After a lot of work digging backwards and defining interfaces and wrappers for transitive dependencies, I was able to completely decouple one Rockmelt service, called AppEdge, and make it fully testable with no concrete dependencies. I'll give you an idea of what this service does so you can appreciate the complexity: It manages a list of custom apps and RSS feeds that are displayed on the edge of the browser. It supports downloading over the network and installing/upgrading Chromium and Rockmelt extensions. It responds to user clicks and menu selections, it launches apps, and it sends notifications to other components. Overall, it had about 15 dependencies and served as the source of about 10 events/notifications. In the end, all of this functionality was completely testable.
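The decoupling pattern is straightforward to illustrate. Here is a minimal sketch (all names hypothetical, not Rockmelt's actual code): a concrete dependency is hidden behind an interface, the service depends only on that interface, and the test substitutes a hand-written fake.

```cpp
#include <string>
#include <vector>

// Hypothetical interface extracted from a concrete dependency. Production
// code implements it with the real downloader; tests substitute a fake.
class Downloader {
public:
    virtual ~Downloader() = default;
    virtual std::string Fetch(const std::string& url) = 0;
};

// The service under test depends only on the interface, never on the
// concrete implementation.
class FeedService {
public:
    explicit FeedService(Downloader& downloader) : downloader_(downloader) {}

    // Returns true if the feed was fetched and is non-empty.
    bool Refresh(const std::string& url) {
        last_payload_ = downloader_.Fetch(url);
        return !last_payload_.empty();
    }

    const std::string& last_payload() const { return last_payload_; }

private:
    Downloader& downloader_;
    std::string last_payload_;
};

// Hand-written fake: records every request and returns a canned response.
class FakeDownloader : public Downloader {
public:
    std::vector<std::string> requested;
    std::string canned_response;

    std::string Fetch(const std::string& url) override {
        requested.push_back(url);
        return canned_response;
    }
};
```

With this structure, a test can drive `FeedService` entirely through `FakeDownloader`, with no network and no Chromium libraries involved.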
Testing via Plugins
Numenta does unusual work, such as reverse engineering the brain and building intelligent machines based on the neo-cortex. Numenta created a platform for intelligent computing called NuPIC that had a C++ engine with Python bindings, several Python frameworks, and a set of Python tools. At Numenta, testing was paramount and there was a battery of tests starting from C++ unit tests for the core engine ranging up to Python-integration tests and performance tests. As a machine-learning platform, NuPIC presented many testing challenges. In the early days, there was a lot of trial-and-error experimentation, but with time, better tools and understanding of the internals allowed more introspective tests and metrics.
My first project when I joined Numenta was to design a plugin framework to allow third-party developers to plug their custom C++ algorithms into NuPIC (NuPIC has always supported custom Python algorithms, but sometimes users needed C++'s raw speed). It was a cross-platform C++ framework and it presented some unique testing challenges. One of the major goals, which is very difficult to achieve in C++, was to allow plugins to be built with compilers different from the one used to build NuPIC. This way, third-party developers were not locked into Numenta's choice of a compiler, and both sides were free to upgrade their tool chains without breaking compatibility. How do you test such a property?
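The usual way to survive compiler mismatches is to keep the host/plugin boundary free of C++ name mangling, vtable layouts, and STL types by exposing only a plain-C table of function pointers. The sketch below shows the idea with hypothetical names (this is not NuPIC's actual API); in real use the plugin side would live in a shared library loaded at runtime.

```cpp
#include <cstdint>

// C-compatible, version-stamped boundary: no C++ name mangling, no
// vtables, no STL types cross the line, so host and plugin can be built
// with different compilers.
extern "C" {
struct NodeV1 {
    int32_t (*compute)(NodeV1* self, int32_t in);
    void (*destroy)(NodeV1* self);  // the plugin frees its own memory
};
typedef NodeV1* (*CreateNodeFn)();
}

// --- Plugin side (would normally live in a shared library) ---
extern "C" int32_t DoublerCompute(NodeV1* self, int32_t in) {
    (void)self;  // this trivial node keeps no state
    return in * 2;
}

extern "C" void DoublerDestroy(NodeV1* self) {
    delete self;  // freed by the same compiler/runtime that allocated it
}

// Factory with C linkage: the only symbol the host needs to look up.
extern "C" NodeV1* CreateDoublerNode() {
    NodeV1* node = new NodeV1;
    node->compute = DoublerCompute;
    node->destroy = DoublerDestroy;
    return node;
}

// --- Host side: talks to the plugin only through the C table ---
int32_t RunOnce(CreateNodeFn create, int32_t input) {
    NodeV1* node = create();
    int32_t result = node->compute(node, input);
    node->destroy(node);
    return result;
}
```

Testing the compatibility property then amounts to building the plugin side with each supported compiler and running the same host-side tests against every resulting binary.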
An important weapon in the arsenal was incremental integration: running the plugin framework side-by-side with the hard-coded algorithms in the existing runtime engine. This approach was beneficial because it allowed black-box testing of the plugin framework, as well as running experiments using the hard-coded algorithms wrapped as plugins. I created plugins that reused the logic inside the fixed algorithm nodes and then constructed experiments that used the same networks with the same algorithm types and connectivity, except that in one network the nodes were the concrete (old) sub-classes of the runtime engine, and in the other network all the nodes were PluginNodes hosting the same algorithmic code. This allowed apples-to-apples comparison both in terms of accuracy and in terms of performance (maybe the plugin framework does something funky and wastes a lot of time or memory at some point).
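The heart of such a side-by-side experiment is a comparison harness: feed identical inputs to the legacy node and the plugin-hosted node and require the outputs to agree. A minimal sketch, with illustrative names rather than NuPIC's actual types:

```cpp
#include <cmath>
#include <functional>
#include <vector>

// Side-by-side check: run the same inputs through the legacy (concrete)
// node and the plugin-hosted node, and require outputs to agree within a
// tolerance. A real harness would also record timing and memory use.
bool OutputsMatch(const std::function<double(double)>& legacy_node,
                  const std::function<double(double)>& plugin_node,
                  const std::vector<double>& inputs,
                  double tolerance = 1e-9) {
    for (double x : inputs) {
        if (std::fabs(legacy_node(x) - plugin_node(x)) > tolerance) {
            return false;  // divergence found: report and investigate
        }
    }
    return true;
}
```

Any divergence immediately points at the plugin wrapper rather than the algorithm itself, since both sides share the same algorithmic code.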
C++ Test Frameworks
With C++ (just as in any other language), you have the choice of writing your own test framework versus using an off-the-shelf one. Normally, I opt for existing tools, but test frameworks may be different. One reason you might want to build your own is to get tight integration with the build system. Another is that most test frameworks are geared toward either developer-level unit tests or QA-level full test runs. Finally, writing a test framework is easy and fun, and gives you full control. If you are working within an application framework that provides a test framework, try to use it first. For example, at Rockmelt we used the Google/Chromium code base, which comes with its own test framework, so it was a no-brainer.
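To show how little is required if you do roll your own, here is a minimal sketch of a homegrown framework: named tests self-register via static initialization, a runner executes them all, and a check macro counts failures instead of aborting. All names are illustrative.

```cpp
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

// A named test body.
struct TestCase {
    std::string name;
    std::function<void()> body;
};

// Global registry of tests (function-local static avoids
// initialization-order problems across translation units).
inline std::vector<TestCase>& Registry() {
    static std::vector<TestCase> tests;
    return tests;
}

// Returns true so it can be called from a static initializer.
inline bool RegisterTest(const std::string& name, std::function<void()> body) {
    Registry().push_back({name, std::move(body)});
    return true;
}

static int g_failures = 0;

// Records a failure and keeps going, so one bad check doesn't hide others.
#define CHECK(cond)                                   \
    do {                                              \
        if (!(cond)) {                                \
            std::printf("FAIL: %s\n", #cond);         \
            ++g_failures;                             \
        }                                             \
    } while (0)

// Runs every registered test; returns the number of failed checks.
inline int RunAllTests() {
    g_failures = 0;
    for (const TestCase& t : Registry()) {
        std::printf("[ RUN ] %s\n", t.name.c_str());
        t.body();
    }
    return g_failures;
}

// Example test, registered at static-initialization time.
static const bool kRegistered =
    RegisterTest("addition", [] { CHECK(1 + 1 == 2); });
```

A real version would add test filtering, fixtures, and build-system hooks, but this core is enough to run a suite and report a pass/fail count to the build.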
I use mock objects a lot in my tests, but I have never tried any mock framework. My only excuse is that I never felt the need. When I write code, I think of each dependency as a potential mocking target that I need to provide via an interface. At this point, writing a mock takes literally seconds and, if I need to, I can even write a little tool to generate a mock object automatically from an interface. This generally works well for me and enables testing using the test framework of my choice.
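To make the "seconds" claim concrete, here is the kind of hand-written mock I mean, using hypothetical names: an interface for an outgoing dependency, and a mock that simply records every interaction so the test can assert on what the code under test did.

```cpp
#include <string>
#include <vector>

// Interface for an outgoing dependency (here, a notification sink).
class NotificationSink {
public:
    virtual ~NotificationSink() = default;
    virtual void Notify(const std::string& event) = 0;
};

// Hand-written mock: a few lines that record every call for later
// verification. No framework, no code generation.
class MockNotificationSink : public NotificationSink {
public:
    std::vector<std::string> events;
    void Notify(const std::string& event) override {
        events.push_back(event);
    }
};

// A fragment of code under test that should fire exactly one event.
void InstallApp(NotificationSink& sink, const std::string& app_name) {
    // ... the real installation work would happen here ...
    sink.Notify("installed:" + app_name);
}
```

The test then asserts on `mock.events` to verify both that the notification fired and what it contained, which is most of what a mocking framework would give you here.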
Deep testing reliably reduces the number of undiscovered issues down the line. In C++ in particular, it can present difficult problems that require original solutions. In rare cases, it might require rebuilding the code under test. In other situations, developing plugins for the main program that duplicate its activities provides a viable testing sandbox. In still other circumstances, it might be necessary to write your own test framework. But with the right mindset and approach, deep testing is very cost-effective. Non-trivial code will contain bugs, and it's up to you to use deep testing to remove them before they show up in the field.
Gigi Sayfan specializes in cross-platform object-oriented programming in C/C++, C#, Python, and Java with emphasis on large-scale distributed systems. He is a long-time contributor to Dr. Dobb's.