To date, FoneMonkey has primarily been developed by my colleagues and me at Gorilla Logic. In this article, we'll take a look at what FoneMonkey does and how it is used. In a subsequent article, I'll discuss how we got FoneMonkey to do its recording and playback magic, and we'll also look at advanced customization techniques.
Why We Need a Monkey
While Apple unquestionably created a breakthrough user experience (UX) with the iPhone, most iOS developers would agree that the application development experience lags behind what most of us have come to expect from modern IDEs. Ask iOS developers how they feel about Xcode, currently the only game in town for iOS development, and the response is often a few muttered expletives and some wistful thoughts about an iOS SDK plug-in for Eclipse.
Perhaps the most disappointing aspect of Xcode is the UIAutomation framework, which Apple released last year to automate user interface testing.
UIAutomation provides the ability to script user interface scenarios so that developers and testers are relieved of having to perform such tests manually.
UIAutomation, however, currently fails to script so many common user interface operations that it cannot automate testing for even simple iOS applications. For example, scripting text entry into a text field will successfully populate the field on playback, but will not trigger any of the delegate methods that would have fired had the text been entered manually.
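To make the problem concrete: UIAutomation scripts are written in JavaScript and run under the Instruments tool. A sketch of the problematic case might look like the following (the element index and entered text are illustrative):

```javascript
// UIAutomation scripts run as JavaScript under Instruments.
// Set the first text field's value directly (index is illustrative):
var app = UIATarget.localTarget().frontMostApp();
app.mainWindow().textFields()[0].setValue("monkey");
// The field is populated on playback, but UITextFieldDelegate callbacks
// such as textField:shouldChangeCharactersInRange:replacementString:
// are never invoked, so any application logic hung off those delegate
// methods goes untested.
```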
Birth of a Monkey
True practitioners of Agile development rely on automated test suites to guard against regression errors and to enable code refactoring. While manual testing can be effective for testing individual feature enhancements, it is insane to attempt code refactoring without the ability to automatically determine if the application still works as expected after making extensive modifications to code that potentially affects wide swaths of application functionality.
When we first undertook iOS development, we were dismayed that the only automated testing we could do was the kind of API testing supported by frameworks such as OCUnit, an open source xUnit-type framework for Objective-C. Unit testing is of course important, but it's a tedious and onerous approach for testing complex UIs because it requires teasing apart the interface and its associated event handlers, and it makes test scripts dependent on the APIs of the underlying event handlers rather than on the user interface itself.
While not an API in the traditional sense, the UI of an application is certainly an HPI (Human Program Interface) that provides a logical abstraction of higher-level application functionality similar to the way in which an API abstracts lower-level code. As such, it's quite natural to write tests that exercise such high-level abstractions directly, rather than peeling away those abstractions and testing their underlying implementations instead.
For example, in a typical calculator application with a graphical keypad and LED-style display, you might want to test that 2 x 2 = 4. To automate testing without driving the UI, you write what the Apple documentation calls "logic tests" that exercise the non-UI code, and "application tests" that instantiate your application's controllers, call the handlers associated with the graphical keys on the calculator, and verify that the display shows the expected result.
Writing these application tests requires understanding the underlying implementation of the user interface, and the resulting code is then dependent on the particular controllers being used.
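A sketch of what such an application test might look like with OCUnit illustrates the coupling. The controller class, outlet, and action names here are hypothetical, not from any real project, but the shape is typical:

```objc
// Hypothetical OCUnit application test; CalculatorViewController and its
// members are illustrative placeholders.
#import <SenTestingKit/SenTestingKit.h>
#import "CalculatorViewController.h"

@interface CalculatorTests : SenTestCase
@end

@implementation CalculatorTests
- (void)testTwoTimesTwo {
    CalculatorViewController *calc =
        [[CalculatorViewController alloc] init];
    [calc loadView]; // force the view and its outlets to load

    // The test must know which action methods back which buttons --
    // it is coupled to the controller's implementation, not to the UI.
    [calc digitPressed:calc.twoButton];
    [calc operatorPressed:calc.multiplyButton];
    [calc digitPressed:calc.twoButton];
    [calc equalsPressed:nil];

    STAssertEqualObjects(calc.display.text, @"4",
                         @"2 x 2 should display 4");
}
@end
```

Rename a controller or reroute an action, and every test like this breaks even though the user-visible behavior is unchanged.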
In contrast, a functional automation script would look something like:
Tap Button 2
Tap Button X
Tap Button 2
Verify Output 4
FoneMonkey not only provides the ability to play back such scripts, it also records them automatically. Figure 1 shows how FoneMonkey works with the UI software.
Let's take a look at how FoneMonkey is used to record and play back test scripts.
Welcome to The Monkey House
Although it would be nice if FoneMonkey were packaged as a framework bundle, such user-defined bundles are not currently supported by iOS, so FoneMonkey is instead packaged as a static library and a set of associated files, including nibs and images. To use FoneMonkey, you create a testing build target in Xcode that duplicates your normal application build target but also adds the FoneMonkey resource files, the library libFoneMonkey.a, and the additional linker option -all_load. For detailed setup instructions, see the FoneMonkey documentation.
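The duplicated target's configuration ends up looking something like the following sketch. The target name and search path are illustrative; consult the FoneMonkey documentation for the exact values for your project:

```
// Illustrative build settings for a duplicated "MyApp-FoneMonkey" target:
Other Linker Flags:    -all_load
Library Search Paths:  "$(SRCROOT)/FoneMonkey"   // location of libFoneMonkey.a
// Plus: add libFoneMonkey.a to the Link Binary With Libraries phase,
// and the FoneMonkey nibs and images to the Copy Bundle Resources phase.
```

Keeping this in a separate target means the FoneMonkey code and resources never ship in your release build.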
The FoneMonkey Console
You build and run your new target in the familiar way in Xcode, but when the application is launched, the FoneMonkey Console "grafts" a window onto your app that drops down in front of your application window, providing controls for recording, playback, and script management.
In Figure 2, tapping the Record button hides the FoneMonkey Console and initiates recording. As you interact with your application, each user interface gesture is recorded as a UI component-specific command. If you stop interacting for a few seconds, the FoneMonkey Console reappears, and tapping the More button displays the recorded commands. You can edit an individual command by tapping on it to display the Command Editor in the FoneMonkey Console.