Software testing, whether performed by developers, QA staff, or other post-development testers, is essential to ensuring final code quality. It checks that the application does what stakeholders intended and that it meets defined levels of quality. However, developing in a Linux environment presents unique testing challenges. This article examines the principal ones and suggests some workarounds.
Different Methodologies and Ways to Test
Two of the most commonly used types of software tests run by developers are unit tests and integration tests. A unit test, which operates on a single unit of code such as a method or function, should be:
- Repeatable: You can rerun the same test as many times as you want
- Consistent: Every time you run it, you get the same result
- In memory: It has no "hard" dependencies on external resources (such as the file system, a database, or the network)
- Fast: It should take only a few milliseconds to run
- Focused: It validates a single concern or "use case" in the system; testing more than one makes it harder to pinpoint where a problem is, and in turn harder to fix and verify
By comparison, integration tests verify the interfaces between components and are meant to make sure the entire application works correctly together. They are slower than unit tests, harder and more time-consuming to create and run in full, and they do not pinpoint where a problem is located.
Integration tests make a lot of sense in most systems in two particular configurations: as an additional layer of regression testing beyond unit tests to provide greater validation of the software, and in legacy code situations, where unit tests are considered hard to write. In the latter case, legacy code was often not written for testability, so unit tests would require too invasive an approach. Integration tests can instead begin to tease out and verify the interactions between larger-scale components in the legacy codebase.
While each form of testing has its benefits and pitfalls, they complement each other. Because of the many Linux distros and different toolchains, integration tests are necessary to make sure that not just pure logic is tested, but that the code actually works across different environments.
Linux is one of the most customizable operating systems, with a variety of toolkits and components. This is both a strength and a challenge when developing clean code. Many Linux developers prefer to hand-code rather than use comprehensive IDEs that provide a full development ecosystem. Linux has some of the best text editors available, far better than what's commonly used on Windows.
Even with the best IDEs (such as Eclipse or NetBeans), programming is not as streamlined and integrated as in the Windows world. Because of this lack of integration, developer testing is also less streamlined and often loses the benefit of immediate feedback.
Many Distributions and Tools
One of Linux's major strengths is that there are many distributions and everything is customizable. At least 10 major distributions make up the majority of market share, and there are over 300 active Linux distributions, according to DistroWatch.
Each distribution ships with different preset configurations and pre-installed components that tests may depend on. While targeting multiple distributions is achievable, even unit testing, which should exercise only code logic, may require separate setup for different configurations.
Just as there is a variety of Linux distributions, there is also an abundance of development tools. Different compiler versions require the test tools to match those versions and be set up properly. In C++, for example, you may need specific versions of GCC, and sometimes different versions of Eclipse.
While this degree of customization and flexibility is one of Linux's core strengths, it also creates an almost infinite number of permutations to test.
Because Linux is a popular and generally ideal platform for software development, software developers have used Linux since the kernel's inception. They've also migrated older Windows code to Linux as they move away from Microsoft Windows for their software development platform.
There's simply a lot of legacy code running on Linux. But testing legacy code can be very difficult, especially without the proper tools. New developers, new language standards, updates and new versions, and new project requirements quickly make manual testing very difficult. And without proper automated tests, if you change one thing, you'll probably break something else, and not even know it.
Writing unit tests for legacy code is considered hard. Code written without testability in mind ends up either with low coverage or with tests that are not really unit tests but integration tests in disguise: they don't run quickly, are harder to debug when an error occurs, and lack the immediate feedback that real unit tests give.
With few exceptions, the only way to test legacy code is via integration testing; it's usually hard or impossible to write unit tests for it. Various commercial tools can help relieve that problem by testing components in isolation from one another without modifying the code. Integration tests can then serve as an additional smoke-test layer above unit tests, confirming you didn't break the integration between system components.
Multiple Targeting / Multiple Cycles
When targeting multiple platforms, the code requires multiple cycles of testing. Some targets lack unit testing capabilities and require workarounds in continuous integration, such as cross-compiling the tests for specific configurations in order to run them. Increasingly, multi-target projects designate at least one target that can run logic-level unit tests. Since logic does not change on different platforms (2 is always bigger than 1), having a target that can run the unit tests is a great option for making sure the code works.