DDJ: Sebastien, over 25 years ago, QNX Software Systems focused on the message-passing paradigm for its operating-system solutions. Are message-passing operating systems relevant today?
SM: If anything, the qualities of isolation and protection introduced by the message-passing paradigm are more important now than they were 25 years ago. Take, for example, the issues of OS reliability and security. (By security, I mean protection from external attacks as well as from "bad" code running in the system.) Both these issues pose a huge challenge to traditional OSs, which lump large amounts of unrelated functionality, including file systems, networking stacks, and device drivers, into a single, large monolithic kernel. In this approach, coding errors in one block can easily affect a totally unrelated block, resulting in total system failure.
A microkernel, message-passing OS eliminates these problems by completely isolating applications and traditional kernel functional blocks from one another. With this approach, the OS can immediately detect an invalid attempt to access resources and indicate what line of code made the attempt. At runtime, a software watchdog can intelligently decide how to handle the error. This can range from a simple "prevent the access but let the functional block continue" to "completely stop and restart the function block." Either way, the OS and all other processes continue to run.
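The isolation Marineau describes rests on a synchronous request-reply message pattern between separate address spaces. QNX Neutrino's native calls for this are MsgSend/MsgReceive/MsgReply; the sketch below is only a portable illustration of the same pattern using Python's multiprocessing module, where each "functional block" runs as its own process, so terminating the server does not corrupt the client:

```python
# Illustrative sketch only, not QNX's API: each functional block is a
# separate process, and requests flow over an explicit message channel.
from multiprocessing import Process, Pipe

def filesystem_server(conn):
    # The server blocks waiting for requests, analogous to MsgReceive().
    while True:
        request = conn.recv()
        if request == "shutdown":
            break
        # Reply to the waiting client, analogous to MsgReply().
        conn.send(f"handled: {request}")

if __name__ == "__main__":
    parent, child = Pipe()
    server = Process(target=filesystem_server, args=(child,))
    server.start()

    parent.send("read /etc/config")   # like MsgSend(): send, then wait
    print(parent.recv())              # -> "handled: read /etc/config"

    # Because the server owns its own address space, a software watchdog
    # could kill and restart it without affecting the client at all.
    parent.send("shutdown")
    server.join()
```

The key property the example shows is that a fault in the server process is contained to that process; the client's memory is untouched, which is what makes the restart policies Marineau mentions possible.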
DDJ: One of the areas you've been working on recently is support for multi-core processors. What is the role of multi-core in the embedded world?
SM: The growing complexity of embedded applications has created four major hardware issues: performance, power consumption, board area, and cost. Multi-core chips directly address all four issues. They offer greater processing capacity and consume less power than conventional uniprocessor chips, and they use less board space (which cuts costs) than multiprocessing systems built of discrete uniprocessors.
Nonetheless, migrating legacy code to multi-core can be a challenge. That's why QNX offers a choice of multiprocessing models, allowing developers to choose the model best suited to their requirements. For instance, we support symmetric multiprocessing (SMP) for applications optimized for multi-core operation; we've also pioneered bound multiprocessing (BMP), which lets legacy applications run on a specified core while allowing parallelized applications to run in full SMP mode.
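In BMP, a thread or process carries a runmask that restricts which cores it may run on, while unrestricted threads float across all cores as in SMP. As a rough, Linux-only analogue of that idea (QNX sets runmasks through its own kernel interface, not this call), `os.sched_setaffinity` can pin a legacy process to one core:

```python
# Conceptual analogue of a BMP runmask, assuming a Linux host:
# a legacy, non-thread-safe application is pinned to core 0 while
# other processes remain free to migrate SMP-style.
import os

def bind_to_core(pid: int, core: int) -> set:
    """Restrict `pid` (0 = calling process) to one core; return the new mask."""
    os.sched_setaffinity(pid, {core})
    return os.sched_getaffinity(pid)

if __name__ == "__main__":
    free_mask = os.sched_getaffinity(0)   # SMP: may run on any core
    bound_mask = bind_to_core(0, 0)       # BMP-style: confined to core 0
    print(f"before: {sorted(free_mask)}, after: {sorted(bound_mask)}")
```

The design point is that the binding is a per-process attribute, so legacy code needs no source changes to coexist with fully parallelized applications on the same chip.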
DDJ: Another area that you've been working on is "adaptive partitioning." What is it and why is it important?
SM: Some OSs support time partitioning, which lets you place applications into partitions and allocate a guaranteed budget of CPU time to each partition. Unfortunately, traditional partitioning schemes are very rigid. For instance, if you allocate a partition 30 percent of CPU time, it will always consume 30 percent, even when processes in that partition have no work to do.
That's why we created QNX Neutrino adaptive partitioning. Unlike conventional schemes, it maximizes CPU utilization by distributing a partition's unused CPU cycles among partitions that could benefit from the extra CPU time.
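The behavior Marineau describes can be modeled with a small toy allocator (this is only an illustration of the idea, not QNX's actual scheduler): every partition keeps its guaranteed budget, and whatever a partition leaves idle is handed to busy partitions in proportion to their budgets:

```python
# Toy model of adaptive partitioning: budgets and demands are percentages
# of CPU time. Guarantees always hold; unused cycles are redistributed.
def allocate(budgets, demands):
    """Return each partition's share of CPU given its budget and demand."""
    # Step 1: every partition gets what it wants, up to its budget.
    alloc = {p: min(budgets[p], demands[p]) for p in budgets}
    # Step 2: hand leftover capacity to still-hungry partitions,
    # weighted by budget, until it is gone or nobody wants more.
    while True:
        spare = 100.0 - sum(alloc.values())
        hungry = [p for p in budgets if demands[p] - alloc[p] > 1e-9]
        if spare <= 1e-9 or not hungry:
            break
        weight = sum(budgets[p] for p in hungry)
        for p in hungry:
            alloc[p] = min(demands[p], alloc[p] + spare * budgets[p] / weight)
    return alloc

if __name__ == "__main__":
    # A 30% partition with no work to do gives its cycles to the busy one.
    print(allocate({"idle": 30, "busy": 70}, {"idle": 0, "busy": 100}))
```

This also captures the DoS-containment point made below: even if one partition demands 100 percent of the CPU, the others still receive their full budgets whenever they have work to run.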
Adaptive partitioning lets you contain denial-of-service (DoS) attacks and eliminate single points of failure. For example, in a nonpartitioned real-time system, a single, high-priority thread can maliciously or inadvertently monopolize the entire CPU; nothing else gets to run. But with adaptive partitioning, threads in other partitions can still get their guaranteed share of system resources, even if they run at a lower priority than the runaway thread.
DDJ: Real-time and embedded systems have become much more pervasive in recent years, particularly in consumers' everyday lives: cell phones, MP3 players, automotive electronics, home appliances, and the like. Is it a challenge for an OS company to support all of these disparate, small-footprint devices?
SM: The componentized nature of QNX technology makes it much easier to support a wide array of systems. For example, the exact same USB stack used in a home appliance can also run in an automotive head unit. All system testing done on that stack is immediately inherited when it is used in other devices.
Wherever possible, we also abstract hardware dependencies from middleware components, allowing, for example, a system integrator to run the same Adobe Flash player on a home appliance as on a personal navigation device (PND). This approach allows for greater reusability without forcing massive code rewrites or intensive retesting.
DDJ: Many OS vendors have embedded features such as security and databases into the OS kernel, which suggests that even as devices get smaller, the software to support them gets larger. Is that the case?
SM: Most OSs don't have efficient models for adding system services outside of the kernel. So, if you want code to run fast and to be easily accessed by other services and applications, you must stick it in the kernel. This leads to two options: 1) creating many different kernels for different product lines, or 2) using the same larger kernel across many products. The first approach leads to much more testing; the second approach forces lower-end products to incur higher RAM and ROM costs.
The QNX microkernel architecture helps streamline the device footprint. Most OS services run as optional components that load and unload on demand; they're never linked into the kernel. The result is a consistent, reliable software foundation where the same kernel can be used across all product lines.
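The component model Marineau outlines can be sketched as a minimal service manager (a hypothetical illustration of the concept, not QNX's actual mechanism): services are registered as optional components, started only on first use, and can be unloaded to reclaim memory on low-end products, all without touching the kernel:

```python
# Hypothetical sketch: OS services live outside the kernel as optional
# components, loaded on demand and unloaded when no longer needed.
class ServiceManager:
    def __init__(self):
        self._factories = {}   # name -> callable that constructs the service
        self._running = {}     # name -> live service instance

    def register(self, name, factory):
        """Make a service available without loading it."""
        self._factories[name] = factory

    def get(self, name):
        """Start the service on first request; the kernel never links it in."""
        if name not in self._running:
            self._running[name] = self._factories[name]()
        return self._running[name]

    def unload(self, name):
        """Stop a service to reclaim RAM on a low-end product."""
        self._running.pop(name, None)

if __name__ == "__main__":
    mgr = ServiceManager()
    mgr.register("usb", lambda: "usb-stack")
    print(mgr.get("usb"))   # loaded on demand
    mgr.unload("usb")       # same kernel, smaller footprint
```

The same registry can describe every product line; only the set of services actually loaded differs, which is what keeps the kernel itself identical across high- and low-end devices.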