The first is to make the minimal changes necessary to maintain the stability and momentum of development. This may work for a while, but it will likely force the company to embrace the other choice: to retire part or all of the system and rebuild it on a foundation that will, at a minimum, meet current and near-future requirements and, optimistically, remain a profitable foundation for a significant time into the future.
This paper describes the journey taken by engineers at Intellibot
Robotics to port the functionality of an autonomous robotic floor
scrubber (referred to simply as "the robot" for the rest of the paper)
from one hardware platform without an operating system to another
running Linux. Our intention is to share the hands-on knowledge gained
and to provide examples useful for making informed business decisions
about using Linux for an embedded system. Knowing what Linux is capable of will guide that choice, and this paper illustrates some of the challenges we faced in discovering those capabilities.
Background on "the robot"
The development project that produced the robot dates back to the early 1990s. To appreciate the migration, it helps to first understand the old robot, since this is the platform we replaced with one running Linux. A 68000 main processor and a handful of peripheral microcontrollers controlled the machine; to drive down cost and add some headroom for more features, later generations migrated to a 68332.
The architecture was fairly typical of small microcontroller-based systems of the time: there was no operating system, the code base was relatively small, and for what it had to do, it was adequately fast. It didn't use an operating system mainly because there wasn't a need for one. The code was single-threaded, and any OS-type services it needed were easily written without concern for what the rest of the system was doing.
Although the product was technically functional, it lacked features to satisfy market needs. Over time, the purpose of the robot grew from simply being an intelligent floor cleaner to becoming an autonomous machine that needed even less human intervention. This growth directly translated to more technical requirements. There were more sensors to manage, more data to collect, more computations to complete in the same amount of time.
With the first few generations of the robot, the addition of new features implemented in software was largely successful. In late 1999, however, the requirements put on the machine began to exceed what the platform could provide. What followed was an attempt to add a large amount of software to the system to meet these new requirements.
Not long into that effort, we discovered that the processor could not run the new software and still meet the requirements of controlling the robot in real time. In embedded systems development, this is what we like to call a "dismal failure". The product development stalled on that front; entrance into that particular new market was put on hold.
After that, requirements were changed to enhance how the machine performed in existing markets, since there was not time to improve the hardware. This new feature set kept the robot a viable product while our engineering team worked at squeezing more functionality out of the existing hardware, but that effort was quickly approaching the processing limits of the platform.
Making the transition
In late 2003, with the company under new ownership, the requirement to
enter new markets became a priority again. Several discussions of the
current capabilities versus the desired capabilities quickly showed the
necessity of migrating the robot to a new platform.
Not knowing whether this would be a triumph or a disaster, our management simply stated that the product should be moved to Linux. Much of the robotic development in academia is happening on Linux and our owner wanted to leverage this. Though we were familiar with Linux on workstations and servers, we hadn't used Linux as an RTOS.
There were many questions we had to answer to make the transition successful. However, with the direction now specified by our management and with the appropriate financial backing, the decision to proceed with Linux was made without any of the research that would typically occur when making such a significant engineering change.
There was no comparison with or evaluation of other RTOSes. There was no investigation into whether Linux was even appropriate from a GNU licensing point of view. We knew that Linux was free, but that was all. Our management was comfortable taking the risk that Linux would meet our needs.
Though we can gladly report that the migration was a success, we recommend that a complete investigation by both engineering and management be undertaken to answer these questions in advance. The rest of this paper exposes some of these questions and the answers we found while attending the School of Hard Knocks.
Finding the right hardware platform
Shortly after the decision was made to move to Linux, we needed to find
a suitable hardware platform capable of running Linux that would also
fit the needs of the robot. The current motherboard of the robot was an
inexpensive custom design ideally suited to the robot, but it lacked the support needed to run Linux.
Therefore, the search began for a low-cost single-board computer (SBC) that could run Linux. Though we preferred a PowerPC processor, the SBCs we found were far more expensive than equivalent x86-based counterparts.
Using the existing subsystems of the robot required an SBC with an SPI bus. Yet x86 processors -- the target where Linux is most mature -- generally do not have an SPI port. Internet searches did not yield anything. By chance, we spotted an ad in Circuit Cellar magazine for an x86 board with an SPI port, which the vendor claimed ran Linux. The board also cost about half as much as our existing custom motherboard. The migration was now under way.
Looking back, here are some things to investigate when searching for a platform to run Linux. If the hardware expertise to design a cheaper target board does not exist in-house, then an off-the-shelf SBC is probably the most cost-effective option.
Also, look for a vendor who not only advertises that Linux runs on their products, but also has the technical support available to help get your application working with Linux. The vendor we chose offers unlimited free technical support, a bootable Linux file system on a CompactFlash card, and extensive downloads and support on their web site. Early on in powering up the system, we discovered a BIOS problem, and their tech support quickly fixed it. Without this support, resolving the problem would have delayed us significantly.
A nice side effect of choosing this vendor is that they now offer SBCs with a faster ARM processor at a lower cost than the x86. With the product ported to Linux, migrating to that board -- or any other SBC that supports Linux -- should entail little more than a recompilation, a significant benefit not yet realized.
Because the SBC had Linux pre-installed, it was simply a matter of applying power and connecting its serial port to a computer. The software support on the stock Linux file system was reasonably complete, and readily provided a login prompt on one of the SBC's serial ports, as well as card support and network drivers for the PCMCIA wireless Ethernet adapter.
In very little time, the SBCs were communicating with other computers on both wired and wireless networks. The next important step was to block out a period of time to explore the capabilities of Linux on the SBC. Understanding how it behaves in a normal situation is imperative before unfortunate situations arise.
Getting started
Up to this point, the new SBC was indistinguishable from other Linux
systems on the network: it responded to ping, telnet, and ftp. The file
system layout was typical of a Linux system. It knew the correct time
and date.
At this point, a key feature of moving to Linux made itself apparent and became a cornerstone of the entire development process. The file system of our Linux server could be NFS-mounted onto the SBC's file system. That meant that files could be effortlessly transferred between the development server and the robot.
Though this mount worked over the wired Ethernet connection, the real benefit was that it worked over a WiFi connection, too. This would mark the first time that the development of the mobile robot would become untethered. The foundation had been laid, upon which the first level, a development environment, would be built.
The more development in an embedded environment resembles native development, the faster and easier development in that embedded environment will be. Cross development adds at least one step to the build process, and cross debugging is tricky and cumbersome in comparison to native development.
Despite this, we had established a workable, though tethered, cross-development and cross-debugging environment in previous generations of the robot. Moving away from this familiar environment to another was a risk. However, the cross-development and debugging system set up in Linux turned out to be just as powerful and simpler to get working -- and it was wireless.
The file system that came with the SBC fit on a 32 MB Compact Flash with some room to spare. Complete as it was for getting the system up and running, it had somewhat limited support in the way of tools and run-time libraries. The file system included no development tools.
The libraries on the file system were older than those on our other Linux systems. The lack of on-board development tools was not a limiting factor, since it was not in the plan to build applications on the SBC; our Linux servers were designated for that. Programs would not have to get very big before building on the target became time- and space-prohibitive, and a Linux workstation provided much better facilities for editing source code.
The build process quickly reduced to building the application on the server, copying it into place on the target SBC via an NFS mount, and running it from the SBC. This was much more convenient than programming ROMs, as was done in the past.
Another difference moving from a no-OS platform to Linux was that there were now libraries of code that could be utilized by the application. The challenge was how to use these in a resource-effective way.
Because the initial set of run-time libraries on the SBC was limited and of an older version than that on the server, the programs either had to be so simple that they only used common run-time libraries between the SBC and the server, or programs had to be statically linked.
Statically linking the application
A statically-linked application can run on the SBC regardless of the
libraries present on the file system, since such an application
contains all the runtime support it requires. This makes even small
statically-linked applications huge compared to the same programs,
dynamically linked.
There is merit to having little or no run-time support on the file system if the system is small with a limited number of applications and tools. In such an arrangement, all applications (including the command shell and widely used tools such as ls, cp, and mv) would have to be statically linked, each containing its own copy of the libraries it requires to run.
Resources are quickly consumed when many applications each carry their own copy of identical libraries, because each statically-linked application loads its own run-time support into RAM when it runs. Dynamic libraries, in contrast, save RAM: they are loaded into memory once, regardless of the number of applications running at any given time. Linking against the libraries already on the SBC, rather than those on the server, though possible, was not considered because they lacked complete support for POSIX threads.
Although the robot application was single-threaded, there were already thoughts of splitting it apart into many threads, so POSIX-complete libraries were important. Considering all these factors, we decided to make the libraries on the SBC match what we had on our servers. This was simply a matter of copying the libraries from the server to the SBC. Since the SBC's runtime environment could easily be a subset of the server's, only the libraries that were needed were copied at first.
Using ldd, a utility that reports the shared-library dependencies of a given application, we made a good first guess at the contents of this minimal set. As various required tools were put onto the file system, the accompanying run-time library support was identified and added, until the support was so complete that it was no longer necessary to check the dependencies.
Libraries, tool chains and building blocks
That set of libraries is what resides on the file system today,
providing a very complete and convenient environment for development,
debugging, and deployment. Applications can be built on the server,
dynamically linked, copied to the target, and run. Debugging tools such
as gdb were deployed in exactly the same way; the very same version
that runs on the server also runs on the target.
Another advantage of this approach is that it was not necessary to rebuild the development tool chain in order to build applications for the target. The tool chain already installed on the server builds applications for the target just as easily as it does for itself. With this arrangement as a baseline, we could, with reasonable confidence, rebuild the tool chain on this platform or some other, and continue development from there. The vendor's initial file system layout was helpful from the beginning, as it facilitated quick startup early on, and provided the basis for configuration changes and improvements made since then. This incremental approach to modifying the layout and contents of the file system kept the number of variables low, which was key to maintaining stability of the system. Changing one thing at a time, we developed and maintained a high level of comfort with the system as a whole.
Also beneficial in getting the development process started was having another Linux system readily available for building applications, and as a repository for developing and storing source code. On a more capable target system, the development could be done completely natively, editing, compiling, building, and running early applications all on the target. In our case, the target was not sufficient to carry all those responsibilities, nor did it have to be. The server was a base camp, from which many reconnaissance missions into new territory on the target were launched and successfully carried out.
Having a PCMCIA slot on the SBC meant one thing above all else: our product had the ability to enter the world of wireless networks by simply inserting a standard 802.11 card. Many embedded systems are physically stationary. For development, they sit on the workbench, and the network comes to them in the form of an RJ-45 jack.
Before the Linux port, the only line of communication with the robot was an RS-232 port and a radio modem to break the physical connection. Breaking the physical tether was an important part of protecting a company's investment in development equipment: laptop computers in motion tend to stay in motion, regardless of any sudden stop on the part of the robot on which they're riding.
Kernels, modules, and device drivers
Linux is large compared to a typical RTOS, but it is organized so that
understanding it does not require swallowing it whole. Like an onion,
Linux can be peeled away in layers and understanding each layer can be
accomplished without the need to dig deeper.
Unless your name frequently comes up in the Linux kernel source tree, you're quite likely still several layers up from the core. There's nothing wrong with that; we are nowhere near all the way down. The important thing is that you understand just enough to get your job done, and this is possible with Linux.
Linux is widely regarded as a complete operating system, including a kernel, a file system layout, a command shell, tools, and sometimes even a graphical user interface. This may be true, but Linux refers primarily to the innermost part, the kernel, originally written by Linus Torvalds. The kernel is what boots up the computer, makes it run, and provides the operating-system services that applications need to make the computer do useful things.
Whether or not one modifies the kernel (as we have done), porting a product to Linux at least initially involves the kernel, as it forms the basis for all other work required to get the system running. Depending upon the hardware vendor, the kernel may already be installed, which is what we recommend for a first project.
Linux is developed and released under the GNU General Public License. Its source code is freely available to everyone. This open-source nature of Linux allows anyone to view, modify, apply, and extend it for their own purposes. Chances are, if a person adds to Linux, they want to make their work accessible under the same License, and as a result, Linux gets better.
For each piece of hardware supported by Linux, it is almost certain that somebody somewhere has written a device driver to make the hardware work with the system. Device drivers are distinct pieces of software that enable a certain hardware device to respond to a well-defined internal programming interface, hiding how the hardware device works behind that interface.
In such an arrangement, users (and applications) do not have to understand how the hardware works in order to use it [1]. If there is specialized hardware in your system, you will have to decide whether to control that hardware completely in your application, or write a device driver to do it. A later section discusses this decision at length.
Whether the goal is a device driver or not, the main (and easiest) approach to building onto the Linux kernel involves no changes to the kernel itself: making a kernel module.
Unlike an application, which performs a specific task from beginning to end, a module attaches to the kernel and registers itself with the kernel in order to serve future requests [2]. Modules can be built completely separately from the kernel source tree. The kernel can load and unload them easily; there is a well-established mechanism inside the kernel to do this. Modules are simply a vehicle to dynamically add functionality to the kernel.
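To make this concrete, the skeleton of a loadable module is very small. The following is a minimal sketch (module and function names are illustrative); it is built against the kernel headers, loaded with insmod, and removed with rmmod:

    /* Minimal sketch of a loadable kernel module. */
    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/kernel.h>

    MODULE_LICENSE("GPL");

    static int __init example_init(void)
    {
        printk(KERN_INFO "example: module loaded\n");
        return 0;                 /* zero means success; the module stays resident */
    }

    static void __exit example_exit(void)
    {
        printk(KERN_INFO "example: module unloaded\n");
    }

    module_init(example_init);    /* called when the module is loaded */
    module_exit(example_exit);    /* called when the module is removed */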
These two constructs, modules and device drivers, are often related, but are not synonymous. A module may contain a device driver. A device driver may be composed of one or more modules. While a device driver may exist in source code files that form one or more modules, that device driver may also be built right into the kernel (it then becomes a part of the "kernel proper"), eliminating the need to load the module after the system has booted.
Loadable modules provide flexibility and changeability at the cost of having additional files on the file system, which must be loaded before the device driver can be used. Building the device driver as part of the kernel eliminates the need to load the module, but requires a rebuilding of the kernel (and reinstallation of the kernel image)when the device driver is modified.
While a device driver (or other addition to the kernel) is in development, it can take the form of loadable modules until it gains a level of maturity. This can make module and device driver development easier since the kernel source can remain unmodified.
Writing device drivers
Not all of the old robot was to be redesigned or replaced. A
self-imposed requirement was to have the new SBC communicate seamlessly
with several existing peripherals on an SPI bus. New SPI peripherals
would also be added. The problem of controlling this interface easily
reduced to two parts: writing software to control the new processor's
SPI port, and writing software to implement the communication protocols
employed by the peripherals.
The latter naturally fell into the purview of the application. Protocols do (and did) change. Doing this work in the application allowed for support of existing protocols and the ability to add new ones easily. The former presented a fundamental decision: should the control of the SPI port occur in the application or in a device driver?
In an embedded system, it's common practice to roll all the code required to control the entire system into one piece of application-level software. While this is often the only choice on bare metal, when an operating system forms the basis of the software, this all-in-one approach rejects many of the benefits of building an application on that operating system. Most notably, it ignores the ability of the kernel to arbitrate demands for access to system resources, especially input/output devices such as the SPI port.
Knowing that others had successfully implemented support for this type of interface completely in application space, it was tempting to code up control of the SPI port completely within the application. Early on, this is how the hardware was tested and from this we gained understanding of running the SPI port on this new platform.
In the long run, however, controlling the SPI port entirely in application space would not meet the requirements. This approach did not make good use of the operating system's benefits. For instance, the original, single-threaded control system was about to become multithreaded (the current system spawns over twenty threads, several of which access the SPI).
Managing the SPI in the application meant having to develop elaborate thread-safe software to arbitrate access to the interface. In addition, since a Linux-based application cannot respond directly to hardware interrupts, it is resigned to polling, delaying, and looping. The operating system already does this low-level work, and is very efficient at it; all the programmer must do is provide, in a device driver, the details of how to operate this particular device.
With a device driver, multithreaded, thread-safe, application-level access to the device becomes effortless. Also, accessing hardware through a device driver affords the system designers a convenient separation of mechanism (working the device) from policy (using the device) [3].
The application doesn't need to know anything about how to operate the device. The application is insulated by the operating system from changes at the level of the device. In theory, the interface, or the way it is operated, could completely change without requiring any change in (or rebuilding of) the application.
Having seen the benefits of device drivers, the time came to learn how to implement them. Rubini & Corbet's book Linux Device Drivers (O'Reilly) became required reading, and remains the preferred reference book.
First attempt at a kernel module
A first attempt at a kernel module containing a device driver for the
SPI was completed in a week. In retrospect, this first module was a
small step, essentially moving the application-level, polled SPI
support into the kernel. Small step or giant leap, the entire robot was
now being controlled using standard open(), write(), read(), and close()
system calls from application space to access the hardware. The
application now contained none of the nuts and bolts of handling the
interface.
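Application-side access then looked like ordinary Unix file I/O. The sketch below illustrates the idea; the device node name and the message contents are hypothetical, not the actual protocol used by the robot:

    /* Sketch of application-level access to the SPI through a device driver. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        uint8_t command[2] = { 0x01, 0x00 };    /* hypothetical peripheral command */
        uint8_t reply[2];
        int fd = open("/dev/spi", O_RDWR);      /* hypothetical device node */

        if (fd < 0) {
            perror("open /dev/spi");
            return 1;
        }

        if (write(fd, command, sizeof(command)) != sizeof(command))
            perror("write");                    /* driver carries out the transfer */
        if (read(fd, reply, sizeof(reply)) != sizeof(reply))
            perror("read");                     /* driver returns the peripheral's reply */

        printf("reply: %02x %02x\n", reply[0], reply[1]);
        close(fd);
        return 0;
    }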
Though the application now had nothing to do with controlling the hardware of the SPI port, the read() and write() calls were not any more efficient at runtime than the application-level code they replaced. The SBC's processor provides rich hardware support for SPI.
To use this support to its full potential would require redesigning the driver to exploit the ability of the processor's internal SPI hardware to interrupt the CPU. This would remove any polling and looping in the SPI driver code, freeing up more CPU cycles for the application, and making SPI communication as execution-efficient as it could be.
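The heart of such a design is an interrupt handler that finishes the bookkeeping for a transfer and wakes any thread sleeping inside the driver. The fragment below sketches that pattern with illustrative names (spi_isr, spi_wq, spi_done); it is not the actual driver, and the handler signature shown is the later 2.6 form (earlier kernels pass an additional struct pt_regs pointer):

    /* Sketch of the interrupt-driven pattern inside a driver's read path. */
    #include <linux/fs.h>
    #include <linux/interrupt.h>
    #include <linux/sched.h>
    #include <linux/wait.h>

    static DECLARE_WAIT_QUEUE_HEAD(spi_wq);
    static int spi_done;

    static irqreturn_t spi_isr(int irq, void *dev_id)
    {
        /* ...acknowledge the SPI controller and collect received data... */
        spi_done = 1;
        wake_up_interruptible(&spi_wq);               /* let the sleeping reader run */
        return IRQ_HANDLED;
    }

    static ssize_t spi_read(struct file *filp, char __user *buf,
                            size_t count, loff_t *ppos)
    {
        /* ...start the transfer in the SPI hardware... */
        wait_event_interruptible(spi_wq, spi_done);   /* sleep; no polling or looping */
        spi_done = 0;
        /* ...copy the received bytes to buf with copy_to_user()... */
        return count;
    }

    /* request_irq() in the module's init code attaches spi_isr to the SPI interrupt. */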
While not especially easy, the task of rewriting the driver to fit an interrupt-driven model was made as easy as it could be, thanks again to Rubini and Corbet. Not only did they lay out all the elements necessary to make a design like this work (and work it does),they also focused attention on almost all the pitfalls we were to encounter while writing the device driver.
This information enabled us to preempt many of the typical problems accompanying development in this space, especially those involving race conditions and resource contention that are so common when fielding multiple requests for hardware and common memory.
This experience repeated a message that emerges again and again when one works with open-source software at any level: support for what you want to do almost certainly exists. It may take the form of a book, a HowTo, a Google search, a newsgroup or mailing list, a phone call to a colleague, a visit to a local User Group, or inspection of the source code itself. The independent variable becomes your ability and willingness to reach the information you need.
Releasing control
One of the biggest changes going from the old embedded system to the
SBC running Linux was powering down the machine. In the past, turning
the key switch to OFF killed power to the robot and the hardware shut
down properly.
Doing the same to the new robot with Linux running on the SBC was dangerous. The file system could easily become corrupt because the kernel would not have an opportunity to write cached data to the file system. Since the application was no longer running on bare metal, it should no longer directly control the hardware.
The safest approach is to allow the kernel to perform all of its shutdown tasks, such as unmounting file systems and taking down network interfaces, before removing power. In cases where the file system is mounted read-only, it may be safe to simply cut power.
If write caching is disabled on a read-write file system, it can be made safe, provided the file system is allowed to become quiescent before power is removed. Since the application must relinquish all control of the system long before the kernel shuts down, we needed the kernel's cooperation to solve the problem.
The Linux kernel did not disappoint. Within a day, including research (the Linux kernel source, and Rubini & Corbet again), we had written a 41-line C module that registers with the kernel to be notified upon halt or shutdown. A startup script loads the module. When notified by the kernel, the module exercises the I/O line connected to the relay that removes power from the machine, but only after the kernel has finished all its shutdown tasks. This provides a clean shutdown of the system.
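The actual 41-line module is not reproduced here, but a sketch of its general shape, assuming the kernel's reboot-notifier interface and a hypothetical I/O port address for the relay line, looks roughly like this:

    /* Sketch of a power-off notifier module; port address and value are hypothetical. */
    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/notifier.h>
    #include <linux/reboot.h>
    #include <asm/io.h>

    MODULE_LICENSE("GPL");

    #define RELAY_PORT 0x378                      /* hypothetical port wired to the relay */

    static int poweroff_notify(struct notifier_block *nb,
                               unsigned long event, void *unused)
    {
        if (event == SYS_HALT || event == SYS_POWER_OFF)
            outb(0x00, RELAY_PORT);               /* drop the line, opening the relay */
        return NOTIFY_DONE;
    }

    static struct notifier_block poweroff_nb = {
        .notifier_call = poweroff_notify,
    };

    static int __init poweroff_init(void)
    {
        register_reboot_notifier(&poweroff_nb);   /* ask to be called at halt/shutdown */
        return 0;
    }

    static void __exit poweroff_exit(void)
    {
        unregister_reboot_notifier(&poweroff_nb);
    }

    module_init(poweroff_init);
    module_exit(poweroff_exit);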
Kernel timing
Once the interrupt-driven SPI device driver was implemented and the
application was using it to communicate with all the peripherals to
control the robot, it was time to determine the character of the
traffic on this multiplexed SPI bus. After all, this was a stock Linux
kernel, not a flavor of Linux enhanced for optimum real-time
performance.
Hard timing requirements could not be imposed as in a typical embedded system. Had the interrupt-driven model really allowed the multiplexing of the SPI bus as we had envisioned? What did the timing between SPI transfers look like? What was the overall throughput of the SPI bus? Was this arrangement going to make it easier for the application to accomplish all the communication tasks it had, quickly enough to satisfy the soft real-time requirements of the robot?
In an embedded system, one does not have to get very far down into the code before software tools become cumbersome in determining the real-time performance of the system. Gauging the efficacy of the SPI device driver at the level of clock cycles, individual transfers, and interrupt service latency solely with software tools would be like determining a person's facial features using a bowling ball as a probe.
Fortunately, much of the hardware outside the SBC was of our own design. The signals of the SPI bus were easily accessible to a logic analyzer, which handed us measurements on a platter. After adding some "instrumentation" in the form of toggling digital outputs at critical points in the application and the driver, hard evidence showed that the application was performing transactions with the peripherals through the device driver, and the driver was interleaving those transactions on the SPI bus almost as expected.
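The "instrumentation" was nothing more than driving a spare digital output around the code of interest so the logic analyzer could time it. A user-space sketch of the idea follows; the port address and bit value are hypothetical, and ioperm() requires root privileges:

    /* Toggle a spare digital output around a critical section for the logic analyzer. */
    #include <stdio.h>
    #include <sys/io.h>

    #define DEBUG_PORT 0x378                  /* hypothetical output port on the analyzer */
    #define DEBUG_BIT  0x01

    int main(void)
    {
        if (ioperm(DEBUG_PORT, 1, 1) != 0) {  /* request access to the I/O port */
            perror("ioperm");
            return 1;
        }

        outb(DEBUG_BIT, DEBUG_PORT);          /* mark: entering the code of interest */
        /* ...start the SPI transfer here... */
        outb(0x00, DEBUG_PORT);               /* mark: done */

        return 0;
    }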
Peripherals with a microcontroller of their own required a delay between transfers to allow the microcontroller to service its transfer-complete interrupt and prepare for the next transfer. This delay is typically 200 microseconds. This magnitude of delay, we reasoned, should be an easy matter for the application, in which the thread involved would, after initiating a transfer, sleep for however long the peripheral needed, freeing up the main CPU for other things, then wake up after the delay to begin the next transfer.
We therefore expected to see a transfer with one peripheral, then a 200-microsecond delay during which other transfers could occur on the SPI bus, then another transfer with the peripheral.
What we found instead were delays on the order of 10 milliseconds! Not only did this completely go against our intention of dense, bursty utilization of the bus, but it also made transactions which ought to take 2 milliseconds (approximately ten transfers separated by a delay of 200 microseconds) take over 100ms to complete.
While it's true that our utilization of the CPU had greatly improved, the throughput of the system was abysmal; at some point, the application still had to wait for communication with peripherals to finish to complete its processing, and this could not happen and meet even the soft real-time needs.
What was causing this long delay, almost two orders of magnitude greater than we had intended? How could a usleep(200) call (theoretically putting a thread to sleep for 200us) put off the task for a whole 10ms?
As with previous device driver work, assistance was waiting in the typical places where one finds help when working with open-source software. In this case, it was in the kernel source, along with, yet again, Rubini and Corbet. The answer lay in the granularity of the kernel's timing.
The kernel's main timer runs, by default, at 100Hz on a typical x86 system. This value is determined by a macro, aptly named HZ, which is defined inside include/asm-i386/param.h for an x86 target. The kernel timer, with its 10ms period, is responsible for much of the timing of the system, including task switching. It became clear that the best granularity we could hope for, regardless of the value we passed to usleep(), was 10ms unless the definition of this macro was changed.
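A small user-space test makes the granularity visible. The sketch below simply asks for a 200-microsecond sleep and measures what is actually delivered; on a stock HZ=100 kernel of that era, the answer is on the order of 10 milliseconds:

    /* Measure how long usleep(200) really sleeps. */
    #include <stdio.h>
    #include <sys/time.h>
    #include <unistd.h>

    int main(void)
    {
        struct timeval before, after;
        long elapsed_us;

        gettimeofday(&before, NULL);
        usleep(200);                           /* request 200 microseconds */
        gettimeofday(&after, NULL);

        elapsed_us = (after.tv_sec - before.tv_sec) * 1000000L
                   + (after.tv_usec - before.tv_usec);
        printf("usleep(200) actually slept %ld us\n", elapsed_us);
        return 0;
    }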
To prove the significance of this setting, we changed the value of
HZ to 1000, recompiled and installed the kernel, and ran the test
again. The delays decreased by a factor of ten. Granularity of 1ms was
better, but not good enough to achieve 200us timing between transfers.
The bus was still strangled, providing only one-fifth of the required throughput.
In a surprise discovery, we solved the problem another way using gettimeofday(), which provides the date and time down to the microsecond. A busy-wait loop that never gave up the CPU to other threads would have been patently unacceptable. However, in implementing and testing a gettimeofday()-based delay loop, the surprise was that the call did in fact release the CPU to other threads, in a way that seemed to allow the delaying thread to run again before the beginning of the next timer interval. Digging into the kernel source code confirmed that gettimeofday() does indeed yield the processor.
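The resulting delay routine is only a few lines. A minimal sketch (the function name is illustrative):

    /* Fine-grained delay built on gettimeofday(); each call can yield the CPU. */
    #include <sys/time.h>

    static void delay_microseconds(long usec)
    {
        struct timeval start, now;
        long elapsed;

        gettimeofday(&start, NULL);
        do {
            gettimeofday(&now, NULL);          /* the call itself lets other threads run */
            elapsed = (now.tv_sec - start.tv_sec) * 1000000L
                    + (now.tv_usec - start.tv_usec);
        } while (elapsed < usec);
    }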
The new delay not only gave the fine timing needed, but satisfied the requirements of allowing multithreaded operation even during multibyte transactions on the SPI bus. Even though HZ no longer needed to be set to 1000 for SPI timing, the value was kept in our kernel to provide 1ms-grained delays using usleep() throughout the rest of the application.
One may wonder about the run-time consequences of multiplying by ten the frequency of the system timer. In the final analysis, this change appeared to have an insignificant effect on CPU loading, and an immeasurable effect on the run-time characteristics of our system. This did not come as much of a surprise, considering the efficiency of the Linux kernel timer code and the speed of the CPU.
That said, increasing HZ was not without consequence. At some point long after the SPI timing problem was consigned to the pages of engineering notebooks, a new WiFi card was installed. Poor network performance and frequent network failures suddenly ensued.
We traced the problem down to timeouts in the device driver for the new interface, which, while dependent upon the kernel timer, were not expressed in the source code in terms of HZ. As a result, the timeouts were expiring in one-tenth the time they would have before HZ was increased. By changing HZ to improve the Linux kernel timing granularity, we had broken the device driver. Repairing the driver was simply a matter of correcting the length of the timeouts, and recompiling the module.
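The distinction between the two ways of writing a timeout is easy to show; the values below are illustrative:

    /* Kernel-side timeouts: raw jiffy counts scale with HZ, HZ-based ones do not. */
    #include <linux/jiffies.h>
    #include <linux/param.h>                   /* HZ */

    static void timeout_example(void)
    {
        /* Fragile: "50" meant 500 ms only while HZ was 100;
         * at HZ = 1000 it expires after just 50 ms. */
        unsigned long fragile = jiffies + 50;

        /* Robust: half a second regardless of the kernel's HZ value. */
        unsigned long robust = jiffies + HZ / 2;

        /* ...later, test expiry with time_after(jiffies, fragile)... */
        (void)fragile;
        (void)robust;
    }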
The experience taught us three lessons.
First, such a fundamental change as HZ can and often will have far-reaching effects, some subtle and misleading. While neither new to us nor unique to Linux, it was important for us to receive this lesson.
Second, we saw once again that digging into the kernel source code is not only allowed, but also encouraged (as shown by our success). In this case, we saw an error message on the console when we tried to use the device. By searching the kernel source tree for this error message, we were able to locate the driver source file and zero in on the problem relatively quickly.
Third, we learned that in writing kernel modules, clear, concise, unique run-time error messages are vital to drilling in on the cause of a problem: given a cryptic, non-unique error message in the case of the driver-timing problem, it would have taken far longer to get to the cause of the problem, were we able to find it in a reasonable amount of time at all.
Dealing with SPI transfer timing
For SPI transfer timing, we could have built precise delays right into
the SPI device driver to provide the right amount of time between
transfers. This was never a viable option.
First, looping for a delay in the driver goes against the intent of the interrupt-driven design, whose beauty lies in its ability to initiate a transfer, return right back to the application, and gather the results of the transfer through an interrupt, all while consuming minimal CPU cycles. Waiting in the driver would rob the application of those CPU cycles.
Second, doing so would violate Rubini & Corbet's separation of concerns (mechanism and policy), unduly forcing upon the driver protocol-related constraints regarding how multiple transfers are to be carried out. The driver's job is to serve the interface to the application in a simple, straightforward way; it's the application's job to worry about the interface's use and purpose.
The astute reader will notice that there exist multiple answers to questions of implementation, mostly in the form of kernel patches, allowing real-time extensions to Linux. These extensions take many forms, and promise millisecond or microsecond granularity, some without busy-wait loops. Also, our processor has fine-grained timing hardware built in, which may help solve this problem.
In the end, the gettimeofday()-loop approach was chosen because it required no change to the kernel, and because it was easy to implement. There was also some comfort inherent in staying with a stock kernel as much as possible. Finally, time pressure on the project forced a quick resolution. Implementing this delay completely in the application also provided flexibility to change the approach easily whenever needed.
As the Linux kernel continues to improve with each release, we will likely revisit and improve upon our solution to this timing problem. However, the current solution works and meets today's needs. When a better solution is required, we will once again ply the market and the Internet for solutions to this real-time problem. With our embedded Linux experience deeper now than when we first wrote of this problem in our notebooks, we may be able to arrive at a more elegant solution.
Overall impressions
You get what you pay for. This phrase applies almost universally, but
as with statements like it, context is significant. What are you paying
for? A product that works as expected. What are you paying? To an
embedded systems engineer, the currencies are money and time. Both are
easy to spend. In a perfect world, you spend one to save the other. In
reality, that's a neat trick.
Linux is free only if your time has no value. [4] Applying the perfect-world time/money model, what you gain by not paying money for Linux is lost in the time you spend bending it to your will. The equation is the same, but the coefficients change, depending on the objective, and on the person, organization, or culture involved in the effort. How "free" Linux is depends on the value that entity places on its money and time.
No matter what the project, technical support is required. Whether that support is paid for, or is simply gathered from experience, it has to be paid for with money or time. From our experience with Linux we don't feel that we're missing anything by not having technical support only a phone call away.
The return on the time investment has been positive. The framers of the C Programming Language said that C "wears well as one's experience with it grows." [5] This is true of almost every tool that people use. Linux is no exception; as we continue to apply it to the control of the robot, our competence and comfort level with it increases.
It's unlikely that we would have been so willing to port this product to Linux without a high level of understanding and experience with it. This statement is meant less as advice than as an assessment of our sense of caution. A much more adventuresome reader may jump headfirst into such a task after having gained quite a bit less experience, and will likely be successful, having made the commitment to devote time and effort to research and learn, and to solve problems as they arise.
Our effort to bring to life the new generation of autonomous floor scrubber took about 22 months. Of that, two software engineers spent about 2 months focused on device driver development and about 13 months fine-tuning the application to take advantage of Linux and run on the new hardware with several new peripherals.
The cost in time yielded a tremendous gain of knowledge that may not have been gained through traditional technical support models. In retrospect, the only change that we would recommend would be to have more engineers from the start, but wouldn't we all want that?
Conclusion
Linux is not the right tool for every job; there will always be
problems that are best solved by an 8-bit microcontroller running 8k of
carefully crafted C or assembly language. Systems with stringent,
microsecond-level timing requirements may not be a good fit for
Linux either. As embedded systems become more complex, and hardware
becomes more capable, the sphere of sensible embedded applications of
Linux grows.
However, for our robot, Linux was and is an excellent choice. It provides all the flexibility one expects from open-source software, meets the current real-time demands of the system, and leverages our Linux experience well. When problems appear, answers are forthcoming through one or more of the many support channels. In most cases, the problems are solved more quickly than would be the case were a vendor's Technical Support department involved.
Looking back over the journey we've taken, it is thrilling to see that the effort of porting to Linux has been a success; the robot works as desired and is capable of being improved upon for a long time.
We are so comfortable with the Linux foundation that there is no consideration of moving the code to another RTOS -- yet we retain the flexibility to move quickly to another hardware platform. If your next embedded application qualifies, we encourage you to consider using Linux.
David Knuth is director of engineering and Daniel Daly is senior software engineer at Intellibot Robotics, LLC
This article is excerpted from a paper of the same name presented at the Embedded Systems Conference Silicon Valley 2006. Used with permission of the Embedded Systems Conference. For more information, please visit www.embedded.com/esc/sv.
References
[1] Rubini, A. & Corbet, J. (2001). Linux Device Drivers (2nd Ed.). O'Reilly, p. 1.
[2] Rubini, A. & Corbet, J. (2001). Linux Device Drivers (2nd Ed.). O'Reilly, p. 16.
[3] Rubini, A. & Corbet, J. (2001). Linux Device Drivers (2nd Ed.). O'Reilly, p. 2.
[4] Zawinski, J. (1998, 2000). Mouthing Off About Linux. http://www.jwz.org/doc/linux.html
[5] Kernighan, B. & Ritchie, D. (1988). The C Programming Language (2nd Ed.). Prentice Hall, p. x.
[6] The Linux FAQ, linux.org, various entries.
[7] The Linux source code tree, http://www.kernel.org/, various entries.