Everett is president and senior software developer for Taygeta Scientific and president of the Forth Interest Group. He can be contacted at firstname.lastname@example.org or http://www.taygeta.com/.
Designing and building autonomous robots presents a host of technical challenges. On the hardware side, for instance, you have to contend with computational and interface electronics, torques, pulleys, gears, and levers. On the software side, the challenges involve real-time data acquisition, real-time process control, artificial intelligence, and other issues familiar to embedded-systems designers. In this article, I'll describe the high-level processing I implemented in a robot designed for researching robot-control systems. (The source code implementing this processing is available electronically; see "Availability," page 3.)
Since the purpose of this robot was to examine the control problems at a relatively high level, you may wonder why I went to the trouble of building a physical robot instead of just developing the necessary control techniques through simulation. The main reason for building the device is that it is extremely difficult to honestly simulate the effects that occur in a real, uncontrolled environment: Infrared sensors can give false positives or negatives, the sonar system can see false echoes (or the sound gets absorbed instead of reflected by an object), motors can stall, power supplies sag, and walking robots can tumble when the center of gravity shifts. All these effects can be a challenge to realistically simulate. Consequently, simulations are necessarily idealistic, and so, to quote R.A. Brooks and M.J. Mataric, "Simulations are doomed to succeed" ("Real Robots, Real Learning Problems," Robot Learning, edited by J.H. Connell and S. Mahadevan, Kluwer Academic Publishers, 1993). Of course there is an even more important reason to actually build a robot -- it's fun.
Anatomy of a Robot
The robot I built is a six-legged walking robot approximately 22 inches long and 10 inches wide, with 7-inch-long legs. Its sensors consist of a pair each of ranging ultrasonic sonars, active infrared (IR) light detectors for proximity detection, and contact-sensor whiskers about 8 inches long. The sonar works in the range of about 9 inches to 30 feet, and is used for long-distance object detection and mapping. The IR system is tuned to detect objects from about 14 inches away. The whiskers detect actual contact of the robot with an object.
All sensors are mounted on a sensor head that can move +/- 45 degrees to either side, as well as up and down. Putting the sensors on the head eliminates the need for expensive rings of sensors that are common on autonomous robots, but adds the complication of having to know where the head is pointing in order to provide a proper context for the sensor data.
In theory, the sonar system can pick out an obstacle before the IR notices it and both of these should report it before the contact sensor detects it. This redundancy improves the reliability of the robot since each of these object-detection mechanisms uses a different physical principle, making it less likely that they would all fail to see something.
Building a walking robot is considerably more challenging than I originally anticipated. After several frustrating weeks, I learned the techniques required to get a statically stable robot (as opposed to a dynamically stable robot which needs to actively balance itself) to successfully walk. S.M. Song and K.J. Waldron's Machines that Walk: The Adaptive Suspension Vehicle (MIT Press, 1989) is an excellent source of information on motion control for this kind of robot. After experimentation, I developed a library of possible gaits for the robot. Each gait consists of a sequence of (what I call) cycles for each of the six legs in a particular order. A cycle consists of:
- Raising the leg.
- Swinging it forward or backward a certain amount.
- Lowering the leg back to the ground.
- Pushing or pulling the leg back to the centered position again.
By combining variations in forward and backward cycles, and in the order of the sequence of legs being moved, the robot can be made to walk forward, backward, turn in either direction, and handle variations in the slope or roughness of the terrain. Song and Waldron describe some very elaborate gaits for doing things such as negotiating steps, crossing trenches, and moving over fences.
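The four-step cycle above can be sketched as a small C state stepper. The phase names and the step function are illustrative only, and the actual servo commands are elided to comments:

```c
/* One leg cycle as described above: raise, swing, lower, recenter.
 * step_leg() advances a leg through the four phases; servo commands
 * are stubbed out, and all names here are illustrative. */
typedef enum { RAISE, SWING, LOWER, RECENTER, DONE } cycle_phase;

typedef struct {
    int leg;            /* leg number, 1..6                    */
    int swing_amount;   /* positive = forward, negative = back */
    cycle_phase phase;
} leg_cycle;

cycle_phase step_leg(leg_cycle *c)
{
    switch (c->phase) {
    case RAISE:    /* lift the foot clear of the ground  */ c->phase = SWING;    break;
    case SWING:    /* move it forward or backward        */ c->phase = LOWER;    break;
    case LOWER:    /* put the foot back on the ground    */ c->phase = RECENTER; break;
    case RECENTER: /* push/pull body; leg ends centered  */ c->phase = DONE;     break;
    case DONE:     break;
    }
    return c->phase;
}
```

Varying `swing_amount` per leg is what turns the same cycle machinery into forward walking, backing up, or turning.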
There are two major categories of walking-machine gaits: periodic and nonperiodic. Nonperiodic gaits are useful in complicated terrain, but they require foot-contact and pressure feedback. Periodic gaits always repeat the same sequence of cycles and are easier to implement.
The gaits I generally use with the robot are backward-wave gaits and equal-phase gaits. Wave gaits have the best stability margin and are commonly used by animals. In backward-wave gaits, the phase of the leg placement sequence on each side runs from the front leg to the rear (there is also a forward-wave gait that runs the other way). Equal-phase gaits are not common in nature, but for robots they can be very useful. These gaits distribute the placing and lifting of the legs evenly over the complete motion cycle, which means that the power demand to drive the legs is relatively even. As a result, equal-phase gaits are very popular in walking robots that are hydraulically or pneumatically driven.
Two of the gaits in the library that I use for walking forward are the backward-wave gait sequence (1, 2, 3, 4, 5, 6) and the equal-phase gait sequence (1, 4, 5, 2, 3, 6). By convention, walking-machine legs are numbered from front to back, with odd-numbered legs on the left and even-numbered legs on the right. These particular sequences do a complete cycle of one leg before starting the cycle for the next leg. Some gaits have overlapping timing of the cycles; with these, the robot can move more quickly, but is less stable.
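The two sequences, together with the leg-numbering convention, can be written down as a short C sketch. The names are mine, not from the robot's Forth code, and the per-leg cycle routine is left as a callback:

```c
/* The two forward gaits mentioned above, expressed as leg orderings.
 * Legs are numbered front to back, odd on the left, even on the right.
 * All names are illustrative, not from the robot's gait library. */
static const int backward_wave[6] = { 1, 2, 3, 4, 5, 6 };
static const int equal_phase[6]   = { 1, 4, 5, 2, 3, 6 };

/* 1 = left side, 0 = right side */
int left_side(int leg) { return leg % 2 == 1; }

/* One full pass of a gait with non-overlapping timing: each leg's
 * cycle completes before the next leg's cycle starts. */
void run_gait(const int seq[6], void (*cycle_leg)(int))
{
    for (int i = 0; i < 6; i++)
        cycle_leg(seq[i]);
}
```

Swapping in a different six-element array changes the gait without touching the cycle code itself, which is essentially how the gait library works at the Forth level.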
The robot has a hierarchical design. At the very low levels, I used dedicated microcontrollers to control functions like generating the real-time PCM signal to control the leg servo motors (this is done with an MC146805K1 processor for each leg) and modulating the IR and debouncing the contact switches (this is done with a Basic Stamp PIC controller).
High-level decisions and evaluations of the robot's environment required a microprocessor. The high-level system I used is a single-board MC68332-based system from New Micros Inc. In this article, I'll primarily focus on the software that resides on this system.
Since the robot is designed to be autonomous, there is no special user interface. You communicate with the master CPU via its built-in RS-232 interface and interact with its on-board Forth interpreter/compiler. The Forth system acts as both an operating system and programming language for the robot. Forth is nice for developing robot-control software because it can be incrementally compiled and tested in small parts. For example, to determine how high to step while walking, I mounted the robot on its test stand (a camera tripod) and typed in the command to select one leg, then typed the command to raise it until it looked right. Having a 32-bit processor like the 68332 as the master CPU was a joy. Not only is the 68332 a useful processor for controlling devices (because of the large number and flexibility of its I/O lines), but it has the computational power to do sophisticated processing and still satisfy the real-time needs of controlling a robot.
The Motorola 68332
The short description of the Motorola 32-bit 68332 microcontroller is that it is an updated 68000 processor with several integrated coprocessors for memory management and I/O support. The two most important coprocessors are the Queued Serial Module (the QSM) and the Time Processor Unit (the TPU).
The QSM provides a pair of asynchronous serial I/O lines and several SPI synchronous I/O lines. Using the asynchronous I/O is much like having a UART on chip: You set a few configuration registers, which control such things as the baud rate, parity, and so on, and then serial I/O can be run in the background with no need for direct management from the CPU. The SPI section of the QSM implements Motorola's bit-synchronous I/O system. This is a three-wire (Master-In, Master-Out, and Clock) system that is supported by many devices such as A/D chips. There are significantly more registers to set in order to use SPI, because of the large variety of possible configurations. The data-transfer protocol is different, but conceptually, the configuration mechanism is the same as for the asynchronous I/O: You configure the subsystem by setting the registers and then the I/O occurs in the background. Without adding external circuitry, up to four SPI devices can be used; a little bit of select-decoding logic allows 16 devices to be used. For more details on the QSM, see "Time for the 68332," by Eric McRae (DDJ, January 1995).
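As a rough illustration of the "set a few configuration registers" step, the C fragment below computes the SCI baud-rate divider: The QSM derives the baud rate as the system clock divided by 32 times the divider. The register names in the trailing comment follow the QSM documentation as I remember it, but treat all addresses and bit names as assumptions to check against the manual:

```c
#include <stdint.h>

/* Compute the SCI baud-rate divider for the 68332's QSM.  The SCI
 * baud rate is system clock / (32 * divider), so we round to the
 * nearest integer divider. */
uint16_t sci_divider(uint32_t sysclk_hz, uint32_t baud)
{
    return (uint16_t)((sysclk_hz + 16 * baud) / (32 * baud));
}

/* Typical setup sketch (not executed here; register names and enable
 * bits are assumptions to verify against the QSM manual):
 *   *SCCR0 = sci_divider(16777216, 9600);  // baud-rate register
 *   *SCCR1 = TE | RE;                      // enable transmit/receive
 * after which serial I/O runs in the background, as described above. */
```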
The TPU is an extremely powerful I/O subsystem. It provides up to 16 I/O lines running up to 10 different I/O functions. Many of these functions involve timing in one form or another. The TPU is a bit intimidating to learn to use. It takes writing half-a-dozen bytes just to set up to do a simple toggle of an I/O bit. But you can also write half-a-dozen bytes to set up period measurement with additional transition detection, or a stepper-motor controller with programmable acceleration and deceleration. Using the interpreter and incremental compilation features of Forth on the New Micros single-board 68332 system makes learning to use the TPU much easier. The TPU is described in more detail in "Inside Motorola's TPU," by Richard Soja (DDJ, December 1996).
Since the control software for a robot operates on many different levels, it needs to be well factored, or it quickly becomes unmanageable. For small systems, a finite-state machine is a natural way to partition the problem. This is a conceptualization of the system as always being in one of several states. The system makes well-defined moves from one state to another in response to inputs or events. For example, consider the state machine in Figure 1 that describes how the robot will respond to bumping into something. In this machine the states are:
- BUMP_SCAN when there is no contact.
- LEFT_BUMP when the left whisker makes contact (by design, the robot treats the situation where both whiskers make contact as a left contact).
- RIGHT_BUMP when the right whisker makes contact.
There is an additional state, END_BUMP, which the robot enters when a left or right contact ends. The END_BUMP state exits back to BUMP_SCAN after a timeout period has expired. This gives the robot the time to turn away from the obstruction. In the strictest computer-science definition of a finite-state machine, timer events (like timeouts) are not allowed; augmented finite-state machines that do recognize timer events, however, are common in process-control systems.
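Rendered as a plain C transition function (an illustrative sketch, not the robot's Forth code), the machine in Figure 1 looks something like this:

```c
/* The bump-response machine of Figure 1 as a C switch.  Inputs are
 * the two whisker contacts plus a timeout flag; turn-away actions
 * are omitted, and all names are illustrative. */
typedef enum { BUMP_SCAN, LEFT_BUMP, RIGHT_BUMP, END_BUMP } bump_state;

bump_state bump_step(bump_state s, int left, int right, int timed_out)
{
    switch (s) {
    case BUMP_SCAN:
        if (left)  return LEFT_BUMP;   /* both whiskers counts as left */
        if (right) return RIGHT_BUMP;
        return BUMP_SCAN;
    case LEFT_BUMP:
        return left ? LEFT_BUMP : END_BUMP;
    case RIGHT_BUMP:
        return right ? RIGHT_BUMP : END_BUMP;
    case END_BUMP:
        /* hold here until the turn-away timeout expires */
        return timed_out ? BUMP_SCAN : END_BUMP;
    }
    return BUMP_SCAN;
}
```

The `timed_out` input is exactly the augmentation mentioned above: a strict finite-state machine would have no such timer event.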
The IsoMax System
The New Micros IsoMax system is a finite-state-machine engine that sits on top of a Forth compiler. Typically you write the low-level control code in either assembler or Forth, and the high-level system as a set of interacting state machines. IsoMax internally provides the glue to drive the system through the states and to run all the state machines. You can then concentrate on describing the application specifics and not worry about building the state-machine engine itself.
Using IsoMax is straightforward. You basically write a specification of all the state machines. There are four fundamental sections of an IsoMax state-machine specification:
- The list of states that apply to a given state machine. Example 1(a) defines the machine PROXIMITY to consist of five different states.
- For each state, give the conditions that cause a transition to another state. Example 1(b) sets up the response of the PROXIMITY machine when it is in the state IR_SCAN. If the word BOTH_IR? returns a True condition, then the sequence of Forth words between CAUSES and THEN-STATE run (which causes the robot to back up). After this, the state switches to BOTH_IR. The machine can have as many of these clauses as necessary. Only the conditions that must be acted upon need to be listed in the IN-STATE...TO-HAPPEN clauses; all other conditions leave the machine in the current state.
- Next, all the defined state machines are linked into a single list, making up the composite state machine that is to be run (Example 1(c)). In IsoMax, no priority is implied by the order of the machines in this list. Priorities or subsumption overrides must be built into the supporting words. In my robot, the state transitions cause a word that I wrote, ARBITRATE, to be called either directly or indirectly. This simple word chooses one of several possible actions (set by each state machine) on a priority basis.
- Finally, before actually running the machine chain, the initial state of each of the state machines must be defined; see Example 1(d).
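Conceptually, the machine chain behaves like the C sketch below, in which each state machine is a function stepped once per pass. This is only an illustration of the idea; it is not IsoMax's actual internals:

```c
/* A machine chain in the spirit of Example 1(c) and 1(d): each state
 * machine is represented by a function that advances it one step, and
 * the chain runs every machine once per pass.  Names are illustrative. */
#define MAX_MACHINES 8

typedef void (*machine_fn)(void);

static machine_fn chain[MAX_MACHINES];
static int n_machines = 0;

int add_machine(machine_fn m)    /* returns -1 if the chain is full */
{
    if (n_machines >= MAX_MACHINES) return -1;
    chain[n_machines++] = m;
    return 0;
}

void run_chain(void)             /* one step of every machine */
{
    for (int i = 0; i < n_machines; i++)
        chain[i]();
}

/* A trivial demonstration machine that just counts its steps. */
static int demo_steps = 0;
static void demo_machine(void) { demo_steps++; }
```

Running `run_chain()` repeatedly in a loop gives every machine a chance to test its transition conditions on each pass, which is how the composite machine makes progress.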
Implementing a complete robot-control system as a single finite-state machine rapidly gets unwieldy as new systems are added. This is especially true for research robots that may have radically different sets of sensors and mechanical systems each time they are powered up. A more manageable approach is to create multiple finite-state machines. You implement each subsystem as a finite-state machine, then set up an arrangement that clearly defines how these separate state machines will interact.
One approach to defining how the state machines should interact is the subsumption architecture. The subsumption architecture was proposed for robotic control applications by R.A. Brooks in "A Robust Layered Control System for a Mobile Robot" (IEEE Journal of Robotics and Automation, Vol. RA-2, No. 2, 1986). It has become a popular scheme for robots because it is flexible, easy to manage, scales well for complicated robots, and degrades gracefully in the case of sensor failures. It can produce surprisingly complicated behaviors from small programs, particularly when the robot is trying to simultaneously achieve multiple goals.
The idea is that each state machine uses the relevant inputs (from sensors and internal states) and continuously generates an output of control messages. Then, downstream of the control messages and before the actual hardware, is an arbitration mechanism. The arbitrator can be designed in several ways, but it can be thought of as a series of enable and inhibit nodes (similar to the way that actual neurons are interconnected; also like real neurons, it is the inhibit connections that play the major role). These nodes are interconnected in a way that ultimately results in only one of the multiple incoming control messages actually getting out to the hardware layer.
Figure 2 illustrates the state machines for this robot:
SENSORS runs all the robot sensors and sets all relevant global variables. This machine also initializes all the subsystems when the robot's "go" switch is turned on, and stops the robot when the switch is turned off. Unlike all the other state machines, SENSORS is not really part of the subsumption scheme. Instead, this machine provides a central background task that ensures all the sensors are scanned. The IR and contact sensors are scanned essentially continuously. Once every six seconds, the sonar is used to look around for obstacles.
WALKER sequences through all six robot legs in the currently defined sequence. The current sequence is determined by six execution vectors, each one defining which leg to move and the type of cycle to execute for it. The default sequence is to move straight forward, but it can be overridden. This machine also has an "idle" state that is entered at the end of the sixth-leg sequence, before going back to the first leg. Usually the machine will just transition from the sixth sequence to idle to the first sequence. But a general robot-shutdown message will cause this machine to hold in the idle state until a "go" message arrives, at which point it will cycle through all the legs again (with a walking robot, it is a good idea to have a well-defined way to stop walking, or it might fall over).
EXPLORE makes the robot wander around in open space until the sonar determines that it is getting close to an obstruction. It then looks for the largest possible open region to move into: The sonar is swept across the head's field of view to find the direction with the greatest range. The direction that EXPLORE decides to move in can determine which six-leg sequence WALKER will use.
PROXIMITY decides upon a course of action based on the IR system's detection of an obstacle in the vicinity. If an object is detected nearby, the robot turns to walk away from it. This machine can override the sequence that WALKER would get from EXPLORE.
CONTACT decides upon a course of action based on the contact whiskers' detection of an object in contact. The behavior here is similar to PROXIMITY, except that since the robot is in contact with the object, it backs up several steps before turning away from the object. This machine can override both EXPLORE and PROXIMITY.
During development and testing, you cannot just release a half-dozen asynchronous, interacting state machines and expect to see a working robot. When designing a system that contains multiple state machines, it is important that the individual machines interact with each other in a restricted and well-defined manner. This makes it easier to develop and test each subsystem, since you can work with a single state machine at a time. It also keeps the coding simpler, since the complications of semaphores or other access synchronization schemes are avoided. In this robot, for example, only the state machine WALKER actually moves the legs. The other machines communicate with WALKER by providing the desired sequences of leg movements. This communication is handled in the arbitration code. Once the individual state machines are properly running, the overall control system can be tested by adding one state machine at a time.
For all its importance, the arbitrator for managing the state machines is a small routine. It chooses one of several possible execution tokens (function pointers) and runs it. Example 2 is the entire arbitrator. This Forth word looks in turn at the execution vectors that may be set by the machines CONTACT, PROXIMITY, EXPLORE, and WALKER. The order in which the vectors are tested sets the priority order. The first non-null vector it encounters is executed, then the arbitrator exits. When run, each of these routines sets the six execution vectors for the gait sequence used by WALKER, causing the robot to start walking with the leg sequence determined by the state machine with the highest priority. Under ordinary circumstances, WALK_VEC contains a pointer to the word that defines the gait for walking forward. This means that the default behavior of the robot is to walk forward, unless it is overridden by a higher-priority state machine.
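The priority scheme can be illustrated with a small C sketch. The vector and action names below are stand-ins of my own for the article's Forth words, not the actual Example 2:

```c
/* The arbitration described above, rendered in C: four execution
 * vectors checked in a fixed priority order; the first non-null one
 * runs and the rest are ignored.  All names are illustrative. */
typedef void (*action_fn)(void);

action_fn contact_vec, proximity_vec, explore_vec, walk_vec;

static int last_action = 0;  /* 0 = none, 1 = walk, 2 = avoid contact */
static void walk_forward(void) { last_action = 1; }  /* default gait     */
static void back_away(void)    { last_action = 2; }  /* contact response */

void arbitrate(void)
{
    if      (contact_vec)   contact_vec();    /* highest priority      */
    else if (proximity_vec) proximity_vec();
    else if (explore_vec)   explore_vec();
    else if (walk_vec)      walk_vec();       /* default: walk forward */
}
```

Because the vectors are tested in a fixed order, a state machine asserts its priority simply by setting its vector non-null; it never has to know what the lower-priority machines wanted to do.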
One generalization about robots that holds true is that they are always works in progress. For quite some time I was preoccupied with just getting this one to walk at all. Now I am focusing on the real-time process-control-level code that defines the robot's basic behavior (this is what I have described here).
In the future, there are higher-level problems to solve that fall into the domain of artificial intelligence. For example: Given a map of a room, how does a robot placed in an arbitrary location in that room determine where it is? Even more challenging is the question of how it locates itself without a map. Robots certainly are not boring to work with: There is always a new challenge.
Copyright © 1997, Dr. Dobb's Journal