Robot Control and Colbert

Robot control programs take a robot's sensory input, process it, and decide which motor actions the robot will perform. Our authors use the Colbert sequencer language to develop control code.


November 01, 2000
URL: http://www.drdobbs.com/embedded-systems/robot-control-and-colbert/184404316

Kurt works at SRI's Artificial Intelligence Center and can be contacted at konolige@ai.sri.com. Jeanne and William work for ActivMedia Robotics. They can be contacted through http://www.activmedia.com/.


As in the animal kingdom, a robot's ability to avoid collisions and keep track of its place in space depends on whether the machine can sense the world and act within it. The simplest sensor an entity (human or machine) can have is "proprioception" -- feedback about its own movements. Without proprioception, an entity has no way to know how its position has changed. For instance, humans who have lost the ability to feel muscle extension or compression have to watch their arms to determine where they are in space. For machines such as robots, however, this feedback has to be built in.

For instance, the AmigoBOT (shown in Figure 1) from ActivMedia (http://www.activmedia.com/) gains motion feedback through two 500-tick motor encoders, one on each wheel. Motor encoders measure wheel rotation; the more ticks, the more precise the measurement. A 500-tick encoder can measure wheel movement as fine as 0.01 millimeter on the AmigoBOT.
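
The conversion from encoder ticks to distance is a single proportionality. Here is a minimal sketch (ours, not ActivMedia's firmware), with the tick count and wheel diameter treated as parameters you would look up for your own robot:

 static const double PI = 3.14159265358979;

 /* Minimal sketch: distance traveled per wheel is the tick count times
    the wheel circumference divided by the ticks per revolution. The
    parameters are illustrative, not AmigoBOT specifications. */
 double ticks_to_mm(long ticks, double ticks_per_rev, double wheel_diam_mm)
 {
     return (double)ticks * (PI * wheel_diam_mm) / ticks_per_rev;
 }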

One basic use for motion feedback is to make robot movements smooth and controlled. The AmigoBOT can be commanded to move at a certain velocity, and the onboard microcontroller tries to maintain that exact velocity, whether the robot is traveling uphill, going over a bump, or carrying a heavy load. A proportional-derivative (PD) control algorithm, running at 100 Hz, keeps the robot's motion smooth and efficient.
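
To make the idea concrete, here is a minimal sketch of a PD velocity loop of the kind described, run once per 10-ms cycle; the struct and gains are our own illustration, not the AmigoBOT's actual controller:

 typedef struct {
     double kp, kd;       /* proportional and derivative gains (assumed) */
     double prev_error;   /* velocity error from the previous cycle */
 } PD;

 /* One control step: dt is 0.01 s at 100 Hz; returns a motor command. */
 double pd_step(PD *pd, double target_vel, double measured_vel, double dt)
 {
     double error = target_vel - measured_vel;
     double deriv = (error - pd->prev_error) / dt;
     pd->prev_error = error;
     return pd->kp * error + pd->kd * deriv;
 }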

By integrating encoder information from the left and right drive wheels, the AmigoBOT can keep track of where it has traveled from an initial position, a technique traditionally called "dead reckoning." Over time, wheel slippage and other errors cause the dead-reckoned position to differ significantly from the robot's actual position. Think about closing your eyes while walking down a hallway: For the first few seconds you're pretty sure about where you are, but you shortly begin worrying about bumping into the walls. Robots can be much more accurate than people, but they still have to use external sensors to correct the dead-reckoned position after moving around a bit.
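
For a two-wheeled robot, the integration itself is short. Here is a minimal sketch, assuming an idealized differential drive (the wheel-base constant is an assumed figure, not an AmigoBOT specification):

 #include <math.h>

 #define WHEEL_BASE_MM 280.0   /* assumed distance between the two wheels */

 typedef struct { double x, y, theta; } Pose;

 /* Fold one cycle of left/right wheel travel (from the encoders) into
    the running position-and-heading estimate. */
 void dead_reckon(Pose *p, double left_mm, double right_mm)
 {
     double dist   = (left_mm + right_mm) / 2.0;           /* center travel */
     double dtheta = (right_mm - left_mm) / WHEEL_BASE_MM; /* heading change */
     p->theta += dtheta;
     p->x += dist * cos(p->theta);
     p->y += dist * sin(p->theta);
 }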

Some small robots use bump or pressure sensors -- often a ring around the robot that can be triggered from any direction. These simple sensors let robot programs respond intelligently to an amoeba-like view of the world. But not everyone wants a robot running into their feet, even one programmed to apologize profusely and back away.

To avoid collisions, many small robots include inexpensive infrared (IR) range-finding sensors that send out a beam of IR energy and detect the return. There are two types of IR sensors: 1. energy sensors, which measure the strength of the reflected IR energy; and 2. triangulation sensors, which compute range from the angle at which the reflection returns.

Energy sensors are useful for simple detection of close (<30 cm) objects, while triangulation sensors can give more precise estimates of object range out to 80 cm. But 80 cm is a limited range when you're trying to map a 30-meter room (try moving around while looking only at a small circle around your feet). Also, the IR beam is very narrow, so it misses a lot of small objects -- chair legs, for instance.

For this reason, the AmigoBOT is surrounded by eight sonar (ultrasonic) transducers rather than IRs. Sonar transducers emit a short ultrasonic pulse and calculate range to the nearest object based on the time of flight of the returned echo. Sonar sensors are attractive because they are small, low power, and accurate. Under good conditions, a sonar sensor like the Polaroid 6100 can detect objects out to 15 meters, with a relative accuracy of an inch. And unlike IR sensors, which have a very narrow beam, a typical sonar device sends out energy in a cone from 20 to 30 degrees in width. Thus, by placing a small number of such sensors together in a fan, you can cover a large detection angle. Since the robot mostly moves forward, we place six of the sonars in the forward field where they can do the most good. Chair legs, walls, poles, children's toys -- the sonars can see them all. We reserve the other two sonars for the rear of the robot, the "eyes in the back of the head," so it can be more intelligent when it backs up.
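
The time-of-flight calculation itself is a one-liner, as this sketch shows; real transducers also need a temperature correction for the speed of sound and a blanking interval to ignore their own transmit pulse:

 /* The pulse travels out and back, so range is half the round trip.
    343 m/s is the speed of sound in room-temperature air. */
 double sonar_range_mm(double echo_time_s)
 {
     const double speed_of_sound_mm_per_s = 343000.0;
     return speed_of_sound_mm_per_s * echo_time_s / 2.0;
 }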

While the dispersion cone of the sonars is useful for getting large coverage, it isn't so good when you're trying to use the sonars to stay localized against a map. That's because it's impossible to tell exactly where the echo comes from -- it could be anywhere within the cone. For instance, if the robot is far enough away from a narrow doorway, it can't see through it, because any sonar cone will hit one side or the other. Localization programs compensate for this uncertainty in sonar sensing by using probabilistic algorithms, which figure out the most likely position for the robot by comparing its sonar readings against a known map of the environment.
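
As a sketch of the core idea (not Saphira's actual localizer), such an algorithm can weight each candidate pose by how well a sonar reading agrees with the range the map predicts from that pose; map_expected_range() and SIGMA_MM here are hypothetical stand-ins:

 #include <math.h>

 #define SIGMA_MM 100.0   /* assumed sonar noise (one standard deviation) */

 /* Hypothetical map ray-casting routine: the range a sonar should read
    if the robot really were at (x, y, theta) in the known map. */
 double map_expected_range(double x, double y, double theta);

 /* Gaussian likelihood of a measured range given a candidate pose;
    higher weights mark more plausible positions. */
 double pose_weight(double x, double y, double theta, double measured_mm)
 {
     double err = measured_mm - map_expected_range(x, y, theta);
     return exp(-(err * err) / (2.0 * SIGMA_MM * SIGMA_MM));
 }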

Mapping Sense into Motion

How do programmers design robot programs? In general, robot control programs take the robot's sensory input, process it, and decide what motor actions the robot will perform. But the mapping between inputs and outputs is complex, and the control task requires decomposition into simpler elements to make it workable. In recent years, there has been some convergence on an architecture for autonomous mobile robots; see Figure 2.

The bottom layer is a controller that implements some form of motion control for the robot. This layer can be quite complex; for example, in AmigoBOT, the architecture of the Saphira robotics development environment (developed at SRI's Artificial Intelligence Center; see http://www.ai.sri.com/~konolige/saphira/) consists of a fuzzy controller that implements a set of behaviors for achieving goals such as corridor following, obstacle avoidance, and the like. The second layer is a sequencer that initiates and monitors behaviors, taking care of the temporal aspects of coordinating behaviors, such as deciding when a behavior has completed a job, or is no longer contributing to an overall goal, or when environmental conditions have changed enough to warrant different behaviors. The sequencer must complete its computations in a timely manner, although not as quickly as the control layer. In the top layer, long-term deliberative planning takes place, with the results being passed down to the sequencing layer for execution. Generally, the planner is invoked and guided by conditions in the sequencing layer; for example, a task failing or completing.

The sequencer plays the role of the main executive, taking advice from the planner and invoking behaviors to accomplish goals. When you write robot programs, sequencer programs are the result. In fact, you can think of the planner as an automatic generator of robot programs, which are then executed by the sequencer. In AmigoBOT, the sequencer language is Colbert, an interpretive development environment within the Saphira architecture. The examples we present here can be tried out using AmigoColbert on the AmigoBOT itself, or in the Colbert window at the bottom of the Saphira simulator display. The Saphira simulator can be freely downloaded at http://mobilerobots.com/amigo/.

Things Go Better with Finite State Automata

Colbert draws on two sources for its concepts. The first is finite-state automata (FSA). FSAs are ubiquitous in computers and robotics, because they provide a way of defining a mapping between the internal state of an automaton and its operation in the world. When you drop coins into a soda machine, its internal state changes, until it gets to a state in which you've paid enough; then it drops a soda. FSAs are a great way to encode procedural knowledge -- knowledge of how to achieve some goal. This is especially true when the procedure includes conditional actions that must test the state of the environment to make a decision about which action to perform next. In Colbert, a program is an activity with semantics based on FSAs.
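
A toy version of the soda machine in plain C makes the mapping concrete; the 75-cent price and the dispense_soda() stub are our own illustration:

 enum { PRICE_CENTS = 75 };

 void dispense_soda(void);   /* hypothetical output action */

 /* One FSA transition: the state is the number of cents deposited so
    far. Inserting a coin moves to a new state; reaching the "paid
    enough" state produces the output and resets the machine. */
 int coin_inserted(int cents_so_far, int coin_cents)
 {
     int total = cents_so_far + coin_cents;
     if (total >= PRICE_CENTS) {
         dispense_soda();
         return 0;            /* back to the start state */
     }
     return total;            /* intermediate state: wait for more coins */
 }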

Walk and Chew Gum At the Same Time?

Complex robot control problems are often best decomposed into sets of concurrent processes that communicate and coordinate activity. In Colbert, a set of activities executes concurrently to achieve a goal. Activities have a hierarchical structure (one activity can spawn another, becoming its parent). Activities communicate through a global data store, and by sending each other signals.

Colbert is a subset of ANSI C, with a few extensions for robot control. Even though the semantics are based on FSAs, there is relatively little that C programmers must learn to begin writing AmigoBOT control programs. The Colbert evaluator executes source-language statements directly, so programs can be modified during execution. Errors are signaled, and users can examine the state of the system, make changes to programs, and continue with the changed program. The evaluator also lets users probe the state of the system during execution to determine where errors occur, and it can load compiled C code for efficient execution of compute-intensive routines as part of an activity.

Activities control the overall behavior of the AmigoBOT in several ways: They sequence the robot's basic actions, they monitor and coordinate other activities, and they check and set internal variables. The examples that follow illustrate each of these capabilities.

Patrol Example

Assume you want the robot to patrol up and down between two goal points, repeating this activity a specified number of times. The basic actions the robot can perform are: 1. turning to a heading; and 2. moving forward a given distance. For this example, we won't worry about the problem of localization.

The simplest way to realize the patrol activity is as a while loop in which the primitive turn and forward motion actions are executed in sequence; see Example 1.

To try it on the AmigoBOT, enter the program from Example 1 into the AmigoCOLBERT window of AmigoEYES.

This example illustrates three basic capabilities of the Colbert control language. First, the two basic actions of turning and moving forward are sequenced within the body of the while loop. As each action is initiated, an internal monitor takes over, halting the further execution of the patrol activity until the action is completed. So, under the guidance of this activity, the robot turns to face the 180-degree direction, then moves forward 1000 mm, then turns to the 0-degree direction, then moves forward another 1000 mm. The net effect is to move the robot back and forth between two points 1 meter apart.

The enclosing while loop controls how many times the patrol motion is performed. The local variable a is a parameter to the activity; when the activity is invoked, for example with the call start patrol(4), this value is filled in with an integer. On every iteration, the while condition checks whether a has been set to 0; if not, the variable is decremented and the loop continues. (To make this an almost infinite loop, just invoke patrol with a negative argument.) Using the variable a to keep track of the number of times the movement is performed illustrates the capability of checking and setting internal variables, which can be handy even for simple activities.

Surveillance Example

While sequencing basic actions is the typical evaluation mode, the language also supports concurrent execution, in which several activities working in parallel coordinate the robot's actions. Suppose you want to program the robot to patrol until it sees some object; then it should stop patrolling and approach the object. To accomplish this task, we'll set up two activities: the patrolling activity of the previous example, and a supervisory activity approach that checks if there is something in front of the robot, and if so, approaches it.

This activity starts off by invoking patrol (see Example 2) with a negative argument so it continues indefinitely. However, instead of causing approach to wait for its completion, the patrol activity is invoked with two special parameters. The first, timeout 300, causes patrol to quit after 30 seconds (300 cycles) have elapsed. The second, noblock, lets the execution of approach continue in parallel with patrol. The approach activity now goes into a monitoring loop, in which it checks for objects in front, for a motor stall, and for the state of the patrol. If it determines that patrol has timed out, or if a motor stalls (indicating the robot ran into something immovable), then approach exits in a failure state. The activity executive keeps track of the dependencies among activities; in this case, since approach called patrol, exiting approach automatically exits patrol. Thus, if the motor stalls, all activity started by approach is suspended.

If, on the other hand, approach determines that there is an object less than 2 meters in front (by calling the perceptual routine sfObjInFront, which returns the distance to the nearest object), then it suspends the patrol and moves to within 20 centimeters of the object. The patrol activity must be suspended; otherwise, the Move action conflicts with the actions being issued by patrol. After the robot moves near the object, the approach activity exits with the success state.

In this example, two activities execute concurrently, and coordination is achieved by signals sent between them. Activities can examine each other's states and take appropriate action. As the monitoring activity, approach has the responsibility of checking the state of patrol to see if it has timed out, and also checking for other conditions that would cause the suspension of patrol and the initiation of new behavior. Finally, if approach is itself part of a larger activity, then by exiting with success or failure, it can signal other activities of its result.

A More Elaborate Patrol Example

The use of a C-like language, together with concurrent finite-state semantics, makes it easy to write complex control routines in a few simple lines.

Example 3 is the FSA for the patrol activity. The first thing to note about the FSA is that its states don't correspond exactly to the statements in the activity. For example, the while statement has been translated into a set of nodes (start, done, c) that split the condition of the loop. In general, conditional and looping statements in Colbert translate to a set of nodes with conditional labels in the FSA.

Actions at the nodes include primitive robot actions and internal state changes. In pure FSAs, all state information is encoded in the states themselves. For Colbert, the nodes represent only the state of the activity; other robot state information is handled separately (and more efficiently) as part of the Saphira perceptual space. In the activity schema, no wait conditions for primitive actions were given explicitly. In the FSA, these conditions are given as the conditions for transition to the next state. When an action command is issued, the FSA waits in the issuing state until the action is finished. This default translation can be changed by the addition of the noblock and timeout parameters in Colbert.

The output function associated with a node is performed only once, when control first arrives at the node. Self-transitions back to the node (not explicitly illustrated in the example) do not cause the output function to be called again.

The strength of the Colbert language lies in the ability to make an intuitive translation from control constructs in C to FSAs capable of controlling the robot. FSAs can be tedious to program directly, because the straightforward sequences and loops typical of most programs translate into lengthy sets of nodes and arcs with a linear or looping structure. Imagine writing C programs in which, after every statement, you had to say explicitly which statement executes next. In addition, common FSA constructs, such as waiting for actions to finish, can be assumed implicitly as part of the semantics of Colbert, rather than written explicitly in the construction of the FSA.
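
To see why direct FSA programming is tedious, here is a hedged sketch of the patrol loop with its FSA written out by hand as a C switch over explicit states; the Turnto/Move declarations and action_done() are hypothetical stand-ins for the real primitives:

 void Turnto(int heading);   /* hypothetical motion-primitive stubs */
 void Move(int mm);
 int  action_done(void);     /* nonzero once the current action finishes */

 enum state { START, TURN1, MOVE1, TURN2, MOVE2, DONE };

 /* One sequencer cycle: every state must name its successor explicitly,
    and every blocking action needs its own wait state. */
 enum state patrol_step(enum state s, int *a)
 {
     switch (s) {
     case START: if (*a == 0) return DONE;
                 (*a)--; Turnto(180); return TURN1;
     case TURN1: if (!action_done()) return TURN1;
                 Move(1000);          return MOVE1;
     case MOVE1: if (!action_done()) return MOVE1;
                 Turnto(0);           return TURN2;
     case TURN2: if (!action_done()) return TURN2;
                 Move(1000);          return MOVE2;
     case MOVE2: if (!action_done()) return MOVE2;
                 return START;
     default:    return DONE;
     }
 }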

Colbert Statements

Table 1 lists the four categories of Colbert statements. We've already examined examples from each of the categories in the patrol and approach activities. The sequencing and internal state actions comprise the Standard C portion of the language. C assignments and function calls have their normal interpretation, changing the state of internal C variables. Sequencing actions, which include typical C iteration operators, are translated into a set of FSA states with appropriate branches, as in the while statement of the patrol activity.

Control actions translate to a single FSA node for executing the action and a transition based on the completion of the activity or action. If noblock is asserted, then the transition is taken immediately; if a timeout is asserted, then there is an additional transition based on the timeout value. Control actions can also change the state of other activities, by sending them signals. Similarly, an activity can accept signals from other activities, changing the state of the underlying FSA.

Activity state tests aren't statements per se, but they are expressions that can be used where C expressions are normally allowed. They allow conditionals to check for the state of another activity.

Finally, activities can modify the state of their execution using various sequencing operators: goto, iteration, conditional, and suspension operators.

Conclusion

Robot control programs take a robot's sensory input, process it, and decide which motor actions the robot will perform. Finite-state automata provide a means to define a mapping between the internal state of an automaton and its operation in the world. Using Colbert on the ActivMedia AmigoBOT or in the Saphira simulator, you can get a feel for what it's like to create programs that meet the exigencies of the rough-and-tumble real world.

References

Arkin, R.C. "Integrating Behavioral, Perceptual and World Knowledge in Reactive Navigation." Robotics and Autonomous Systems, June 1990.

Connell, J. "SSS: A Hybrid Architecture Applied to Robot Navigation." Proceedings of the IEEE Conference on Robotics and Automation, 1992.

Gat, E. "Integrating Planning And Reacting In A Heterogeneous Asynchronous Architecture for Controlling Real-World Mobile Robots." Proceedings of the AAAI Conference, 1992.

Hopcroft, J.E., and J.D. Ullman. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley, 1979.

Kaelbling, L., and S. Rosenschein. "Action and Planning In Embedded Agents." Robotics and Autonomous Systems, June 1990.

Konolige, K. "COLBERT: A Language for Reactive Control in Saphira." German Conference on AI, September 1997.

Konolige, K., K. Myers, A. Saffiotti, and E. Ruspini. "The Saphira Architecture: A Design For Autonomy." Journal of Experimental and Theoretical Artificial Intelligence, September 1997.

Myers, K.L. "A Procedural Knowledge Approach To Task-level Control." Proceedings of the Third International Conference on AI Planning Systems, AAAI Press, 1996.

Payton, D.W., J.K. Rosenblatt, and D.M. Keirsey. "Plan Guided Reaction." IEEE Transactions On Systems, Man, and Cybernetics, June 1990.

Wilkins, D.E. and K.L. Myers. "A Common Knowledge Representation For Plan Generation And Reactive Execution." Journal of Logic and Computation, 1995.

DDJ

 act patrol(int a)
 {
   while (a != 0)      /* repeat; a negative argument loops forever */
   {
     a = a - 1;
     Turnto(180);      /* turn to the 180-degree heading */
     Move(1000);       /* move forward 1000 mm */
     Turnto(0);        /* turn back to the 0-degree heading */
     Move(1000);       /* move back to the starting point */
   }
 }

Example 1: A simple patrol activity.

 act approach()
 {
   int x;
   /* Patrol indefinitely, in parallel, with a 300-cycle timeout. */
   start patrol(-1) timeout 300 noblock;
 checking:
   if (timedout(patrol) || sfIsStalled()) fail;
   x = sfObjInFront();            /* range (mm) to nearest object ahead */
   if (x > 2000) goto checking;   /* nothing within 2 m: keep monitoring */
   suspend patrol;                /* stop patrol before issuing Move */
   Move(x - 200);                 /* close to within 200 mm of the object */
   succeed;
 }

Example 2: An activity that monitors another.

Example 3: FSA for patrol.

Figure 1: The ActivMedia AmigoBOT.

Figure 2: A hybrid control architecture.

Table 1: Colbert statements: (a) control actions; (b) activity state tests; (c) internal state actions; (d) sequencing actions.
