A Task Dispatcher for Embedded Systems

Task dispatchers are small executive programs that control when switches and other inputs are examined, in what order, and what is done in response. Ron presents a dispatcher, implemented in C, that's small enough to be reliably implemented in the assembly language of the destination processor.


August 01, 1999
URL: http://www.drdobbs.com/embedded-systems/a-task-dispatcher-for-embedded-systems/184411018

Most of the embedded controllers I've built over the years have been collections of actuators or indicators controlled by a variety of events. The actuators might be motors, relays, lights, or solenoids, while the events might be switch closures, a specific voltage level, or a certain frequency. In every case, there was a small executive program that was common to all and that controlled when the switch or other input was looked at, in what order, and what had to be done. This was the task dispatcher.

While the old microprocessors such as the Z-80 and 8080 required external memory and I/O chips, processors such as the AVR series from Atmel, the PIC series from Microchip, Hitachi's H8 series, and Motorola's 68HC16 series combine RAM, ROM, timers, and I/O -- all on a single chip. This makes for physically small but nonetheless very powerful systems. The small size usually extends to resources such as code and RAM space, and therefore demands very efficient code. Although written in C, the dispatcher I present here is small enough to be reliably implemented in the assembly language of the destination processor.

In the Beginning

Except for a few systems where real time is absolutely critical, most embedded-system programs have a similar structure. An interrupt-driven clock ticks away every few milliseconds and one or more tasks run at every clock tick. These tasks examine internal flags or I/O ports and make a decision based on their state. A change of input state can cause a consequent change of state on some output port, starting an external event such as a motor running or a light turning on. In most cases, the task takes only a small fraction of the time between clock ticks. Once the output action has occurred, the input state the task looks for changes: in the light example, the control task will now be looking for the event that turns the light off.

Therefore, if you look at each task, there must be code to decide what mode it is in, what state or states are involved, and what output action is appropriate. Without some form of task dispatcher, this type of function usually consists of lots of if-then-else clauses. In effect, the task must determine everything about itself every time it runs. This raises the first point about this type of code structure:

"Too many complex if-then-else sequences as each task decides what mode it is in, what it should be looking for, and what needs to be done."

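To make the point concrete, code in that style tends to look like the following sketch (purely illustrative; the names ledMode, switchOn, and EveryTick are hypothetical and not part of the dispatcher). A single routine, called on every clock tick, has to re-derive the mode of every device before it can do anything useful:

#define LED_IS_OFF 0
#define LED_IS_ON  1
int ledMode = LED_IS_OFF;           /* current mode of the LED "task" */
extern int switchOn;                /* set elsewhere from the switch input */

EveryTick() {                       /* called at every clock tick */
   if (ledMode == LED_IS_OFF) {
      if (switchOn) {
         /* ...turn the LED on... */
         ledMode = LED_IS_ON;
         }
      }
   else {
      if (!switchOn) {
         /* ...turn the LED off... */
         ledMode = LED_IS_OFF;
         }
      }
   /* ...and the same again for the motor, the alarm light, and so on */
}
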
In many cases, there is also a requirement for external events to be timed in some way. For example, suppose a motor that had earlier been commanded to run counterclockwise receives a new command to run clockwise. The sequence of events here is different from a simple stop request. The motor must be stopped, allowed to come to rest, then restarted in the new direction. In the absence of tachometer output, this would be implemented by turning the motor off and starting a precomputed delay. When the delay timed out, the motor would restart in the opposite direction.

Another example might be a flashing alarm light. When the alarm event happens, the light turns on and a delay starts. When this delay times out, the light turns off and a second delay starts; when that one times out, the light turns back on and the first delay restarts, so the light flashes regularly until the alarm event goes away. Delays are typically implemented by counters in registers or RAM that are decremented at each clock tick. This raises the second point about this type of code:

"Too many timers may be required to run, requiring considerable code and processor time to set up and manage each separate timer."

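The conventional approach is a bank of down-counters, one per delay, with every active counter decremented on every tick. The following sketch is illustrative only (NTIMERS, timer, and TimerExpired are hypothetical names, not part of the dispatcher):

#define NTIMERS 8
int timer[NTIMERS];                 /* one down-counter per delay; 0 = idle */

TickInterrupt() {                   /* runs at every clock tick */
   int i;
   for (i = 0; i < NTIMERS; i++)    /* cost grows with the number of timers */
      if (timer[i])
         if (--timer[i] == 0)
            TimerExpired(i);        /* hypothetical: tell the owning task */
}
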
To simplify how I managed tasks and timers, I needed an executive that would let one task schedule another when the state of the device it was controlling had changed. Thus, individual tasks would be considerably simplified in that they need only manage the event or events that take that particular device to the next state. It would also manage multiple software timers with as little overhead as possible. And finally, it would be as small as possible, as code space is always limited.

What's a Dispatcher?

Many years ago, I came across an Intel application note that described a simple operating system (see "Multitasking for the 8086," AP-61, July 1979). The app note discussed a dispatcher that utilized an innovative way of managing multiple timers and a simple queuing technique for managing tasks. However, because it used an awkward system of numbered tasks, I rewrote the dispatcher to use task pointers. Over the years, I have converted it to various assembly languages and for the most part it has been my executive of choice for embedded systems. Recently, I rewrote it in C as part of testing my Small-C for AVR. I'll use that implementation in this article to describe the dispatcher's functionality.

The dispatcher consists of a ready task queue dispatch manager and a delayed task queue manager. Both queues share the same data structure (refer to dispatcher.c, available electronically; see source files at the start of the article). The system is initialized by a single call to the InitMulti function. You add tasks to the tail of the ready queue by passing the name of the task entry point function to the QueTask function. Tasks are taken from the head of this queue by calls to the Dispatch function. Listing One is a typical main program.

Listing One

InitMulti();
QueTask(TaskA);
while (TRUE) {
   Dispatch();
   }

The QueTask call puts TaskA on the ready queue. When Dispatch is called, it finds TaskA on the ready queue and runs it. It does not return until TaskA is finished. Once TaskA finishes, it will move to the unlisted or idle state. However, a task can put itself back on the ready queue by either calling QueTask or ReRunMe just before it exits. Running tasks can (and often do) put other tasks on the ready queue as part of their job. From a resource allocation point of view, you are guaranteed that only one task will be running at any one time. In a complex controller, the single initial QueTask call in Listing One would be repeated for each peripheral being controlled.

Note that I am not multitasking using timeslicing. Each task is run in the order it is put on the ready queue, and each task runs until it voluntarily exits. Interrupts can occur, of course, but each task must return to allow other tasks to run.

In many embedded systems, this level of task control is all that is required. Generally, a task executes only a few dozen to a few hundred instructions, typically taking a few dozen microseconds. Many real-world events occur on time scales much longer than this, on the order of tens of milliseconds. This allows complete dynamic control with processor clock ticks of 1 to 25 milliseconds.

Running Tasks

Listing Two manages a single LED. Two tasks are required -- one that looks for the event that turns the LED on, and another that looks for the event that turns it off. The switchOn variable is set elsewhere to match the on/off switch state. Because the LED has only two states, only two functions are required. While the switch is off, TurnLedOn continuously reruns itself via the ReRunMe function. When the switch turns on, the output port is set to turn the LED on and the TurnLedOff task is scheduled. Because TurnLedOn is not rescheduled, it reverts to the idle state. At the next Dispatch call, TurnLedOff runs, also rerunning itself until the switch turns off.

Listing Two

TurnLedOn() {
   if (switchOn) {                  /* switch has gone on */
      outp(PORTB,0x01);             /* turn the LED on */
      QueTask(TurnLedOff);          /* now watch for the off event */
      }
   else
      ReRunMe(0);                   /* still off: look again on the next pass */
}
TurnLedOff() {
   if (!switchOn) {                 /* switch has gone off */
      outp(PORTB,0x00);             /* turn the LED off */
      QueTask(TurnLedOn);           /* back to watching for the on event */
      }
   else
      ReRunMe(0);                   /* still on: look again on the next pass */
}

While many real-world devices have more than two states, your code is now made up of functions that each manage a single state, and you need only worry about the events that move the device from that state to the next.

With, say, six devices to control and little processing required per task, an AVR-type microcontroller could be running each task every few hundred microseconds. If electrical current drawn is critical, an additional task could be included that calls the controller's sleep function. Thus the controller rips through the current set of tasks, then powers down -- perhaps for many milliseconds until the wakeup event occurs. The sleep task reschedules itself on the tail of the ready queue and the cycle repeats.
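
Such a task is only a couple of lines. The sketch below is not from the article's source files; sleep() stands in for whatever sleep instruction or intrinsic your compiler provides:

SleepTask() {
   sleep();             /* power down until the next interrupt or wakeup event */
   ReRunMe(0);          /* back on the tail of the ready queue for the next cycle */
}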

Adding a Timer Capability

In nearly every controller I've built, it was necessary to know about elapsed time -- a light needs to be on for 200 milliseconds, a motor needs to be off for two seconds before turning on again, an overload condition lasts for only 10 seconds before some action is required, and so on.

The QueDelay function provides this facility. It is called in the same way as QueTask, but with an additional delay-time parameter. It does some neat rearrangement of the other tasks in the delay queue, causing this task to remain idle and come off the delay queue after delay-time ticks of the system clock have occurred. Once taken off the delay queue, the task is automatically put on the ready queue.

Having a timer function means you can add a delay capability to the ReRunMe function. Therefore, if a function is only required to run every five clock ticks, it can conclude with ReRunMe(5);. This ReRunMe function examines the passed parameter; if zero, it simply calls QueTask. For a nonzero parameter, however, it instead calls QueDelay.
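
In outline, ReRunMe behaves like the minimal sketch below. (The real function is in dispatcher.c, and, as described later, it actually defers the requeuing until the task has exited; treat this as a description of the behavior rather than of the implementation.)

ReRunMe(int ticks) {
   if (ticks == 0)
      QueTask(task[RunningTask].tpntr);           /* straight back on the ready queue */
   else
      QueDelay(task[RunningTask].tpntr, ticks);   /* back after 'ticks' clock ticks */
}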

Using QueDelay, a task can make an event occur for a certain period. Listing Three, for instance, demonstrates how you control a motor that is required to run for two seconds after the occurrence of a flag, where the clock tick occurs every 25 milliseconds. When the flag occurs, the motor is started; 80 clock ticks later, StopMotor turns it off and reschedules CheckRunMotor.

Listing Three

CheckRunMotor() {
   if (flag) {
      StartMotor();
      QueDelay(StopMotor, 80);
      }
   else
      ReRunMe(0);
}
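
StopMotor itself is not shown in Listing Three. A sketch consistent with the description above would be the following, where MotorOff is a stand-in for whatever actually de-energizes the motor:

StopMotor() {
   MotorOff();               /* stand-in: de-energize the motor */
   QueTask(CheckRunMotor);   /* go back to watching for the flag */
}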

As another example, take the previous case, which simply turned an LED on and off in response to some switch. With the timer function (Listing Four), you can make the LED flash in a set pattern.

Listing Four

TurnLedOn() {
   if (switchOn) {
      Led = 1;
      LedOn();
      QueTask(TurnLedOff);
      }
   else
      ReRunMe(1);
}
TurnLedOff() {
   if (!switchOn) {
      Led = 0;
      QueTask(TurnLedOn);
      }
   else
      ReRunMe(1);
}
LedOn() {
   outp(PORTB, 0x01);
   QueDelay(LedOff, 8);
}

LedOff() {
   outp(PORTB, 0x00);
   if (Led)
      QueDelay(LedOn, 20);
}

Using the same clock tick time of 25 milliseconds, the LedOn and LedOff functions continue to cycle the LED on for 200 milliseconds and off for 500 milliseconds until the TurnLedOff function eventually detects the switch-off command. As soon as the Led variable is set to zero, both LedOn and LedOff automatically become idle the next time LedOff runs and finds Led clear. Note how calling ReRunMe with a parameter of 1 means that the LED on/off switch flag is only examined once per 25-millisecond clock tick.

The same technique can be used for tasks such as switch debouncing. In Listing Five, assume that the clock ticks every 10 milliseconds. The inp function reads the actual pin levels of port D (register PIND). The CheckSwitch function compares the port D inputs with the state variable at every clock tick. If a change has occurred (that is, the Exclusive-OR is nonzero), it schedules the CheckSwState function to have another look three clock ticks, or 30 milliseconds, later. If the change was transient, no state change is made; if the inputs are still different, the state variable is updated with the new value and a flag is set, indicating to another task that a switch has changed state. In either case, CheckSwitch is rescheduled and the cycle continues. The bits of the state variable show whether each switch is open or closed.

Listing Five

int state;                          /* last accepted state of port D */
int ChangedFlag;                    /* set when a debounced change is seen */
void CheckSwitch(void) {
   int val;
   val = inp(PIND);                 /* read the raw pin levels */
   if (val ^ state)                 /* something has changed... */
      QueDelay(CheckSwState, 3);    /* ...so look again in 3 ticks (30 ms) */
   else
      ReRunMe(1);                   /* no change: check again next tick */
}
void CheckSwState(void) {
   int val;
   val = inp(PIND);
   if (val ^ state) {               /* still different: a genuine change */
      state = val;
      ChangedFlag = 1;              /* tell the rest of the system */
      }
   QueDelay(CheckSwitch, 3);        /* in either case, resume scanning */
}

The important points to keep in mind involve what occurs in moving from state to state and how each change can be implemented as a function. I usually sit down with pencil and paper and draw a state diagram that describes each state and how the system moves between them.

The Queue Data Structure

The dispatcher uses a single data structure in which it dynamically builds both the ready and the delay queues. As Listing Six illustrates, the task data structure has four fields per entry. Three bits are used in the status byte. If the task is on the ready queue, the ready bit will be set. If the task is on the delay queue then the delay bit will be set, with the delay field containing the number of ticks required for the delay. If the task is on either queue, the busy bit will be set. The function pointer field tpntr points to the task code and the next field points to the next entry in the queue. The task[0] entry is not used, so zero can be used as an empty flag for the queue pointers. However, the task[0].next entry is actually the delay queue head and is used to simplify the code in the DoQueDelay function.

Listing Six

struct {
   char   status;            /* busy, ready, and delay status bits */
   int    delay;             /* delay ticks (delta) when on the delay queue */
   void   (*tpntr)();        /* pointer to the task code */
   char   next;              /* index of next entry in the queue; 0 marks the end */
   } task[TOTALTASKS+1];     /* task[0] is unused except as the delay queue head */

The ready and delay queues both have head pointers that contain the list entry number for the first task on each queue. The next field for that entry points to the next entry in the queue. The last entry will have a zero in this field. For the ready queue, there is also a tail pointer pointing to the last entry. Tasks are added to the bottom or tail entry of this queue, and taken from the top or head of the queue.
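
In outline, the queue pointers are just small index variables into the task array, with zero meaning empty. The exact declarations are in dispatcher.c; the names below follow the text, except TaskTail, which is simply my label for the ready-queue tail pointer, and the types are an assumption:

char TaskHead;        /* index of the first entry on the ready queue; 0 = empty */
char TaskTail;        /* index of the last entry on the ready queue */
char RunningTask;     /* index of the entry currently being executed */
char NewTask;         /* index of the entry just allocated by GetNewTask */
                      /* the delay queue head is kept in task[0].next */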

How the Ready Queue Works

When you pass an address to QueTask it searches the task structure for an empty entry. If there is no room left in the list, it returns a status of zero. Otherwise, it performs the simple linked queue operation of adding the new task to the end of the ready queue and inserting the task's details in the blank entry (there is no relationship between the order of task entries in the task structure and their order in the queues). If this new task is the only task, it will also be the head of the queue. Figure 1 illustrates the data structure after a single task has been added to the ready queue.

Figure 1: Ready queue after queuing one task called TASKA.

The status entry for this task (which QueTask also returns) is set to busy and ready. Figure 2 shows what happens when another task is added. You can navigate from the TaskHead pointer through the queue to the last entry via the next fields.

Figure 2: Ready queue after queuing TASKA and TASKB.

Calling QueTask doesn't actually make anything happen -- it just builds the queue and sets up the head and tail pointers. Running the task at the head of the queue must wait until the Dispatch function is called. If the TaskHead pointer is zero, then Dispatch does nothing. Otherwise, it copies TaskHead to RunningTask and relinks TaskHead to the next entry in the queue. It then calls RunTask, which runs the task via the entry's function pointer field tpntr.

Figure 3 shows what happens to the queue after the Dispatch function has been called. At this point in time, the task is running and pointed to by the RunningTask pointer. The TaskHead pointer has been updated to point to the next task on the queue.

Figure 3: Ready queue after queuing TASKA and TASKB and calling Dispatch. TASKA is now running as indicated by the RunningTask pointer. TaskHead has been relinked to the next task in the queue.

When the user task exits, control returns to RunTask, which clears the status bits and the task's next field. A task can reschedule itself before exiting by calling ReRunMe, which sets NewTask to equal RunningTask. The RunTask function then puts the just completed task back on the tail of the respective queue ready to run again. Otherwise, the queue entry for the task is cleared and that task becomes idle. The RunTask function then returns to Dispatch, which returns to the user's main program loop.
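
Putting the last few paragraphs together, the dispatch sequence is roughly the following. This is a sketch reconstructed from the description, not a copy of dispatcher.c, and the requeuing detail in RunTask is left as a comment:

Dispatch() {
   if (TaskHead == 0)                   /* nothing on the ready queue */
      return;
   RunningTask = TaskHead;              /* take the task at the head... */
   TaskHead = task[TaskHead].next;      /* ...and relink the head pointer */
   RunTask();
}

RunTask() {
   (*task[RunningTask].tpntr)();        /* run the user task until it returns */
   if (NewTask == RunningTask) {
      /* the task called ReRunMe: put it back on the tail of the
         ready or delay queue, as requested */
      }
   else {
      task[RunningTask].status = 0;     /* otherwise the task becomes idle */
      task[RunningTask].next = 0;
      }
}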

The routines that manipulate the queues are surrounded by gintoff and ginton instructions. These are Small-C extensions that turn global interrupts off and on, respectively. They are only required if you allow interrupt routines to add tasks to the queues. While this is possible, it is not recommended; I would suggest that the interrupt routine instead set a flag that is picked up within the main program loop, alongside the call to Dispatch. If the flag is set, the corresponding task can be queued and the flag cleared, ready for the next time.
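
For example, the main loop of Listing One grows into something like the sketch below. Everything here is illustrative: buttonEvent, ButtonIsr, and HandleButton are hypothetical names, and the interrupt qualifier follows the Small-C convention mentioned in the next section:

char buttonEvent;                   /* set only by the interrupt routine */

interrupt ButtonIsr() {
   buttonEvent = 1;                 /* just note the event and return */
}

main() {
   InitMulti();
   while (TRUE) {
      if (buttonEvent) {            /* pick the flag up in the main loop... */
         buttonEvent = 0;           /* ...clear it, ready for next time... */
         QueTask(HandleButton);     /* ...and queue the task outside the interrupt */
         }
      Dispatch();
      }
}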

The Timer Queue

One nice feature of this dispatcher is that the number of simultaneous timers is not limited, nor are there any additional overheads in supporting multiple timers. It does this by arranging the delay queue into ascending delay order such that the value of each delay includes the sum of all previous delays. This means that only the delay value at the head of the queue needs to be decremented at each clock tick to effectively decrement all other delays. For example, if you have three tasks (A, B, and C) on the queue with delays of 5, 8, and 14 milliseconds, respectively, and a system clock tick of 1 millisecond, the delay queue would look like Table 1(a). There is a processing cost in doing this, of course, but the time is expended when a task is put on the delay queue via QueDelay, a considerably more infrequent event than the operations occurring at each clock tick in the DecrementDelay function. Including the links, the initial queue will look like Table 1(b), with *DelayHead pointing at task A. Suppose now a fourth task D with a delay of 10 milliseconds is added to the queue. After adding task D the queue will look like Table 1(c).

Table 1: The timer queue (additions are in italics, changes in bold).
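
Working through the example by hand: with the deltas stored, tasks A, B, and C carry delay values of 5, 3, and 6 ticks (5, 5+3 = 8, and 8+6 = 14 milliseconds from now). Inserting D at 10 milliseconds, QueDelay subtracts as it walks the queue: 10-5 = 5 after A, 5-3 = 2 after B, and 2-6 goes negative at C, so D is linked in ahead of C with a delay of 2, and C's delay is reduced to 4. The running sums still come to 10 and 14 milliseconds, as they should.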

Delays are decremented by the controller's internal timer, which is configured to tick at a rate suitable for your application. The corresponding interrupt routine must call the DecrementDelay function in this module. In Small-C for the AVR controllers, the interrupt qualifier ensures the current C environment is saved while DecrementDelay is executed. You will need to consider this when porting the code to other compilers.
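
The tick-side work is then minimal. The sketch below is reconstructed from the description above, not copied from dispatcher.c (the real routine must also cope with several entries expiring on the same tick):

DecrementDelay() {
   char head;
   head = task[0].next;                 /* the head of the delay queue */
   if (head == 0)
      return;                           /* nothing being timed */
   if (--task[head].delay <= 0) {
      task[0].next = task[head].next;   /* unlink the expired entry... */
      /* ...and move it to the tail of the ready queue so that the
         next call to Dispatch runs it */
      }
}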

Calling QueDelay puts a new task on the delay queue. After some initial checking, it calls DoQueDelay to actually insert the new task. The local variable IntVal is set to the NewDelay value passed in by the caller, Pntr0 is set to *DelayHead, the head of the delay queue (actually task[0].next), and Pntr1 is set to zero. The while loop then moves progressively through the queue, subtracting each entry's delay from IntVal, until either the end of the queue is reached or IntVal goes negative, indicating that the sum of the previous delays exceeds the new delay. At this point, Pntr0 contains the index where the new entry should go and Pntr1 contains the index of the immediately preceding entry, so the assignments task[NewTask].next = Pntr0 and task[Pntr1].next = NewTask insert the new entry and relink the queue structure.

If Pntr0 is zero, the end of the queue was reached: the new delay is larger than all the existing delays, so the entry goes on the end of the queue. If Pntr0 is not zero but equals Pntr1, there were no entries previously on the delay queue and the next field can simply be set to zero. The delay for the new task is set to OldIntVal (the value of IntVal before it went negative), and the delay for the following task is set to the negative of the now-negative IntVal, making it positive again. Finally, the function pointer is filled in, the status is set, and the NewTask and NewDelay variables are cleared.

Possible Changes and Extensions

The delay field in the list structure is defined as an int while in many systems a char may be adequate. This change saves a few bytes and requires only a few casts to implement. Likewise, the status field is not used within the module, so this could also be removed if required.

It is possible to list the same task more than once on a queue. This makes little sense and could cause problems in some systems. The situation can arise, however, where a task has some global responsibility, such as clearing error states, and can be queued from a number of other tasks. In this case, the GetNewTask function could be modified to also search for a matching task pointer and return a status flag if one is found.

An interesting option is to add another task queue (still in the same structure) to give high- and low-priority queues. The low-priority queue works as described here, with tasks being put on the high-priority queue via an interrupt. Assume a lengthy low-priority task is running and an interrupt occurs that must schedule a high-priority task. The interrupt routine calls QueTask to add the task to the high-priority queue, then calls a new function called Preempt. If a high-priority task is already running, Preempt simply returns. Otherwise, it transfers RunningTask to PreemptedTask and calls Dispatch, which calls RunTask, which has been modified to always run tasks from the high-priority queue first and therefore runs the task added by the interrupt routine. Once this task completes, it returns to RunTask, which returns to Dispatch, which restores RunningTask from PreemptedTask and then returns to the Preempt function. Preempt returns to the interrupt, which returns to the originally interrupted low-priority task. Torturous, but simple to implement.

Conclusion

The dispatcher stands alone and can be compiled into a library and simply linked as required. The only change necessary is setting the TOTALTASKS constant to suit your application. This need not, of course, match the total number of tasks, but only the maximum number that will be queued (ready or delayed) at any one time. You should carefully examine what is going on in your program to arrive at this value. In a complex controller with many devices and many states, it may be worthwhile adding code during development to monitor the task structure and display in some way the maximum value NewTask reaches in GetNewTask.

Again, the complete C code for the dispatcher is available electronically, along with a program that demonstrates multitasking. While it is by no means a complete real-time operating system, you will find it entirely adequate for most embedded systems.


Ron, who is currently with the CRC for Meteorology at Monash University in Melbourne, Australia, can be reached at [email protected].

Copyright © 1999, Dr. Dobb's Journal