Testing and Debugging DSP Systems: Part 1

Part one of this six-part series introduces the hardware used for debugging, the debugging challenges facing DSP programmers, and debugging methodologies.


February 22, 2007
URL:http://www.drdobbs.com/tools/testing-and-debugging-dsp-systems-part-1/197008324

In software development, perhaps the most critical, yet least predictable, stage in the process is debugging. Many factors come into play when debugging software applications, and among them time is of the utmost importance. The time required to set up and debug a software application has a significant impact on time-to-market, on meeting customer expectations, and on the financial success of the product. The integration of an application follows a model of multiple spirals through the stages of build, load, debug/tune, and change, as shown in Figure 1.


Figure 1 The integration and debug cycle. The goal is to minimize the number of times around this loop as well as the time spent in each segment.

Debugging embedded real-time systems is part art and part science. The tools and techniques used in debugging and integrating these systems have a significant impact on the amount of time spent in the debug, integration, and test phase. The more visibility we gain into the running system, the faster we are able to detect and fix bugs.

One of the more traditional and simplest ways of gaining visibility into the system is to add messages at certain points in the software to output information about the state of the system. These messages can be in the form of "printf" statements output to a monitor, or the blinking of an LED or set of LEDs to indicate system status and health. Each function or task can begin by outputting a status message to indicate that the system has made it to a certain point in the program. If the system fails at some point, examining the output messages helps the engineer isolate the problem by showing where the system was last "good." Of course, instrumenting the system in this way introduces overhead, which changes the behavior of the system. The engineer must either remove the instrumentation after the system has been tested and re-validate the system before shipping, or ship the system with the instrumented code left in. The engineer must usually ship what is being tested (including this instrumentation) and test what is going to be shipped.
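
As a simple illustration of this approach, the status messages and LED checkpoints can be wrapped in macros so the instrumentation can be compiled in or out of a build. This is a minimal sketch; the LED register address and the TRACE/LED_CHECKPOINT names are hypothetical, not any particular vendor's facility.

#include <stdio.h>

/* Hypothetical memory-mapped LED status register, for illustration only. */
#define LED_STATUS_REG (*(volatile unsigned int *)0x80000010u)

/* Build with -DTRACE_ENABLED to include the instrumentation; leave it
   undefined to produce the uninstrumented image. */
#ifdef TRACE_ENABLED
#define TRACE(msg)        printf("TRACE %s:%d %s\n", __FILE__, __LINE__, (msg))
#define LED_CHECKPOINT(n) (LED_STATUS_REG = (n))
#else
#define TRACE(msg)        ((void)0)
#define LED_CHECKPOINT(n) ((void)0)
#endif

void process_frame(void)
{
    TRACE("entering process_frame");
    LED_CHECKPOINT(1u);   /* checkpoint: frame processing reached */

    /* ... filtering, FFT, and other signal processing work ... */

    TRACE("leaving process_frame");
    LED_CHECKPOINT(2u);   /* checkpoint: frame processing completed */
}

Either way, the overhead argument above still applies: the instrumented and uninstrumented builds behave differently, so whichever one ships is the one that should be tested.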

Engineers can use more sophisticated debug approaches to reduce the time spent in the integration and test phase. One approach is the use of a device called a "debug monitor." A debug monitor is a relatively small piece of code embedded in the target application or integrated into the microcontroller or DSP core that communicates over a serial interface to a host computer[1]. These monitors provide the ability to download code, read and write DSP memory and registers, set simple and complex breakpoints, single-step the program, and profile source code at some level.
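
The sketch below shows the general shape of such a monitor: a small command loop, entered from a trap or breakpoint handler, that services read-memory, write-memory, and resume requests from the host over a serial link. The command codes and the uart_read_byte/uart_write_byte routines are assumptions made for this illustration, not any particular vendor's monitor protocol.

#include <stdint.h>

/* Serial I/O assumed to be provided by the target's UART driver. */
extern uint8_t uart_read_byte(void);
extern void    uart_write_byte(uint8_t b);

static uint32_t read_word(void)
{
    uint32_t w = 0;
    for (int i = 0; i < 4; i++)
        w = (w << 8) | uart_read_byte();
    return w;
}

static void write_word(uint32_t w)
{
    for (int i = 3; i >= 0; i--)
        uart_write_byte((uint8_t)(w >> (8 * i)));
}

/* Hypothetical one-byte command codes. */
enum { CMD_READ_MEM = 0x01, CMD_WRITE_MEM = 0x02, CMD_RESUME = 0x03 };

/* Called from the monitor's trap/breakpoint handler; loops until the
   host tells the target to resume execution. */
void monitor_command_loop(void)
{
    for (;;) {
        uint8_t cmd = uart_read_byte();
        if (cmd == CMD_READ_MEM) {
            volatile uint32_t *addr = (volatile uint32_t *)(uintptr_t)read_word();
            write_word(*addr);              /* return the word at the requested address */
        } else if (cmd == CMD_WRITE_MEM) {
            volatile uint32_t *addr = (volatile uint32_t *)(uintptr_t)read_word();
            *addr = read_word();            /* poke memory; this is also how a software
                                               breakpoint opcode gets planted */
        } else if (cmd == CMD_RESUME) {
            return;                         /* drop back into the application */
        }
    }
}

Breakpoints and single-stepping build on the same primitives: the host writes a breakpoint opcode into program memory, resumes the target, and regains control when the resulting trap re-enters the loop.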

For systems with ROM-based software programs, another form of debug monitor, called a ROM emulator, is used. A ROM emulator is a plug-in replacement for the target system's ROM devices. The plug-in device connects to the host computer over a link (serial, parallel, Ethernet, and so on). A ROM emulator provides the engineer with faster turnaround during the debugging process: instead of re-programming the ROM device with a ROM programmer for each software iteration, the code can be downloaded into fast RAM on the ROM emulator, and the system then runs as if the code were executing out of the ROM device.

Debug monitors and ROM emulators certainly provide a large benefit during the embedded system debug phase. But as embedded processors become faster and faster, and as systems migrate to system-on-a-chip solutions, visibility into the processor's internals presents challenges that require even more sophisticated debug solutions.

Integrating and debugging complex digital systems also requires the use of sophisticated and complex debug tools such as logic analyzers. A logic analyzer is a tool that allows the system integrator to capture and display digital signals in various formats (bit, byte, and word). Using a logic analyzer, the system integrator can observe and analyze the digital behavior of the system's buses and signals.

Logic analyzers are powerful tools that are also portable and very versatile. They do, however, involve a learning curve as well as a high initial investment (depending on how much capability is needed and what clock rates must be supported). By using the tool's triggering mechanisms, the system integrator can capture data into large buffers within the logic analyzer. This data can be pre-trigger data, post-trigger data, or a combination. Traces can be saved and printed, and the data can be filtered in a number of different ways.
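
The pre-trigger/post-trigger capture model can be pictured as a circular buffer that records continuously and stops a fixed number of samples after the trigger condition fires. The following is a software analogue of that mechanism, written here only to illustrate the idea; the trigger pattern and buffer sizes are arbitrary.

#include <stdbool.h>
#include <stdint.h>

#define CAPTURE_DEPTH 1024u   /* total samples retained (pre- plus post-trigger) */
#define POST_TRIGGER   256u   /* samples still recorded after the trigger fires  */

static uint16_t capture_buf[CAPTURE_DEPTH];
static uint32_t wr_idx;
static uint32_t post_remaining = POST_TRIGGER;
static bool     triggered;

/* Call once per sampled clock; returns true when acquisition is complete,
   at which point capture_buf holds both pre- and post-trigger samples. */
bool capture_sample(uint16_t sample)
{
    capture_buf[wr_idx] = sample;              /* continuously overwrite the oldest data */
    wr_idx = (wr_idx + 1u) % CAPTURE_DEPTH;

    /* Example trigger condition: a particular pattern on the sampled bus. */
    if (!triggered && (sample & 0xFF00u) == 0xAB00u)
        triggered = true;

    if (triggered) {
        if (post_remaining == 0u)
            return true;
        post_remaining--;
    }
    return false;
}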

A fundamental disadvantage of using a logic analyzer for embedded DSP software debug is that it is a complex hardware debug tool being used for software debug. The degree of success with a logic analyzer depends on how hardware-savvy the system integrator is, since the tool is hardware-debug oriented and may require complex setup and configuration to get the right information to analyze.

Another disadvantage of using a logic analyzer for system debug is signal visibility. A logic analyzer must be connected to the pins of the DSP device in order to gain visibility into the system, so visibility is limited to what the DSP's pins expose. As DSP devices become more integrated into system-on-a-chip designs, the ability to see what is going on inside the device diminishes.

Footnote
1. Arnold Berger provides a very intuitive overview of debuggers in Embedded Systems Design, Chapter 6, CMP Books, copyright 2002.

Vanishing visibility
In 1988, the embedded system industry went through a change from conventional In Circuit Emulation[2] to scan based emulation. This was motivated by design cycle time pressures and newly available space on the embedded device for on-chip emulation. Scan-based, or JTAG, emulation is now widely preferred over the older and more expensive "in-circuit emulation," or "ICE" technology.

Debug Challenges for DSP
There have been a number of industry forces changing the DSP system development landscape:

System-level integration – As application complexity has increased and system-on-a-chip integration has led to smaller footprints, visibility into the system components has diminished (Figure 2). Embedded (on-chip) system buses cannot be probed directly, which creates an instrumentation challenge, and wider system buses raise system bandwidth issues for any instrumentation that exports their activity. Program control in these environments is also difficult.


Figure 2 System level integration leads to diminishing visibility (courtesy of Texas Instruments)

In order to restore visibility, DSP vendors have addressed the issue on several fronts:

On-chip instrumentation – As systems become more integrated, visibility into the device's operation is increasingly blocked (Figure 3). Bus-snooping logic analyzer functions have therefore been implemented in on-chip logic. Examples include triggering logic to find the events of interest, trace collection and export logic to allow viewing of those events, and logic that maximizes export bandwidth per available pin on the DSP core. Debug control is through an emulator, which extracts the information of interest (a hypothetical register-level sketch of arming such logic follows Figure 3).


Figure 3 Vanishing visibility requires advanced debug logic on-chip (courtesy of Texas Instruments)
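
To make the idea concrete, arming on-chip trigger and trace logic amounts to programming a small block of debug registers. The register names, addresses, and bit fields below are hypothetical placeholders, a minimal sketch rather than any real device's debug register map.

#include <stdint.h>

/* Hypothetical on-chip debug register block; names, offsets, and bit
   definitions are placeholders for illustration, not a real DSP's map. */
#define DBG_BASE        0xF0000000u
#define DBG_TRIG_ADDR   (*(volatile uint32_t *)(DBG_BASE + 0x00u)) /* address comparator   */
#define DBG_TRIG_CTRL   (*(volatile uint32_t *)(DBG_BASE + 0x04u)) /* trigger type/enable  */
#define DBG_TRACE_CTRL  (*(volatile uint32_t *)(DBG_BASE + 0x08u)) /* trace export control */

#define TRIG_ON_WRITE        (1u << 0)  /* fire when the comparator address is written   */
#define TRIG_ENABLE          (1u << 1)
#define TRACE_PC_AND_DATA    (1u << 0)  /* export program counter and data trace packets */
#define TRACE_START_ON_TRIG  (1u << 1)  /* begin exporting only after the trigger fires  */

/* Arm the on-chip logic to start exporting trace when a chosen buffer is written. */
void arm_trace_on_buffer_write(uint32_t buffer_addr)
{
    DBG_TRIG_ADDR  = buffer_addr;
    DBG_TRIG_CTRL  = TRIG_ON_WRITE | TRIG_ENABLE;
    DBG_TRACE_CTRL = TRACE_PC_AND_DATA | TRACE_START_ON_TRIG;
}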

Off-chip collection foundation – Once the data is exported from the DSP core, it must be stored, processed, filtered, and formatted so that test engineers can interpret it meaningfully.
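
On the host side, that means unpacking the raw export stream into records before filtering or displaying it. The record layout and field meanings below are invented for this illustration; real trace formats are vendor-specific and usually compressed.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical 8-byte trace record exported from the target:
   a timestamp, a record type, and a 32-bit payload such as a PC value. */
typedef struct {
    uint16_t timestamp;
    uint16_t type;        /* 0 = program counter, 1 = data access, ... */
    uint32_t payload;
} trace_record_t;

/* Walk a raw capture buffer and print only the program-counter records. */
void dump_pc_trace(const uint8_t *raw, size_t raw_len)
{
    for (size_t off = 0; off + sizeof(trace_record_t) <= raw_len;
         off += sizeof(trace_record_t)) {
        trace_record_t rec;
        memcpy(&rec, raw + off, sizeof rec);   /* avoid unaligned access on the host */
        if (rec.type == 0u)
            printf("t=%5u  PC=0x%08x\n", rec.timestamp, (unsigned)rec.payload);
    }
}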

Data visualization capability – The DSP tooling also includes the ability to easily view the collected data in different configurations. The entire chain is shown in Figure 4: the logic analyzer functions are now on-chip, control and instrumentation collection run primarily through the emulation controller, and the data is displayed on the host in a visualization container. The key challenge, then, is to configure the system to collect the right data at the right time to catch the right problem.


Figure 4 DSP tools are used to visualize debug data extracted from the DSP (courtesy of Texas Instruments)

Application space diversity – DSP applications are becoming more diverse, and this presents challenges to DSP test and integration engineers. Different points across this application spectrum require different cost models for debug support.

User development environment – The development environment for DSP developers is changing, and DSP debug technologies are changing to accommodate these new environments. DSP engineers are transitioning debug platforms from desktop PCs to laptops that can be carried into the field for debugging in the customer's environment. Portable remote applications require portable DSP debug environments.

Continued clock rate increases – As DSP core clock speeds increase, more data is required to perform debugging. In fact, the amount of data required for debug and tuning is directly proportional to the DSP core clock speed. More DSP pins, and more data per pin, are required to maintain the required visibility into the behavior of the device.
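
As a rough, back-of-the-envelope illustration of that proportionality (the trace rate of 8 bits per cycle is an assumed figure, not a device specification), the export bandwidth needed scales linearly with the core clock:

#include <stdio.h>

/* Illustrative only: estimate the trace export bandwidth needed to stream
   a fixed number of trace bits on every core clock cycle. */
int main(void)
{
    const double bits_per_cycle = 8.0;   /* assumed (compressed) trace rate */
    const double clock_mhz[] = { 100.0, 300.0, 600.0, 1000.0 };
    const size_t n = sizeof(clock_mhz) / sizeof(clock_mhz[0]);

    for (size_t i = 0; i < n; i++) {
        double mbit_per_s = clock_mhz[i] * bits_per_cycle;  /* MHz x bits = Mbit/s */
        printf("%6.0f MHz core -> %7.0f Mbit/s of trace to export\n",
               clock_mhz[i], mbit_per_s);
    }
    return 0;
}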

The different levels of DSP debug capability provide a range of benefits in the integration process. The out-of-box experience allows the user to become productive as quickly as possible. Basic debug allows the DSP developer to get the application up and running. Advanced debug capabilities, such as the ability to capture high-bandwidth data in real time, allow the developer to get the application running in real time. Basic tuning capabilities provide the ability to perform code size and performance tuning.

The combined on- and off-chip emulation capabilities provide a variety of benefits. Real-time execution control provides standard capabilities such as step, run, breakpoints (on the program counter), and data watchpoints. Advanced event triggering (AET) capabilities provide visibility into, and control of, the programmer's model. Real-time data collection provides visibility into algorithm behavior while tuning a stable program. Trace capabilities provide real-time visibility into program flow while debugging an unstable program.

Used with the permission of the publisher, Newnes/Elsevier, this series of six articles is based on chapter nine of "DSP Software Development Techniques for Embedded and Real-Time Systems," by Robert Oshana.

Part two explains the workings of the JTAG (IEEE 1149.1) boundary-scan technology. It defines the test pins and the test process associated with a JTAG port.

Footnote
2. In-circuit emulation technology replaces the target processor with a device that acts like, or "emulates," the original device, but has additional pins to make internal structures on the device, like buses, visible. ICE modules allow full access to the programmer model of the processor. These devices also allow for hardware breakpoints, execution control, trace, and other debug functions.
