Reality Checks


Aug02: Embedded Space

Ed is an EE, PE, and author in Poughkeepsie, New York. You can contact him at [email protected].


No plan survives first contact with the enemy.

— Karl von Clausewitz or Helmuth von Moltke or Frederick the Great or any other military theorist

Engineering design depends on models that simplify the real world and allow reasonably direct mathematical treatment of otherwise intractable situations. Over the centuries, we've done pretty well at modeling the real world, as evidenced by how well our structures and materials work.

Programming hasn't achieved that level of dependability and stability, as shown by the number of failures attributed to computers and their supporting infrastructures. It appears impossible to either produce a reliable program without a major development effort or transfer the lessons learned from one project to the next, perhaps because programming tools and techniques have such a fleeting life.

Even for seemingly well-defined problems and projects, each embedded developer must learn an extremely hard lesson: Each model must be validated against the real world for each application. A computationally feasible model necessarily glosses over some gnarly bits of reality that aren't relevant, but precisely which bits a model ignores makes the difference between success and failure.

The Trinity College Fire-Fighting Robots competition in Hartford, CT, provided many examples of how good intentions and workable models can run afoul of the real world. Even embedded developers who know they have a good model of their application and its use can learn from the folks I watched.

Contest attendees also saw a presentation describing how robots contributed to search-and-rescue missions at Ground Zero last year and the lessons learned by a group of experts on the subject. In that situation, the difference between reality and model can be the difference between life and death.

As embedded systems move from the factory floor to the hands of ordinary folks, our models must grow to encompass more than nicely linearized relationships and taut cause-and-effect chains. The gnarly bits may give us the most trouble, but they're also the most important.

The Ideal Model

Although I previously mentioned the Trinity contest (see "Embedded Space," DDJ, April 2002), a brief review is in order. All the rules and details — as well as many interesting photos — are available at http://www.trincoll.edu/events/robot/.

Contestants must design and build a robot that can start from an arbitrary point in a maze, navigate through hallways and rooms, locate and extinguish a burning candle, then return to the starting point. Scoring depends primarily on time, with an assortment of adders, subtracters, and factors to account for various conditions and capabilities.

The maze layout is known ahead of time, so the robot can rely on an internal map. The walls are white and the floor is black, with white stripes at room entrances and at the candle location. Overhead illumination comes from gymnasium-style floodlights. In short, you might start by modeling the layout as a rectangular, binary-colored grid.
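
A first cut at that model can be as simple as a coarse occupancy grid. Here's a minimal sketch in C; the cell counts, the resolution, and the idea of hand-entering the wall data are my assumptions, not contest requirements:

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical maze model: a coarse grid where each cell is either open
   floor or wall.  The 46 x 46 cell count is an illustrative assumption,
   not a contest constant. */
#define GRID_ROWS  46
#define GRID_COLS  46

enum cell { FLOOR = 0, WALL = 1 };

static uint8_t maze[GRID_ROWS][GRID_COLS];   /* filled in from the known layout */

/* Return true if the robot may occupy cell (row, col). */
bool cell_is_open(int row, int col)
{
    if (row < 0 || row >= GRID_ROWS || col < 0 || col >= GRID_COLS)
        return false;                        /* outside the maze counts as wall */
    return maze[row][col] == FLOOR;
}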

This sounds simple enough until you actually think about how you'd structure the code to read the sensors and drive the motors. It becomes more complicated when you begin to consider which sensors and actuators to use. The task becomes nearly intractable when you realize that the whole affair must fit in a self-contained, self-propelled, one-cubic-foot machine.

Bright Lights

Many contestants start by underestimating the difficulty of identifying the candle. After all, it's a bright spot and should be easy to detect, even against a white wall. A simple photocell will suffice, right?

Although the candle flame burns atop a standard dining-table taper, it does not have a rigidly controlled intensity. The candle height varies over a few inches, the flame may flicker in the breeze and, even though it's always seen against a white wall, what lies beyond that wall is completely uncontrolled: A TV camera floodlight halfway across the gym may light up at any time.

Thus, modeling the candle flame as a unique point source of white light exceeding a fixed intensity threshold doesn't work very well at all, which some designers find out the hard way.

The next level of complexity involves comparing the brightest spot of light to its surroundings, with a specific contrast threshold determining when a flame is in view. This requires the robot to separate the spot from the scene and calculate an intensity ratio, which requires more than a simple photocell.
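
One way to make that comparison, sketched here with a purely illustrative contrast threshold, is to sweep the sensor across the room, remember the peak reading, and compare it against the average of everything else:

#include <stdint.h>
#include <stdbool.h>

#define CONTRAST_RATIO  3   /* peak must be 3x the background; illustrative only */

/* Decide whether one sweep of brightness samples contains a flame-like
   spot.  samples[] holds one ADC reading per angular step of the scan. */
bool flame_in_view(const uint16_t samples[], int count)
{
    uint32_t sum = 0;
    uint16_t peak = 0;
    int i;

    if (count < 2)
        return false;

    for (i = 0; i < count; i++) {
        if (samples[i] > peak)
            peak = samples[i];
        sum += samples[i];
    }

    /* Background = average of everything except the peak itself. */
    uint32_t background = (sum - peak) / (uint32_t)(count - 1);
    if (background == 0)
        background = 1;             /* avoid dividing by zero in a dark room */

    return (peak / background) >= CONTRAST_RATIO;
}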

In fact, at least one robot scanned the scene with a TV camera and applied image processing to track a candle-flame-sized hot spot. Fairly obviously, that computational load demanded a hefty processor with nontrivial programming.

Realizing that a candle flame emits more than just white light provides a different model. An infrared sensor should be able to detect the heat of a candle flame against the relatively cool background of the maze and, because flames are very hot, a fixed threshold ought to work fine.

It turns out that candle flames are not the only bright IR emitters in the gym. The overhead lights have high-pressure sodium bulbs with a very strong emission line centered exactly on the peak sensitivity of silicon IR photodetectors. Even an excellent optical filter can't separate direct candle IR from reflected HP-Na IR, so the IR flame model doesn't work well at all.

The hot particles in flames also emit ultraviolet light, which comes as a pleasant surprise to many builders. The Hamamatsu R2868 UV-TRON, an industrial flame sensor, detects radiation within a very narrow band of ultraviolet wavelengths using the photoelectric effect and gas-discharge multiplication. You can think of this gizmo as a Geiger tube for UV, because it produces current pulses from a 500-V supply (which introduces some challenges of its own for folks accustomed to 5-V logic levels!) in response to UV photons.

The UV-flame model works particularly well, as normal indoor illumination has very little energy in the UV part of the spectrum. A candle produces a torrent of pulses that's significantly different from the background level, so a robot can scan an entire room of the maze from the doorway.

The competition site, alas, is anything but normal. Photographic flash tubes produce so much UV that the rules forbid spectators from taking pictures during a robot's run. Unfortunately, some people don't realize why the rules apply to them and an errant flash has ruined many an otherwise perfect trial.

Last year, the Trinity sponsors installed fluorescent lamps over the mazes to provide shadow-free illumination. That turned out to be a disaster because fluorescent tubes emit copious quantities of UV. Every UV-TRON screamed "FIRE!" everywhere it went in the maze. This year, they're back to the HP-Na overhead lights, although the lights directly over the mazes were turned off.

Despite those problems, the UV model seems to be the key to detecting a candle flame, if only because visible and IR sensing works so poorly. Nearly every robot at Trinity sprouts a UV-TRON or two.
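
The detection logic itself can be almost trivial once the pulses are counted. Here's a sketch that assumes the UV-TRON driver toggles an interrupt pin, that the target supplies a delay_ms() routine, and that the pulse-count threshold gets calibrated in the actual gym; all three are my assumptions:

#include <stdint.h>
#include <stdbool.h>

extern void delay_ms(unsigned ms);      /* assumed platform delay routine */

/* Pulses from the UV-TRON driver arrive as edges on an interrupt pin;
   the ISR just counts them.  'volatile' because the ISR and the main
   loop share the counter. */
static volatile uint16_t uv_pulses;

void uv_pulse_isr(void)                 /* hook to the actual interrupt vector */
{
    uv_pulses++;
}

/* Count pulses over a fixed window and compare against a threshold.
   Both numbers are illustrative and must be calibrated on site, where
   flash tubes and overhead lighting set the real background rate. */
#define WINDOW_MS    200
#define FLAME_RATE   20     /* pulses per window with a candle in the room */

bool flame_detected(void)
{
    uv_pulses = 0;
    delay_ms(WINDOW_MS);
    return uv_pulses >= FLAME_RATE;
}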

Strong Fields

Because the maze layout is known, navigation shouldn't pose much of a problem. After all, you can easily count motor or wheel rotations, convert that into linear distance, and update your robot's internal map. Right?

Some robots have two driven wheels and a caster, others have three wheels in a triangle, one had four swiveling wheels, and others skid-steer on tank treads. All of these have some trouble tracking a perfectly straight line: The ability to steer implies the ability to move off course.

Counting wheel turns may be easy, but the conversion to distance poses some difficulties. Wheels, particularly hard disks, slip and skid even when moving in a straight line. Softer wheels both compress irregularly and erode easily, changing the count-to-distance coefficient in real time. Treads are a computational nightmare. Miscalculating the distance traveled by each side of the robot provides the opportunity to veer into a wall while driving along a theoretically straight line.
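
Even the idealized arithmetic shows how quickly small per-wheel errors add up. Here's a sketch of differential-drive dead reckoning; the wheel diameter, encoder resolution, and wheelbase are placeholder numbers you'd measure on the real machine:

#include <math.h>

#define PI              3.14159265358979

/* Placeholder geometry -- measure these on the real robot. */
#define WHEEL_DIAM_IN   2.5     /* wheel diameter, inches */
#define COUNTS_PER_REV  128.0   /* encoder counts per wheel revolution */
#define WHEELBASE_IN    6.0     /* distance between the drive wheels, inches */

#define IN_PER_COUNT    (PI * WHEEL_DIAM_IN / COUNTS_PER_REV)

struct pose {
    double x, y;        /* inches, maze coordinates */
    double heading;     /* radians */
};

/* Update the estimated pose from the encoder counts accumulated since the
   last call.  Slip, skid, and tread erosion silently corrupt the estimate,
   which is why wall sensing has to correct it. */
void odometry_update(struct pose *p, long left_counts, long right_counts)
{
    double dl = left_counts  * IN_PER_COUNT;
    double dr = right_counts * IN_PER_COUNT;
    double dist   = (dl + dr) / 2.0;            /* travel of the robot center */
    double dtheta = (dr - dl) / WHEELBASE_IN;   /* change in heading */

    p->heading += dtheta;
    p->x += dist * cos(p->heading);
    p->y += dist * sin(p->heading);
}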

Because dead-reckoning isn't adequate over the length of the maze, designers must include local corrections to adjust the robot's alignment. The rules include penalty points for touching the walls, providing a strong incentive for noncontact wall sensing.

Many contestants attempt to use IR emitters and detectors to measure the distance to nearby walls. A photodiode can easily detect the reflected light from a simple on-off emitter over the range of several inches, an entirely adequate distance for maze walls.

At least in the dark, that is. Because of the high ambient IR levels in the gym, sensing a DC-switched IR emitter simply doesn't work. In fact, the photodiode will return a jumble of unexpected values to a microcontroller sampling its waveform, due to the 120-Hz energy from those HP-Na lights.

Sharp makes a very handy noncontact distance sensor that uses internally modulated and demodulated IR. Internal optical filters reject other wavelengths and electrical filters select just the modulated signal. They work surprisingly well, but seem more complex than many people believe is warranted, at least until they see how poorly DC IR sensing works in the real maze.
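
The Sharp sensors report an analog voltage that isn't proportional to distance, so one common approach is a measured calibration table with interpolation between points. The ADC counts and distances below are placeholders; you'd record real pairs with a maze wall held at known distances from the sensor:

#include <stdint.h>

/* One calibration point: raw ADC reading vs. measured distance.
   These numbers are placeholders, not datasheet values. */
struct cal_point {
    uint16_t adc;       /* 10-bit ADC counts */
    uint16_t dist_mm;   /* measured distance */
};

/* Table ordered by decreasing ADC value (the sensor reads high up close). */
static const struct cal_point cal[] = {
    { 600, 100 },
    { 450, 150 },
    { 330, 200 },
    { 250, 300 },
    { 180, 400 },
};
#define CAL_POINTS  (sizeof cal / sizeof cal[0])

/* Convert a raw reading to millimeters by linear interpolation between
   bracketing calibration points; clamp outside the table. */
uint16_t ir_distance_mm(uint16_t adc)
{
    unsigned i;

    if (adc >= cal[0].adc)
        return cal[0].dist_mm;                  /* closer than the first point */
    for (i = 1; i < CAL_POINTS; i++) {
        if (adc >= cal[i].adc) {
            uint16_t span_adc  = cal[i - 1].adc - cal[i].adc;
            uint16_t span_dist = cal[i].dist_mm - cal[i - 1].dist_mm;
            return cal[i - 1].dist_mm +
                   (uint32_t)(cal[i - 1].adc - adc) * span_dist / span_adc;
        }
    }
    return cal[CAL_POINTS - 1].dist_mm;         /* farther than the last point */
}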

Several contestants incorporated solid-state magnetic compasses into their designs, with the intent of aligning the robot's course to the external world. Within a small area, it seemed reasonable to model the earth's magnetic field as stable and unidirectional, even if it doesn't point directly to the North Geographic Pole. They discovered that the wooden gym floor hides a mesh of steel rebar and beams that causes compass needles and robots to wander aimlessly through the maze.

The moral of the story isn't that these folks are dumb. It's just that simple models describing specific, fairly limited situations may not cope with the real world. Fire-fighting robot designers regard all this as a learning experience and return each year with improved models that better match reality.

Urban SAR

There were several seminar speakers at the competition as well. Mark Micire, from the University of South Florida in Tampa, presented "Robots and the WTC Disaster" and described the ongoing work of the Center for Robot-Assisted Search And Rescue (CRASAR). Many of the images he showed come from the collection at http://www.csee.usf.edu/robotics/USAR.

CRASAR is responsible for developing robots, both tele-operated and autonomous, that can enter hazardous locations, search for victims, assess the degree of structural damage, and act as forward observers to direct human rescuers. The Center had received its charter and a significant collection of decommissioned military robotic hardware in July 2001.

Urban and rural SAR operations proceed along different lines with different equipment because cities and forests pose completely different challenges. CRASAR has been concentrating on urban SAR, particularly in structures damaged by either natural disasters or military action. Their operational model includes estimates of structural failure modes, the likely extent of damage, and the techniques and hardware most likely to be helpful in those situations.

It seems that buildings generally collapse in reasonably predictable ways, with ceiling and roof panels draping over crushed walls and partitions to form pockets where victims can survive for relatively long times. Access to a damaged structure poses no problem because support vehicles can drive close to it. Power may not be available in the immediate area, but most blackouts are fairly localized.

The CRASAR team had considerable experience working with firefighters and rescue teams, which gave them confidence that their models had a reasonable basis in reality. Several members took the specialized training required for participation in urban SAR missions, which gave them access to hazardous situations that excluded even firefighters and other rescue personnel.

They left for New York City in the afternoon of September 11, 2001, aboard a borrowed van stuffed with as much hardware as it could carry. Based on their experience, they had some expectation of what they would see and do when they arrived. What they found in the nightmarish landscape that had been the World Trade Center forced an immediate change in plans.

SAR in Hell

The amount of energy released during the fall of the towers placed the event well outside any previous SAR model. By compressing the structure to about 3 percent of its previous volume, the energy eliminated essentially all the internal spaces that normally shelter survivors. That same energy scattered wreckage across 15 acres and hindered entry at the most promising access points.

The machines CRASAR used at the WTC scene were essentially remote-controlled crawlers with video cameras and lights, not true robots. Some used RF links and others used wired tethers, but each required direct human control. CRASAR's research programs will change that by reducing the operator's workload and improving the robot's performance in difficult situations.

Because humans had to backpack their equipment across the debris field to tiny openings, smaller robots turned out to be more useful than expected. In some situations a CRASAR team deployed a crawler by swinging it lasso-style on the end of its tether and tossing it across a gap into a hole. They now know that robots must withstand impact with jagged debris in the course of normal use.

Although wireless data transmission is currently chic, radio-controlled robots proved troublesome because the dense wreckage formed a nearly perfect RF shield: A transmitter only a few feet away might suddenly return no signal at all. A rope allowed them to retrieve wireless units from shielded areas and unjam crawlers dangling from protruding debris. Robots that continued forward during a radio blackout might simply vanish over a steep edge, which favored stop-and-wait programming over continuing until contact resumed.
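
That lesson translates directly into firmware: treat loss of the control link as an event with a defined response instead of letting the last command run on. Here's a sketch, assuming a hypothetical free-running millisecond counter and a motor-stop routine on the target:

#include <stdint.h>
#include <stdbool.h>

#define LINK_TIMEOUT_MS  500     /* illustrative; tune to the radio's update rate */

/* Both routines are assumed to exist on the target platform. */
extern uint32_t millis(void);    /* free-running millisecond tick */
extern void motors_stop(void);   /* bring both drive motors to a halt */

static uint32_t last_packet_ms;
static bool     link_up = true;

/* Call whenever a valid command packet arrives from the operator. */
void link_packet_received(void)
{
    last_packet_ms = millis();
    link_up = true;
}

/* Call from the main loop.  If the operator goes silent, stop and wait
   rather than driving blindly over an edge. */
void link_watchdog(void)
{
    if (link_up && (millis() - last_packet_ms) > LINK_TIMEOUT_MS) {
        motors_stop();
        link_up = false;
    }
}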

Grippers seem to be a mandatory feature of any robot, but none were used in the pile. It turns out that manipulator arms could prove to be either a blessing or a curse. They enable investigation of otherwise unidentifiable objects and get snagged on dangling wires. They increase the size of the vehicle, reduce its ability to get into tight places, and provide a high-leverage point of failure. It all depends on the SAR model.

Similarly, heat-detecting cameras can normally identify warm bodies against cool backgrounds, but many of the openings into the rubble emitted oven-hot air that blinded IR-sensitive cameras. Deep inside, the pile's internal heat melted cameras, control electronics, and crawler treads. Another search group used an articulated video camera on an 18-foot pole until both the camera and the end of the pole drooped.

The CRASAR team contacted a manufacturer to ask what effect a 10:1 bleach rinse might have on a robot's internal mechanism. Even though such decontamination solutions exceeded the robot's "splash resistant" design, they hosed it down anyway, preferring a dead robot to a diseased human.

In short, nearly everything these professionals thought they knew was either wrong or an underestimate. What they learned and how they learned it should serve as a checklist for embedded designers.

Plan for Reality

Any plan reflects the planner's assumptions about the situation around the plan. Military plans must proceed based on an adversary's capabilities, engineering plans must respect physical constraints, and business plans must include legal requirements. Essentially by definition, you cannot plan for something you didn't consider or for something you regard as impossible.

The best you can hope for is a predictable failure in the face of the unexpected. What a "fail safe" design should do depends critically on expectations: Turning off a pump motor to prevent damage might seem sensible, but probably isn't what you want during an emergency. Sometimes the correct plan allows the destruction of the device itself, as codified by Asimov's Three Laws of Robotics.

Trinity contestants and CRASAR researchers have first-hand experience with what happens when reality exceeds the bounds of their design models. Both groups are hard at work on new models and new designs, ones that match their new knowledge.

Make sure your plans include a liberal dose of reality, because your gizmo will certainly encounter it.

Reentry Checklist

Although I'm sure Hamamatsu has some competition in the UV flame sensor field, its UV-TRON has a lock on the Trinity robot market. The R2868 datasheet lives at http://usa.hamamatsu.com/cmp-pdfs/detectors/C3704_TPT_1007E01.pdf.

The most-favored Sharp IR distance sensor seems to be the GP2D12, with a datasheet at http://www.sharp.co.jp/ecg/opto/products/pdf/en/osd/optical_sd/gp2d12_e.pdf.

CRASAR is affiliated with the National Institute for Urban Search and Rescue (NIUSR), with details at http://www.niusr.org/.

DDJ

