Ed is an EE, PE, and author in Poughkeepsie, New York. You can contact him at [email protected]
Who says human behavior never changes? At the Embedded Systems Conference, I saw several hands-free cellphoners sitting on benches while talking, rather than caroming off hallway pillars, and obnoxiously over-amped show-floor hucksters were conspicuous by their absence.
Perhaps we're seeing evolution in action? One can hope!
This month, I'll cover some highlights of Boston's overlapping Embedded Systems Conference and Software Development East shows last November. Although the shows remain smaller than a few years ago, there was no shortage of useful information and I see definite signs that embedded systems are entering the mainstream.
Well, at least a little bit. Enterprise programmers (I think of Star Trek every time I read that adjective) measure program performance with a stopwatch and can't imagine why you'd ever look at assembler listings. We know better!
Nick Tredennick's keynote address reviewed his Zeros Model of the microprocessor universe. Basically, he posits four market segments: Zero Cost, Zero Power, Zero Delay, and Zero Volume. While some systems lie entirely within one segment, the most interesting and profitable projects fall into the areas defined by at least two Zeros.
The Zero Cost segment represents consumer goods, where the cost of each unit sold drives the entire design process: If the unit cost exceeds a very small number, the product simply won't sell. Amortizing the engineering effort over the total volume permits a surprising amount of work devoted to squeezing acceptable results from seemingly inadequate hardware.
Worldwide annual shipments of microprocessors added up to about eight billion units in recent years. Eight-bit CPUs accounted for just over half of the total, 4-bit CPUs just under a quarter, and the remaining quarter divides neatly between 32-bit CPUs and "all others."
How many ads have you seen touting the latest 4-bit development tools? Even 8-bitters get relatively little press these days, even though their annual sales add up to nearly one CPU per person worldwide!
Zero Power products should run, as Tredennick puts it, "on weak ambient light" or from a soldered-in-place battery that outlives the purchaser. This definition encompasses portable equipment that chews up far too many batteries, as well as always-on devices that contribute to the huge base load in electrical power distribution systems.
Ready for another surprise? Find your most recent electrical bill and calculate your total cost per kilowatt-hour, in cents, including all taxes and surcharges. Your number will probably be lower than our 9 cents per kWh (unless you're in California). Multiply that by the wattage of your favorite always-on gizmo, multiply by 9 (roughly the 8,760 hours in a year divided by 1,000), and divide by 100 to find, in dollars, how much it costs to keep that thing running all year. (Hint: If you don't know the numbers, figure a buck a year per always-on watt.)
For example, a junker PC on my network provides site-wide ad filtering using Privoxy atop Linux. That PC draws 35 watts all day every day and costs $26 per year. Add 10 W each for the cable modem and firewalling router, a few watts for the phone answering machine, a clock radio here and a cordless phone there and I have perhaps 100 watts at 24×7.
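That back-of-envelope rule reduces to a one-line formula. Here's a minimal sketch (the function name and structure are mine, not from the column); the 35-W junker PC lands within a couple of dollars of the figure above, depending on your exact rate:

```cpp
#include <cassert>
#include <cmath>

// Back-of-envelope annual cost, in dollars, of a device drawing `watts`
// continuously, at an electricity rate given in cents per kilowatt-hour.
double annual_cost_dollars(double watts, double cents_per_kwh) {
    const double hours_per_year = 24.0 * 365.0;            // 8760 hours
    double kwh_per_year = watts * hours_per_year / 1000.0; // Wh -> kWh
    return kwh_per_year * cents_per_kwh / 100.0;           // cents -> dollars
}
```

At 9 cents/kWh, one always-on watt costs about 79 cents a year, which is where the "buck a year per watt" hint comes from.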
There's obviously considerable room for improvement in the Zero Power category. Running from weak ambient light may be a joke, but reducing the wattage of line-powered devices would have some very desirable consequences. Even better, imagine if your hyperthyroid handheld ran for months instead of hours on a charge.
Zero Delay devices must perform computations and deliver results instantly, if not sooner. These devices traditionally had no overlap with either the Zero Cost or Zero Power domains, but even such lowly devices as portable MP3 players now demand all three attributes.
The Zero Volume market segment includes essentially everything you see advertised in technical publications: routers, servers, high-end PCs, and anything else that doesn't sell in the millions. These devices also tend to fall in the Zero Delay group, generally aren't Zero Power and, to keep their suppliers in business, exclude Zero Cost design methods. The Zero Volume segment is a very small circle when compared to Zero Cost products, but it gets oodles of coverage.
The entire PC marketplace used to occupy the area at the intersection of Zero Cost and Zero Delay, well outside the Zero Power domain. Tredennick explained why this stopped being true in the late 1990s, when CPUs became fast enough that they no longer limited overall performance.
Memory and disk access times cannot keep pace with CPU cycle times because of the physics involved in shuttling data across nontrivial distances, let alone to mechanical devices. As internal CPU clocks exceed 1 GHz, each external memory access requires about 50 clocks and a disk access can take tens of millions of cycles. Even local caching cannot eliminate that effect because each cache miss causes a disproportionate delay as the system scrambles to find something else to do.
As a result, people now find that their PCs are "fast enough" for all practical purposes and upgrading a PC (pronounced "trashing the old one") based on CPU speed brings no particular benefit. Worse, marketing CPUs based on Zero Delay criteria won't deliver PC customers to the cash register.
As the concept we associate with desktop PCs morphs from laptops into something truly portable, Tredennick predicts that their design methods must shift from Zero Delay to Zero Power with, of course, Zero Cost in the background. He also notes that the increasing emphasis on lower power and lower selling price is a necessary and natural consequence of the physics.
Immediately after leaving Tredennick's talk, I discovered that a company formed around a novel 16-bit microcontroller architecture had gone bankrupt after two years of trying to clear the gantry. Although their product was intended for Zero Power applications, it evidently did not fall deep enough into the Zero Cost domain to succeed. It seems that designs that outgrow 8-bit CPUs go directly to 32-bit land, even though you'd think 32-bitters cost more than 8-bitters.
What's happening is that all those old fab lines can produce trailing-edge parts far less expensively than a new fab line can hammer out new chips. Think about it: Amortizing a $2-billion fab line takes a lot of $1 chips! Tredennick showed that the sweet spot lies well behind the cutting edge; expect lots of good deals on small, Zero-Cost microprocessors.
One interesting topic appeared in several ESC and SD presentations: C++ has matured to the point where (teams of) mere mortals can actually produce decent embedded applications. This was untrue two or three years ago, so sometimes good things come to those who wait. If, of course, a market window hasn't slammed shut in the interim.
C++ sports both virulent design-by-committee flaws and slick features that can greatly simplify embedded programming. The difficulty of employing the latter without running afoul of the former has given C++ a deservedly bad rep in embedded circles, even before you consider that different compilers implement the same language features not only differently, but incompatibly.
Dan Saks's full-day "Raising the Level of Low-Level I/O in C++" tutorial showed that C++ "can be as good as the ugliest C you can write." A thin layer of code that provides meaningful names and operations for the underlying hardware I/O can go a long way to making higher level programs more usable and less error prone. Best of all, current compilers optimize nearly all of the resulting code down to hand-tuned efficiency, so the additional syntactic protection doesn't actually cost anything at all.
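In that spirit, here's a minimal sketch of such a thin layer (the register layout, bit assignment, and names are hypothetical, not from Saks's tutorial): callers get meaningful operations instead of magic addresses and bit masks, and a decent optimizer reduces it all to the same loads and stores you'd write by hand.

```cpp
#include <cstdint>

// Hypothetical: wraps a one-byte control register so callers use named
// operations instead of raw addresses and bit twiddling.
class StatusLed {
public:
    explicit StatusLed(volatile std::uint8_t* reg) : reg_(reg) {}
    void on()          { *reg_ |= kEnableBit; }
    void off()         { *reg_ &= static_cast<std::uint8_t>(~kEnableBit); }
    bool is_on() const { return (*reg_ & kEnableBit) != 0; }
private:
    static const std::uint8_t kEnableBit = 0x01;
    volatile std::uint8_t* reg_;
};
```

In real firmware the constructor argument would point at a fixed hardware address; taking it as a parameter also keeps the class testable off-target.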
He does, however, point out that you absolutely must measure the results, both by timing and by reading the assembler code, before leaping to any conclusions. You may find that a specific programming idiom that works fine on one system becomes a complete dog on another. For example, although a spiffy new C++ const variable may look more stylish than an old-fashioned C-style #define constant, the former probably requires an expensive memory fetch or two rather than an immediate operand load...unless your compiler can optimize the variable away.
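The two idioms look like this (the names are mine, for illustration); which one costs a memory fetch depends entirely on your compiler and optimization level, which is exactly why you read the listing instead of guessing:

```cpp
#define TIMEOUT_TICKS_MACRO 1000  // preprocessor constant: always folds
                                  // into an immediate operand

const int kTimeoutTicks = 1000;   // C++ const: type-checked and scoped, but
                                  // it may occupy memory unless the
                                  // optimizer propagates the value
```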
The number of possible gotchas increases as your code approaches the hardware, justifying an insulating layer to encapsulate those problems. For example, suppose you must perform a wait loop while accessing a particular register: If you forget the volatile keyword on the spin-loop variable, the compiler may simply optimize the whole loop out of existence. You want that sort of thing in exactly one place to keep control over it!
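A sketch of that single, well-guarded spin loop (the register name and ready bit are hypothetical):

```cpp
#include <cstdint>

// Without `volatile`, the compiler may read the register once, conclude
// the value can never change, and turn this into an infinite spin -- or
// delete the loop entirely.
void wait_until_ready(volatile std::uint32_t* status_reg) {
    while ((*status_reg & 0x1u) == 0) {
        // spin: the volatile qualifier forces a fresh read each pass
    }
}
```

On real hardware `status_reg` would be a fixed address, something like `reinterpret_cast<volatile std::uint32_t*>(0x40000000)`; passing it in keeps the sketch self-contained.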
The consensus of one after-hours group was that C++ remains sufficiently difficult to use that only horizontal-market applications can afford its complexity. Vertical-market line-of-business applications will use either Java or C# because programmers can produce useful work more easily (pronounced either "faster" or "cheaper" depending on your boss).
It seems to me that large-scale embedded apps may be forced into C++ simply because it provides both decent performance and mainstream acceptance (pronounced "people who can recognize the syntax").
Although Java has also matured into a useful language, its applicability to embedded and real-time projects remains up for grabs. Paul Tyma reports that Sun's HotSpot Java Virtual Machine does an outstanding job of optimizing code, but that it's still not quite in the same league as C++, let alone gnarly C.
The difference narrows as you add features to your C and C++ code that are built into the Java language and JVM. As nearly as I can tell, if your application can withstand the code size and speed issues of Java, you can wring more performance out of the same hardware with C++.
That consideration is vital in the Zero-Cost domain because throwing in a few more megs of RAM or a faster CPU adds direct product cost. There's a reason half of the microprocessors sold every year have an 8-bit ALU!
Holes In Space
Both shows featured tutorials and classes on software reliability, which Bruce Douglass of iLogix pointed out is something of a misnomer: Software is utterly reliable because it always does the same thing given the same inputs. You're actually interested in the number of cases where it does not do what the specs say it should or, alas, where the specs leave enough leeway that the code doesn't catch a problem.
He emphasized a point that I wish would appear in every programming book: "Make it work correctly, then make it work fast." Most go-fast optimizations have surprisingly little effect, so you must measure the code before and after each change to avoid creating slow and obscure code.
Scott Meyers described a catastrophic crash in a computer-controlled sawmill. It seems the diameter of logs from an old-growth section of forest exceeded (dramatic pause) 65.535 inches. The failure mode involved force-feeding the uncut log away from the blade, completely through the operator's enclosure. Fortunately, the sawyer was alert, exceedingly fleet of foot, and escaped without injury.
Evidently, the resolutely 16-bit software, which had been running successfully for nearly two decades, didn't check for all possible overflow conditions. After all, who could imagine a log more than 5 feet in diameter? That's an example of what Meyers calls "Keyhole Problems," wherein software enforces an arbitrary limit on the real world. The results range from annoying to deadly.
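A hypothetical reconstruction of that keyhole (the units and function names are mine, not from the sawmill's code): store a diameter in thousandths of an inch in a 16-bit word, and anything past 65.535 inches silently wraps around.

```cpp
#include <cstdint>

// Unchecked conversion: wraps modulo 65536, so a 66-inch log
// "measures" well under half an inch.
std::uint16_t to_mils(double inches) {
    std::uint32_t mils = static_cast<std::uint32_t>(inches * 1000.0);
    return static_cast<std::uint16_t>(mils);
}

// Checked conversion: refuses anything the 16-bit field cannot hold,
// forcing the caller to confront the keyhole instead of ignoring it.
bool to_mils_checked(double inches, std::uint16_t* out) {
    if (inches < 0.0 || inches > 65.535)
        return false;
    *out = static_cast<std::uint16_t>(inches * 1000.0 + 0.5);
    return true;
}
```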
Chuck Allison explained how to use the C++ exception facility in embedded systems. He observes that nobody checks return codes from library functions (but good old printf can and does fail, particularly in embedded systems!) and that exiting an embedded program can be embarrassing at best. To forestall that situation, you absolutely must build error handling into your system from the beginning and C++ provides a surprisingly clean and efficient (in the run-time sense) way to make that happen.
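A minimal sketch of the idea (the sensor function and its failure mode are hypothetical): unlike a return code, a thrown exception cannot be silently dropped, so some handler must decide what to do.

```cpp
#include <stdexcept>

// Hypothetical device read: failure raises an exception that somebody
// *must* handle, instead of returning an error code that callers are
// free to ignore.
int read_sensor(bool hardware_ok) {
    if (!hardware_ok)
        throw std::runtime_error("sensor read failed");
    return 42; // placeholder reading
}
```

A top-level `try`/`catch` can then log, retry, or drop into a safe state rather than exiting the program.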
The story of the first Ariane 5 launch in 1996 appeared in several talks as an exemplar of how a project with all the right buzzwords can go badly wrong. After all, the Ariane 5 booster featured code and hardware reuse, extensive testing, and all the Right Embedded Stuff. The failure traced back to inertial-reference software reused from Ariane 4, where converting a 64-bit floating-point value to a 16-bit integer overflowed. Catching that exception might have been of interest, but when you're two miles beyond the gantry and accelerating upward at full throttle, you can't just reboot that sucker.
Closer to home, the Boston Sheraton hotel features high-speed Internet access delivered by Cat-5 cable directly to each room for $9.95/day. One speaker discovered that the hotel network does not include a firewall: His PC was penetrated from, as nearly as he could tell, Brazil. He fought the intruder to a standstill shortly before his talk, although his laptop was somewhat the worse for wear. He always travels with a mirrored hard drive for just such situations.
The reliability and design talks were well attended, which I regard as a Good Thing. People and companies seem increasingly interested in discovering more about how to design solid projects and write good code, so perhaps we're turning the corner on the Bad Old Days.
The CMP Media empire puts on both the Embedded Systems and Software Development shows. DDJ, a CMP publication, picked up the tab for a week in Boston at the shows. CMP hasn't done any arm-twisting.
You may still find traces of the 2002 shows at http://www.esconline.com/ and http://www.sdexpo.com/. If not, take a look at the upcoming events, which will likely feature similar presentations.
Tredennick's keynote speech slides are at http://www.esconline.com/db_area/bos02keynote.pdf. Scott Meyers has more information at http://www.aristeia.com/, Bruce Douglass is at http://www.ilogix.com/, Chuck Allison is at http://www.freshsources.com/, Paul Tyma is at http://www.preemptive.com, and Java HotSpot at http://java.sun.com/products/hotspot/.
A fine write-up of the Ariane overflow is at http://www.around.com/ariane.html, with more at http://www.rvs.uni-bielefeld.de/publications/Reports/ariane.html.
The Privoxy (Privacy Proxy) software (http://www.privoxy.org/) provides site-wide ad filtering, popup suppression, and similar functions from a single Linux or Windows machine on your network. It is not a firewall, so a firewalling router for your cable modem remains a Very Good Idea, indeed.