On Engineering

Licensing, quantitative software engineering, and the demise of his antediluvian printer are on Ed's mind.


May 16, 2006
URL: http://www.drdobbs.com/embedded-systems/on-engineering/187203740

Ed is an EE, a PE, and author in Poughkeepsie, NY. Contact him at [email protected] with "Dr Dobb's" in the subject to avoid spam filters.


THIS PAST MARCH, I described why I let my Professional Engineering registration expire for reasons including the irrelevance of the continuing education requirements to anyone who's not a civil engineer. Several readers offered further views that they've allowed me to share and a recent book offers some insights. I'll wrap up with a brief critique of product engineering gone awry.

Why a PE?

Two readers remind me that PE licenses are not restricted to degreed engineers, as documented at www.op.nysed.gov/pelic.htm. If you have an ABET-accredited degree, you must have four years of experience to sit for the PE exam, but, lacking such a degree, the FE and PE exams each require six years of experience. The New York State (NYS) rules warn that "The quality of the experience, not merely the calendar time, will be evaluated."

Dan Samber aptly summarizes the majority opinion: "By far, the greatest benefit for me has been simply having some initials to put at the end of my name (which is huge in the medical research environment where I work, except that most everyone thinks that PE stands for 'Pulmonary Embolism')."

Tom Székely, PE, with a résumé to die for, observes "if you're a software engineer who writes code for, say, the Air Traffic Control system..., you probably need to be licensed...ditto if you're a[n] MCSE responsible for networking [those] computers."

However, the distinction between noncritical and life-threatening applications may be quite hard to discern, particularly in large-scale systems. For example, the 2003 Northeast power-system collapse showed that small, local causes, whether neglected tree-trimming or failures in load monitoring, can trigger a complete loss of power across several states. It's not clear to me that licensing the programmers producing the firmware inside cherry picker lifts or power monitors would have any effect on the outcome.

It's equally unclear, to me and several correspondents, whether the entire licensing apparatus has had any real positive effect on the profession. The NYS board used to send out a newsletter documenting enforcement actions, most of which involved folks accused of performing unlicensed engineering or surveying. The cases involving professional malpractice were few and far between, which means that engineers are unusually ethical (possible) or that enforcement is unusually relaxed (probable).

Dave Lynch, who has built both targets and weapons systems, concurs. Commenting on registration for architects, he notes that "At best, professional licensing is like SATs—a test of endurance, ambition, and will, a barrier to entry with at most minimal applicability to one's actual skill within the profession. I do not see that value as sufficient."

He suggests a specialized licensing structure, confined to life-safety projects, where you're examined and approved on your knowledge of specific topics. For example, even with a PE license, you may not design earthquake-resistant structures without demonstrating your mastery of that subject.

State boards license engineers for work performed within each state, with no corresponding national-level requirements. I had, at one time, three separate PE licenses, each with slightly different requirements and, of course, three different fees. Which state board should license programmers writing software for use in every state?

Several PEs made scathing remarks about MCSE-style troubleshooters who lack the background or breadth of knowledge one would expect from even a specialized engineer. Indeed, I just saw a description of an MCSE prep course that promises a dozen different certifications after an intensive 15-day boot camp. The prerequisite seems to be two years of prior experience and, perhaps, an A+ tech certificate. This type of certification seems overtly tied to specific products, rather than covering a broad range of general knowledge, but that seems typical of a field without a broadly accepted body of knowledge.

The Craft of Code

Andrew Todd sent a lengthy essay on a key difference between engineering fields:

...Electricals and Mechanicals build movable objects by mass production. Civil engineers, on the other hand, build "one-offs" on site...Construction is the last medieval industry.
In the mass-production industries, federal regulation supersedes state professional regulation...state licensing practically goes hand-in-hand with state patronage.

An interesting aspect of Article 145 of the NYS Education Law, Section 7211.1.d, "Mandatory continuing education for professional engineers" (www.op.nysed.gov/article145.htm), bolsters his argument:

Professional engineers directly employed on a full-time basis by the state of New York, its agencies, public authorities, public benefit corporations, or local governmental units prior to January first, two thousand four and who are represented by a collective bargaining unit, at all times when so employed shall be deemed to have satisfied the continuing education requirements of this section.

So, oddly, the NY Professional Engineers who design structures used by the public are exempt from the state's continuing education requirement, while PEs who design privately owned buildings must stay up to date. Should you think licensing software developers will produce greater safety, pay careful attention to how well-intentioned rules play out in practice.

Software combines medieval craftsmanship with mass production in a peculiar mix: Individual practitioners hammer out much of the code, using nothing more complex than a text editor, while duplicating the finished product incurs essentially no additional expense. The overall process, however, lacks much of the design rigor taken for granted by engineers.

For example, one of my uncles (yes, a PE) drew up the plans for his church's renovation and expansion. The excavation contractor asked him for permission to raise the project's grade by six inches to balance the cuts and fills, thus eliminating earth haulage. Translated, that meant they could avoid trucking dirt to or from the project by simply raising the overall ground level by half a foot. He readily agreed.
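The contractor's proposal is exactly the kind of claim an engineer can verify with napkin arithmetic. Here's a sketch of that check; only the six-inch raise comes from the story, while the one-acre site area is my own hypothetical figure:

```python
# Back-of-the-envelope check of the contractor's proposal: raising the
# grade converts would-be haul-away spoil into on-site fill.
# Only the half-foot raise is from the anecdote; the site area is made up.

def fill_volume_cubic_yards(site_area_sqft: float, raise_ft: float) -> float:
    """Extra fill created by raising the grade, in cubic yards."""
    cubic_feet = site_area_sqft * raise_ft
    return cubic_feet / 27.0        # 27 cubic feet per cubic yard

# A hypothetical one-acre site (43,560 sq ft) raised by half a foot:
volume = fill_volume_cubic_yards(43_560, 0.5)
print(f"{volume:.0f} cubic yards stay on site")   # about 807 cubic yards
```

At typical trucking rates, that's hundreds of dump-truck loads that never leave the site, which is why the question was worth asking before the first shovel hit dirt.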

Imagine a programming contractor proposing that increasing average interrupt latency by 10 ms would reduce the module size by 5 percent and improve the error rate by 3 percent. When was the last time you saw precise numeric relations between various aspects of a programming project, before the coding began, that actually worked out as intended?

While we can observe relationships among timing, code size, error rate, and so forth after the fact, that data has essentially no predictive value for future projects. Worst of all, there's no way to quantify the effect of a small change in one part of the project on any global property: The smallest changes can (and, alas, often do) have catastrophic consequences.

In both the real and programming worlds, the sum of many small changes need not be equivalent to one large change. What's different with programming is the inability to anticipate the effect of a small change, because everything is deeply connected. Simply updating a project's compiler to a new minor version can convert unchanged source code into dysfunctional rubble, roughly the equivalent of discovering that installing new bolts on a bulldozer blade renders it unable to move earth.

Andrew's observations suggest that programming is ripe for both state licensing and federal regulation. I fear the worst of both worlds: Various states will require programmers to pass irrelevant licensing exams, coupled with overall federal requirements for code quality and performance that simply cannot be met by any software project management tools at our command. Hope, however, springs eternal.

By the Numbers

A copy of Trustworthy Systems Through Quantitative Software Engineering, by Lawrence Bernstein and C.M. Yuhas (ISBN 0471696919) recently flew over the transom. I had some trouble opening this one, if only because "Trusted Computing" has poisoned the namespace, but they use "trustworthy" with its original sense of reliability or responsibility.

The book forms the basis of an undergraduate course at Stevens Institute of Technology and thus lacks the encrustation of Greek variables often decorating higher-level academic tomes. What it does have are many, many case studies, historical notes, and examples of how projects both succeed and fail. That seems a proper trade-off, as a preoccupation with mathematical formalism often masks a woeful lack of real-world applicability.

They observe that "Most current software theory focuses on its static behavior by analyzing source listings. There is little theory on its dynamic behavior and its performance under load. [...] Software engineers cannot ensure that a small change in software will produce only a small change in system performance."
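Their point fits in a few lines of code. This is my own illustration, not the book's: swapping a set for a list is a one-word source change, yet it turns each membership test from constant-time into a full scan, and the loop as a whole from linear to quadratic. The counters tally worst-case element comparisons rather than wall-clock time:

```python
# A "small" software change with outsized performance consequences:
# deduplicating with a list instead of a set is a one-word edit, but the
# total work grows from n membership checks to roughly n^2/2 comparisons.

def dedupe(items, use_set=True):
    """Remove duplicates, counting worst-case membership-test comparisons."""
    seen = set() if use_set else []
    out, comparisons = [], 0
    for x in items:
        # A set lookup is effectively one probe; a list scan may touch
        # every element already seen.
        comparisons += 1 if use_set else len(seen)
        if x not in seen:
            out.append(x)
            if use_set:
                seen.add(x)
            else:
                seen.append(x)
    return out, comparisons

data = list(range(1_000))                  # 1,000 distinct items
_, fast = dedupe(data, use_set=True)
_, slow = dedupe(data, use_set=False)
print(fast, slow)                          # 1000 versus 499500
```

Both versions produce identical output, so no static analysis of correctness would flag the change; only the dynamic behavior under load gives it away.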

The design of a program, unlike that of a physical gadget, is essentially the program itself. The tantalizing prospect of executable specifications means that the design becomes the program, ideally without human intervention. Given complete specifications and a good design, the program should work correctly, right?

"Magic Number!" boxes throughout the book summarize important values, one of which notes that "Only 40-60 percent of the system requirements are known at the start of the project. The rest emerge from studies of system use."

Unlike a construction project, where you can compute the effect of moving ever-so-many cubic yards of earth from here to there, a software project manager simply cannot estimate the effect of a seemingly small specification change. When the sum total of those changes equals the original specs, it's a wonder that any software projects reach a successful conclusion!

Another interesting aspect of software projects is that they tend to outlive the tools used to produce them. A recent foray through my heap of code showed that essentially none of it could be compiled today without significant changes, even for programs written in nominally standard languages. Large-scale projects with careful attention to portability may fare better, but I'm sure every organization has at least one "read-only" program in regular use: None dare meddle lest it seize up.

The design documentation for a software project often falls by the wayside, being relegated to three-ring binders in somebody's office. Fast-forward two decades: The guy retires and those binders quietly vanish, either into the dumpster or to his shelves at home as a memento.

Online documentation seems to offer a way out of that trap, but once again software's evanescent nature works against you. Two decades ago, Wordstar seemed like a good bet for a permanent document standard and, if not, then surely WordPerfect. Trust your doc to a proprietary program, fast-forward two decades, and you're sunk.

In short, read the book to get an overview of how tough software design really is. I hold that "software engineering" will remain an oxymoron until project managers can make quantitative trade-offs based on real numbers, which is certainly not true today.

HP2000C: End of the Era

Back in September 2002, I detailed my experiences with a Hewlett-Packard HP2000C inkjet printer. I managed to keep that hulk limping along for another four years, swapping parts cannibalized from three printers sent by another disgruntled HP customer. After the ensuing Frankenprinter obliterated a pair of brand-new printheads (which I bought in the vain hope that it would continue working), I decided that enough was enough.

These printers rated rather low in customer satisfaction, at least to judge from the many despairing messages in various user forums over the years. The drivers for Windows XP and Linux provide bare-bones functionality that reduces debugging to blindly trying new ink cartridges and printheads at $30+ a pop. If the failure recurred, well, the problem must be in the printer. Unfortunately, the printer could erroneously mark the consumables "invalid" so they would not work in any other HP2000C printer.

The printer's mechanical design has a weak spot that exacerbates the problem. An ink pump inside each cartridge, powered by cam-driven plungers in the printer's ink station, moves fluid through the long tubes from the pump station to the printheads. Four stout tension springs maintain pressure on the plungers.

As shown in Figure 1, the springs were stout enough to rip their anchor pylons right out of the base plate or snap off the little hooks at the top. This could be due to an overestimate of the plastic's strength or an overoptimistic stress calculation. Such problems should be familiar to any software designer, although the notion of incorrect numeric results might be novel.
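The check that evidently went wrong is simple enough to sketch. Every number below is hypothetical; only the failure mode, a stout spring tearing out a plastic anchor, comes from my printer:

```python
# A rough version of the stress check HP's designers presumably ran,
# with made-up numbers: compare the tensile stress the spring puts on
# its plastic anchor against the plastic's strength, derated because
# molded plastic creeps and fatigues over years of service.

def anchor_safety_factor(spring_force_n: float,
                         anchor_area_mm2: float,
                         yield_strength_mpa: float,
                         derating: float = 0.5) -> float:
    """Ratio of derated material strength to applied stress; >1 is 'safe'."""
    stress_mpa = spring_force_n / anchor_area_mm2   # N/mm^2 equals MPa
    return (yield_strength_mpa * derating) / stress_mpa

# A hypothetical 40 N spring on a 2 mm^2 anchor of ABS (~40 MPa yield):
print(f"{anchor_safety_factor(40, 2, 40):.1f}")     # 1.0 -- no margin at all
```

A safety factor of 1.0 means the part survives the nominal load and nothing more; either an optimistic strength figure or an underestimated spring force puts it below 1.0, which is roughly what the carnage in Figure 1 suggests happened.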


Figure 1: The white plungers at the top of the picture power the pump inside each HP2000C ink cartridge. The springs maintaining pressure on those plungers tend to rip their plastic anchors out of the base plate.

In any event, the mechanical failure presents itself as an empty ink cartridge, with replacement cartridges becoming empty almost immediately after insertion. The firmware's error codes do not include "Mechanical failure" because, obviously, there's no way for it to detect such a thing. Arriving at the correct diagnosis could use up many perfectly good cartridges.

Of course, my printer's plastic first failed a few months out of warranty and again a year or so later. Being that sort of bear, I repaired the fractures with liberal doses of epoxy and aluminum sheet. Most customers aren't willing to do that, however, and I suspect many simply scrapped their printers rather than returning them to the HP repair center.

The lesson to be learned from this is that your gadget's design must keep the user's interests in mind, even during abnormal conditions that you never expected. Firmware has a very limited worldview and should not make irrevocable decisions that impose economic hardship on your customers.

The entire justification for putting logic chips into ink tanks and printheads seems to boil down to forcing customers to buy replacements directly from the OEM. It seems to me that bad ink won't damage the printer in any way that can't be cured with a new printhead, which they'd surely buy from that same OEM anyway. That may be a justifiable business plan, but the firmware should never, ever destroy the function of new consumables based on problems that lie elsewhere.

While it's entirely possible that HP's current-production Business Inkjet 2300 has more sensible internal logic and better mechanical stability, I'll never know for sure. The HP2000C has pretty much destroyed my confidence in HP's legendary high-quality engineering.

Drop me a note if you can use a box of HP No. 10 ink cartridges, a bunch of dead No. 10 printheads, and four nearly full pints of refill ink. If you can use some (presumably dead) HP2000C printers, too, I have a stockpile. First responses before July 2006 get first pick, you pay shipping, and nothing's guaranteed to be anything in particular.

Reentry Checklist

IEEE Spectrum regularly reports on power system issues at www.spectrum.ieee.org.

The Marx Brothers' "Why a Duck" skit from Cocoanuts may remind you of some recent legal shenanigans. Find it at archaeology.about.com/blmarx.htm.

More on Wordstar at en.wikipedia.org/wiki/Wordstar and WordPerfect at en.wikipedia.org/wiki/Wordperfect.
