Security Measures


February, 2005: Security Measures

Ed's an EE, PE, and author in Poughkeepsie, NY. Contact him at [email protected] with "Dr Dobbs" in the subject to avoid spam filters.


The National Cyber Security Alliance recently released the results of a study showing how difficult it is to achieve even minimal security on consumer PCs. The disparity between users' expectations and reality should be particularly scary for folks working on embedded consumer products.

As of last October, about 3/4 of the surveyed users felt reasonably safe from the usual assortment of online threats, despite the fact that 2/3 had outdated antivirus software and 2/3 (presumably a different subset) lacked firewall protection. Essentially none had any protection against adware or spyware; in fact, 9/10 hadn't a clue what those terms mean.

The result is predictable: 2/3 have experienced at least one virus infection, 1/5 have a current infection, and 4/5 have adware or spyware infestations. The NCSA actually measured those numbers by sending technicians to examine the PCs. The study did not report on worms or back-door programs, perhaps lumping those with viruses.

Many of the talks at last year's Embedded Systems Conference dealt with security techniques and practices, but the ground truth indicates the battle has little to do with technology. Let's compare expectations and requirements with the actual results.

Boot, But Verify

Transactions, pretty much by definition, have at least two sides. In the PC realm, we've been mostly concerned with verifying that the server on the far end is what it claims to be, taking for granted that the PC end is honest. That level of trust is certainly unjustified now, and for distributed embedded systems, it verges on wishful thinking.

Several ESC talks described the requirements for a Trusted Computing Platform, one that is known to contain only valid hardware and software. During the power-on sequence, the hardware must verify that it has a correct, trusted configuration, using only known-good software that is either internal to the CPU or protected by a cryptographic hash stored inside the CPU.

The verification extends well beyond the familiar BIOS sanity checks that most people disable to reduce boot time. The CPU must ensure that no potentially malicious devices are present, perhaps by preventing further execution if the system contains hardware that's not on a list of preapproved devices.

After the hardware checks itself, it must verify that the main system software is correct. Because that code is generally too large for on-chip storage, the hardware uses more crypto hashes. The entire software image or just key components can be hashed, with the obvious space-versus-time tradeoffs.
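As a minimal sketch of that check, assume OpenSSL's SHA1() stands in for whatever hash engine the silicon actually provides and that a known-good digest was written to on-chip storage at manufacturing time; both are illustrative assumptions, not TCG specifics. The check then reduces to hash-and-compare:

#include <string.h>
#include <openssl/sha.h>

/* Known-good digest in on-chip storage; the all-zero value is
   a placeholder for whatever gets burned in at the factory. */
static const unsigned char trusted_digest[SHA_DIGEST_LENGTH] = {0};

/* Return nonzero if the off-chip firmware image hashes to the
   trusted value. The boot code refuses to jump to the image
   on a mismatch. */
static int firmware_is_trusted(const unsigned char *image, size_t len)
{
    unsigned char digest[SHA_DIGEST_LENGTH];

    SHA1(image, len, digest);
    return memcmp(digest, trusted_digest, SHA_DIGEST_LENGTH) == 0;
}

A burned-in digest freezes the firmware forever, so real designs generally store a public key instead and verify a signature over the hash, which lets legitimate updates through.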

Once you have a system running trusted software atop trusted hardware, you can be reasonably sure that the entire configuration will behave consistently, if not necessarily correctly. From that point onward, however, it must ensure that only trusted hardware is connected and only trusted software is loaded and executed, because a single exposure can lead to a successful exploit.

Based on a quick thumb check of the Trusted Computing Group's numerous papers and documents, I'd say the TCG's initial focus is on large organizations, both governmental and industrial, which can and must maintain tight control over their IT facilities. The cost of virus and worm episodes has become so great that any alternative seems preferable to the status quo.

Unfortunately, high security presents a considerable challenge for IT departments that normally deal with commodity hardware. It's feasible, if not easy, to keep thousands of systems running with fungible components. It's essentially impossible to maintain thousands of unique configurations with mutually incompatible components.

The Trusted Platform Module (TPM) specs describe methods to implement nearly any security model you'd like, but getting it right requires considerable up-front planning and rigorous follow-through. That level of attention to detail may be feasible for large institutions, but the rest of us simply can't cope.

The NCSA numbers reveal how difficult the challenge will be. Only 6 percent of users thought their PCs had a virus at the moment, 44 percent claimed to have clean PCs, and half simply didn't know. The NCSA technicians actually found 25 percent of dial-up and 15 percent of broadband PCs had at least one virus. The average infection rate was 2.4 viruses per machine and the maximum was 213 (yes, on a single PC!).

Enforcing a security model on unwilling participants is a recipe for disaster.

Trouble in Paradise

Let's assume that all hardware must have a known provenance before it can be used by a trusted system. This may be feasible for embedded systems with tight access controls, but is probably impossible in the general case.

Suppose, for example, that the system uses a generic PC-style keyboard with a PS/2 interface. Each keyboard could include a unique identification number that would be accepted only by a specific CPU. Alternatively, all approved keyboards could include a common identifier acceptable to any system. The ID must be wrapped in crypto to prevent spoofing or unauthorized duplication.
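One plausible shape for that wrapper is a challenge-response exchange, sketched here with OpenSSL's HMAC; the shared secret, message layout, and function names are all invented for illustration, not taken from the TCG documents. The host sends a fresh random nonce, the keyboard returns a keyed hash of it, and the host checks the reply against its own computation. The fresh nonce keeps a recorded exchange from being replayed.

#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <openssl/rand.h>

#define NONCE_LEN 16
#define MAC_LEN   20    /* HMAC-SHA1 output size */

/* Hypothetical secret provisioned into both host and keyboard.
   Getting it there securely is the hard part, hand-waved here. */
static const unsigned char kbd_secret[] = "not-a-real-key";

/* Host side: issue a fresh random challenge. Returns 1 on success. */
static int make_challenge(unsigned char nonce[NONCE_LEN])
{
    return RAND_bytes(nonce, NONCE_LEN) == 1;
}

/* Keyboard side: answer with HMAC-SHA1(secret, nonce). */
static void answer_challenge(const unsigned char nonce[NONCE_LEN],
                             unsigned char mac[MAC_LEN])
{
    unsigned int len = MAC_LEN;

    HMAC(EVP_sha1(), kbd_secret, sizeof kbd_secret - 1,
         nonce, NONCE_LEN, mac, &len);
}

/* Host side: accept the keyboard only if its reply matches. */
static int keyboard_is_trusted(const unsigned char nonce[NONCE_LEN],
                               const unsigned char reply[MAC_LEN])
{
    unsigned char expected[MAC_LEN];

    answer_challenge(nonce, expected);
    return memcmp(expected, reply, MAC_LEN) == 0;
}

A per-unit secret implements the unique-ID scheme; a fleet-wide secret implements the any-approved-keyboard scheme, with the obvious catch that extracting the key from one keyboard compromises them all.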

But that's not enough. To maintain security, your keyboards must be tamper-evident, if not tamper-proof, to prevent the introduction of unauthorized hardware. Consider what would happen if someone added a hardware recorder either inside the keyboard case or between the keyboard and the system: It would capture all manual data going into the system.

There is simply no electrical way to detect a hardware keystroke recorder between the keyboard and the PC, which means once it's inserted, it won't be found unless someone specifically looks for it. Commercial recorders monitor the data stream on the PS/2 wires for the keystroke sequence that activates them, but that's entirely optional. Any decent hardware engineer could cook up a recorder that sniffs data from the wires, depends on external hardware for a data dump, and fits neatly on your thumbnail.

Here's an example of how much damage a simple logger can cause: In 2001 and 2002, Juju Jiang installed a software keystroke logger on PCs in various Kinko's copy centers, then retrieved the logs using the PC in his apartment. Several Kinko's customers had accessed their home PCs using a commercial remote-access program and Jiang's logger snagged their user IDs and passwords. He later used that information to access their PCs, open new accounts using their personal information, and transfer money from their existing accounts.

Jiang was only caught because one of the victims noticed his PC creating an account on its own, acting "as if by remote control" (which was, in fact, the case). There was enough evidence to trace the intrusion to Jiang, indict, prosecute, and convict him.

The same reasoning applies to any peripheral: If it's not mechanically secured and physically guarded, you must assume its data stream can be monitored, either in real time or recorded for later recovery. Keyboards are the obvious weak spot, but USB, serial, and parallel ports provide nice access points, too. While a hardware exploit requires two intrusions, one to plant the device and a second to recover it, the actual data collection doesn't depend on easily discovered software hacks.

The specs for a Trusted Platform Module require tamper-resistant or tamper-evident packaging, but do not define the physical security requirements of the entire system. For widely distributed systems, whether PCs or embedded devices, it's safe to assume they have no physical security at all and that the hardware can be compromised at will. That makes internal security measures moot, as they can be bypassed with enough time and patience.

The Jiang case and the NCSA study both show that most people simply don't notice unexpected behavior, never mind surreptitious information leakage. Only half of the users said that their machines had adware or spyware programs, while NCSA technicians found 9/10 of dial-up and 3/4 of broadband PCs were infected. The average infected machine had 93 "adware/spyware components," whatever those are, and the worst-hit machine had 1059 (!) components.

Although the price of freedom may be eternal vigilance, it's obvious that we cannot depend on ordinary folks for sufficient attention to the details of secure computing.

Business Opportunities

My friend Aitch recently described an embedded-system business opportunity based on the convergence of two trends. While it's not exactly legal, there's excellent upside potential. Send me a suitcase of small bills if it works for you.

It seems there's a growing underground of automotive hotrodders who trick out their cars with all manner of performance-enhancing gadgets—turbochargers, superchargers, water injectors, you name it. They also modify the engine control computer firmware to further improve performance. As you might expect, these modified engines tend to poot out somewhat more unwanted emissions than their factory-stock brethren.

Many states or urban areas now include pollution measurements as part of their annual automobile inspections. Those measurements require complex dynamometers, expensive buildings, and trained technicians, but, now that all vehicles have on-board engine controllers, why not just take advantage of the information available through the standard OBD (On-Board Diagnostic) connectors in every car? If the engine controller reports no faults or unusual situations, the car passes the inspection and taxpayers save the cost of all that infrastructure. What's not to like?

Here's the deal. Stick a cleverly coded microcontroller on the back of an OBD connector and sell it to the hotrodders, who tuck the official OBD connector up inside the dashboard, put the gimmicked OBD connector in its place, and drive off to their car inspection. The microcontroller responds to the inspection station's inquiries with "Nothing to see here. Move along, move along."

Aitch suggests that $100 retail would move OBD spoofers off the loading dock at warp speed.

Many ESC speakers observed that security must be included in the overall system design and that it cannot be grafted on after the first exploit goes public. The OBD system started as a simple diagnostic tap but is becoming part of a legislated system with legal force behind it. If you think Windows has security flaws, think about that situation for a moment.

Pop quiz: Estimate the number of test instruments that must be replaced in order to accommodate trusted engine computers and inspection stations. Extra credit: Estimate a lower bound for the half-life of the existing engine population.

Back at home, it's widely agreed that Microsoft sells XBox gaming consoles at a loss to gain market share. Because MS also derives royalties from each unit of game software to offset the hardware loss, the XBox includes rather stringent controls on just what software it will accept, to the extent that it spits out unapproved CDs or DVDs. Consider it as a first approximation to a trusted computing platform.

Days after the XBox first appeared on the market, Bunnie Huang figured out how to bypass the boot-code protection hardware. His methodology, presented at the 2002 Cryptographic Hardware and Embedded Systems conference, should be required reading for anybody trying to distribute trusted hardware in a consumer device. Summary: It won't work.

During the ensuing arms race, someone discovered a security hole in the "save game" feature of several XBox programs: You could save a game to a memory card, replace the file with something completely different, and reload it. The contents of the saved file, perhaps a Linux boot image, then run without catching the attention of any of the security gadgetry. Of course, that hole isn't present in current XBoxes.

Isn't it interesting that Microsoft patches economic leaks immediately, while leaving some security holes to languish for years?

Hash Happens

Most security measures depend on cryptographic hashes to ensure the integrity of large messages (or files or documents or whatever). A good hash algorithm has two key attributes: The probability that two messages will produce the same hash value ("collide") is low and the difficulty of finding such a colliding message is high.
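To put rough numbers on "low" and "high" (standard birthday-paradox arithmetic, not figures from the study or the specs): Against an ideal n-bit hash, a brute-force search stumbles onto some colliding pair after roughly 2^(n/2) attempts, while finding a second message that matches one particular hash takes closer to 2^n. For MD5's 128-bit output, that works out to about 2^64 and 2^128 operations, respectively, both well beyond brute force.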

Although it never made mainstream headlines, a team of researchers recently published an algorithm that produces pairs of distinct messages with the same MD5 hash value, far faster than the generic birthday bound allows. That collisions exist is not surprising; the pigeonhole principle guarantees it. The trouble arises from being able to construct one on demand. Moore's Law tells us that once something can be done, it will be done much faster in only a few years.

While this is not yet a catastrophic problem, it does show how you can get everything right and still have a security failure. If you've ever used an MD5 hash to verify a downloaded file, there's now the possibility (admittedly a small one, since it requires whoever prepared the original file to have built a colliding twin) that the file you get may not be the file you expect, even though the hash is bit-for-bit correct.
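For concreteness, here's roughly what that download check amounts to, sketched with OpenSSL's MD5 routines (the function name and layout are mine); the point of the preceding paragraphs is that a match from this test now proves less than it once did.

#include <stdio.h>
#include <string.h>
#include <openssl/md5.h>

/* Hash a file in chunks and compare against the published digest.
   Returns 1 on match, 0 on mismatch or I/O error. */
static int md5_matches(const char *path,
                       const unsigned char expected[MD5_DIGEST_LENGTH])
{
    unsigned char buf[4096], digest[MD5_DIGEST_LENGTH];
    MD5_CTX ctx;
    size_t n;
    FILE *fp = fopen(path, "rb");

    if (fp == NULL)
        return 0;

    MD5_Init(&ctx);
    while ((n = fread(buf, 1, sizeof buf, fp)) > 0)
        MD5_Update(&ctx, buf, n);
    fclose(fp);
    MD5_Final(digest, &ctx);

    /* A match proves only that the bits hash alike; after the
       collision results, that's weaker than it sounds. */
    return memcmp(digest, expected, MD5_DIGEST_LENGTH) == 0;
}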

The Trusted Computing Group chose SHA1 as the TPM hashing algorithm long before news of the MD5 compromise appeared. There are speculations about cracking SHA1, but, for the present, it seems secure.

Pop quiz: Assume you had sold a few hundred thousand gizmos that validate new firmware downloads using MD5 hashes and signatures. How would you convert the units to use SHA1, given that attackers can spoof your files? Essay: Describe how you'll get your users to care.

Reentry Checklist

Two things are quite clear: Consumers can't be trusted with security decisions and hardware cannot be secured from customers. The former implies that any security measures must not require user intervention; the latter implies that you cannot depend on trusted hardware in the field.

Therefore, your overall system must be resilient enough to survive the failure of any defense and any unit. The XBox exploit shows how even a fairly complex security system can be penetrated with surprising ease. If all the units are identical, then one exploit is all it takes. Scary thought, eh?

The NCSA study is at http://www.staysafeonline.info/news/safety_study_v04.pdf with the summarizing press release at http://www.staysafeonline.info/news/NCSA-AOLIn-HomeStudyRelease.pdf.

The Trusted Computing Group is at https://www.trustedcomputinggroup.org/home. The architecture document at https://www.trustedcomputinggroup.org/downloads/TCG_1_0_Architecture_Overview.pdf mentions that keyboards and mice must include validation credentials.

Details on a typical keystroke logger are at http://www.4hiddenspycameras.com/memkeystrok.html. The documents for the Juju Jiang case are at http://www.cybercrime.gov/jiangIndict.htm and http://www.cybercrime.gov/jiangPlea.htm. It seems Kinko's now restores the drive images each week, which is a step in the right direction.

A quick overview of OBD-III can be found at http://lobby.la.psu.edu/_107th/093_OBD_Service_Info/Organizational_Statements/SEMA/SEMA_OBD_frequent_questions.htm. The usual searches will produce hotrodding information.

Andrew "Bunnie" Huang performed the initial Xbox hacks and reported his findings at http://www.xenatera.com/bunnie/ proj/anatak/xboxmod.html. Pay particular attention to the trusted-computing part of his presentation at http://ece.gmu.edu/crypto/ches02/talks_files/Huang.pdf. Microsoft countered the saved-game bug in short order with an automatic update, as reported at http://www.theinquirer.net/?article=11548.

More on MD5 and SHA1 hashing is at http://www.secure-hash-algorithm-md5-sha-1.co.uk/. The original paper describing the MD5 break is at http://eprint.iacr.org/2004/199.pdf.

"Trust, but verify" was President Reagan's take on nuclear disarmament. If you don't recognize "Nothing to see here," get a Star Wars DVD set and discover how much movie special effects have improved in recent years. "And Now for Something Completely Different" was a 1971 Monty Python video compilation.

DDJ

