# Natural Computing

Designing a complex power plant is impressive enough, but a power plant operates in a non-adversarial environment. Consider next the chaos of markets. Prices change based on emotions, news events, and perhaps market manipulation. How could there be any patterns in all this? Self-taught finance guys Jake Loveless and Amrut Bharambe use genetic algorithms to design rules to help them trade treasury bonds. While engineers like Qualls use complex physical criteria to determine "fitness," Loveless and Bharambe use the most basic financial measures: high profit at low risk (variance). To determine whether a rule is good, they try it on historical data.

Here is how it works. Suppose that the following rule tested well on historical data:

If the 1-minute slope of the traded price is >5%, and the 5-minute slope of the traded size is >10%, then buy.

and profitable rule #2 states:

If the 2-minute slope of the traded price is >5% and the 10-minute slope of the traded size is <20%, then buy.

A combined rule might randomly interchange components of each of the two and end up with rule #3:

If the 2-minute slope of the traded price is >5%, and the 5-minute slope of the traded size is >10%, then buy.

Sometimes they modify a profitable rule by shifting one of its values -- for example, changing the value of the 2-minute slope attribute in rule #2 from 5% to 10%, resulting in rule #4:

If the 2-minute slope of the traded price is >10% and the 10-minute slope of the traded size is <20%, then buy.
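In code, the crossover and mutation steps above might look like the following sketch. The (attribute, comparison, threshold) encoding is an illustrative assumption for demonstration, not Loveless and Bharambe's actual representation:

```python
import random

# Rules #1 and #2 from the text, encoded as lists of
# (attribute, comparison, threshold) conditions -- an assumed encoding.
rule1 = [("price_slope_1min", ">", 0.05), ("size_slope_5min", ">", 0.10)]
rule2 = [("price_slope_2min", ">", 0.05), ("size_slope_10min", "<", 0.20)]

def crossover(a, b):
    """Randomly interchange corresponding components of two parent rules."""
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(rule, shift=0.05):
    """Shift the threshold of one randomly chosen component."""
    i = random.randrange(len(rule))
    attr, op, val = rule[i]
    child = list(rule)
    child[i] = (attr, op, val + shift)
    return child

random.seed(0)
rule3 = crossover(rule1, rule2)  # may reproduce rule #3 from the text
rule4 = mutate(rule2)            # shifts one threshold, as in rule #4
```

In real use, each candidate rule would then be scored on historical data for profit and variance, and the highest scorers bred again.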

Loveless constrains his rule-finding algorithm to look for rules with ten or fewer "attributes" (like slope times, percent changes and so on). He does this for two reasons.

• First, a rule that contains thousands of attributes would be useless, because it might never or rarely recur and would not survive statistical robustness tests.
• Second, a shorter rule has at least some prospect of being understandable.

Armed with a set of rules that had done well on historical data, they went live for the first time in 2006. They turned the algorithm on at 9:00 AM on a Monday and ran it for a short time. The system bought and sold \$10 million worth of securities. "I was sweating. I used to bring two shirts to work for the first couple of months. It was exhilarating in a jumping-out-of-a-plane sort of way," says Loveless. But it worked and continues to work.

Could they have found these rules in some other way? Perhaps, but consider the numbers. Loveless and Bharambe started with 28,000 attributes; searching for rules built from 10 of them entails exploring over a trillion trillion trillion possibilities. Of course, the genetic algorithm doesn't explore all those possibilities either, nor does it guarantee finding the best one, but its combination of random exploration with selection has led to a profitable run even in the chaos of the last two years. Loveless admits that he barely understands the rules, much less could he have intuited them -- top-down design was just not an option.
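A quick sanity check on that count, assuming it refers to choosing which 10 of the 28,000 attributes appear in a rule:

```python
import math

# Number of ways to choose 10 attributes out of 28,000 candidates --
# before even considering thresholds or comparison operators.
possibilities = math.comb(28000, 10)
print(f"{possibilities:.2e}")  # on the order of 10^37, comfortably over
                               # a trillion trillion trillion (10^36)
```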

For normal electronics, random change can be fatal. If a circuit connection loosens inside your laptop or mobile device, that usually prevents the entire device from working further. Bringing it back to health involves a trip to the repair shop. But now consider a robot on Mars. There is no repair shop. Astronauts have neither the time, nor the tools, nor the expertise to repair electronics. External help is literally millions of miles away. Jet Propulsion Laboratory roboticist Adrian Stoica has a radical proposal: let the hardware figure out how to repair itself.

Day and nighttime temperatures on the surface of Mars might vary from a frigid -133 degrees C (-207 degrees F) to a balmy +27 degrees C (+80 degrees F). Electronic connectors expand, contract, and often break. A person facing temperature variations changes clothes. A circuit can't change its clothing, but perhaps it can work around its failed parts.

Stoica has designed and built flexible analog circuits that use a genetic approach to reconfigure the circuit after it is built to adapt to circumstances. The hardware that Stoica used is called a field-programmable transistor array, a two-dimensional, Manhattan-like grid of transistors and other components interconnected by other transistors acting as switches. The switches can be either closed (binary value 1), allowing current flow; or open (binary value 0), preventing current flow. Switches are under the binary control of the bit string (sequence of 0s and 1s) that defines the "genetic code" of the circuit.

Suppose the circuit that has failed is supposed to function as an amplifier. Through an evolutionary search, a program running on small, separate control hardware can try various bit strings (candidate redesigns), thus changing the switch settings. The control hardware can also test the circuit with known input-output pairs, essentially a regression test. If the reconfigured circuit passes the tests, then it is "repaired." If not, it receives a score based on how close it gets; the highest-scoring bit strings are then modified and recombined to yield new ones, and the process repeats.
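The shape of that loop can be sketched as follows. This is a toy illustration, not Stoica's actual system: on real hardware, fitness would come from driving the reconfigured circuit with known inputs and scoring how close its outputs are, whereas here a hidden "working" switch configuration stands in for that regression test.

```python
import random

random.seed(1)
N_SWITCHES = 16
# Hidden stand-in for "a configuration that passes every regression test".
working = [random.randint(0, 1) for _ in range(N_SWITCHES)]

def score(candidate):
    """Number of regression checks the candidate passes (stand-in fitness)."""
    return sum(c == w for c, w in zip(candidate, working))

def evolve(pop_size=20, generations=200, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_SWITCHES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        if score(pop[0]) == N_SWITCHES:   # passes every test: "repaired"
            return pop[0]
        parents = pop[:pop_size // 2]     # keep the highest scorers
        children = [pop[0][:]]            # elitism: carry the best forward
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_SWITCHES)
            child = a[:cut] + b[cut:]     # recombine two high scorers
            child = [bit ^ (random.random() < mutation_rate)
                     for bit in child]    # occasional bit-flip mutation
            children.append(child)
        pop = children
    return max(pop, key=score)

best = evolve()
```

The details (population size, mutation rate) are arbitrary; the structure of the loop -- score, select, recombine, mutate, retest -- is the point.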

Admittedly, the control hardware itself might fail, so a future design might strive for a symmetric design in which any part of a circuit can be used to repair any other part. Stoica has been able to show that his circuits can adapt to temperature variations ranging from -180 degrees C to +120 degrees C (-292 degrees F to +248 degrees F), exceeding the variation of Mars.

The eventual goal is a hierarchy of adaptation. There will be some small closed loops where the system is adaptive at the cell level (like skin repairing a cut). Then, if there are more failures, higher-level reconfiguration will take over, giving degraded but still serviceable performance (like a person on crutches). The goal is to do the most local repair possible, replacing parts only as a last resort. The benefit will be to vastly reduce the amount of redundant equipment the spacecraft needs to carry, reducing cost and making room for more scientific experiments.

## Self-Repair: A Quantitative Understanding

Because you may remember the "Dr. Ecco's Omniheurist Corner" column that I wrote for Dr. Dobb's, you might enjoy a short puzzle to help you understand the quantitative consequences of mutual repair in a safety architecture. Classically, space missions use redundancy to overcome failure: every unit U is triplicated into physical components c1, c2, and c3, so the unit can continue working provided at least one component is working. Let's try to understand how this could work in a simple power-distribution puzzle on a faraway planet.

Suppose we want to send electricity from one building (building A) to another (building B) 10 meters away in the hostile, cold Martian environment. We use three parallel cables, each 10 meters long and one meter apart (see Figure 1). Within each building, the three cables are securely connected to one another, so the only problem is to ensure some uninterrupted circuit from one building to the other.

Figure 1

The environment is so hostile, however, that every day each linear meter of cable has a 1/100 chance of being broken, independently of every other linear meter. Once broken, a cable can't be repaired.

**Warm-Up:** Given three cables, what is the expected number of days until the two buildings have no electrical connection at all?

**Hint:** Treat each cable as consisting of 10 independent meter sections, each of which has a 1/100 chance of being broken.

**Solution to warm-up:** A single break anywhere along a cable breaks it. Each day, a linear meter has a 1 - (1/100) = 99/100 chance of not breaking, so all 10 linear meters of a cable have a (99/100)^10 chance of surviving a single day. However, once broken, a cable stays broken. Because I like to hack (and because I knew the next problem would be hard to solve in closed form), I did a simulation consisting of 1000 runs, where each run simulated fail/no-fail for each cable. Once a cable failed, the simulation recorded the day when it broke. The maximum fail day of all three cables was the result of the run. The arithmetic mean was 17.2 days (where the first day is day 0). **End of Warm-Up.**

To address this problem, suppose we lay a 2-meter cable across the three cables at their midpoints (see the dotted line in Figure 1). When its switches are set, electricity can flow across the three cables. The switches are specially shielded, so breaks won't occur there. Now electricity can go up the first five meters of one cable and proceed up the last five meters of another. What is the mean time to failure now?

**Hint:** The cross cable can fail, too.

Imagine that there were no cross cable yet, but I offered you four meters of cross-cable and as many connectors as you wanted. Where would you put the cross-cables to get the greatest mean time to failure? I invite you to send me your best answers to this one.

**Solution:** I simulated this as six 5-meter cables, each having a 1 - (0.99)^5 chance of failing each day. The cross cable had a 1 - (0.99)^2 chance of failing each day. The reasoning is as follows: there is a connection provided at least one leftmost 5-meter segment, at least one rightmost 5-meter segment, and the cross-segment are all unbroken. There is also a connection provided a single left-to-right cable is unbroken. Taking the later of those two failure times in each of 1000 runs led me to a mean time to failure of 20.6 days.

—D.S.
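Both simulations can be reproduced with a short Monte Carlo sketch. This is my own reimplementation from the description above, not the author's original code; with a different random seed the means come out near, but not exactly at, 17.2 and 20.6 days.

```python
import random

random.seed(42)
P_METER = 0.01                        # daily break chance per meter
FULL_FAIL = 1 - (1 - P_METER) ** 10   # a 10-meter cable breaks today
HALF_FAIL = 1 - (1 - P_METER) ** 5    # a 5-meter half breaks today
CROSS_FAIL = 1 - (1 - P_METER) ** 2   # the 2-meter cross cable breaks today

def fail_day(p):
    """First day (counting from day 0) on which a component breaks."""
    day = 0
    while random.random() >= p:
        day += 1
    return day

def warmup_run():
    """Three independent 10-meter cables; connection lasts until all break."""
    return max(fail_day(FULL_FAIL) for _ in range(3))

def cross_run():
    """Three cables split at the midpoint by a shielded cross cable."""
    left = [fail_day(HALF_FAIL) for _ in range(3)]
    right = [fail_day(HALF_FAIL) for _ in range(3)]
    cross = fail_day(CROSS_FAIL)
    # A direct path needs both halves of the same cable intact.
    direct = max(min(l, r) for l, r in zip(left, right))
    # A cross path needs some left half, the cross cable, and some right half.
    via = min(max(left), cross, max(right))
    return max(direct, via)

RUNS = 1000
mean_warmup = sum(warmup_run() for _ in range(RUNS)) / RUNS
mean_cross = sum(cross_run() for _ in range(RUNS)) / RUNS
print(f"warm-up: {mean_warmup:.1f} days; with cross cable: {mean_cross:.1f} days")
```

Rather than tracking each meter separately, the sketch collapses a segment's 10 (or 5, or 2) independent meters into a single daily failure probability, which is equivalent in distribution.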
