The Science of Complexity

Complex systems change in time, making empirical validation of theoretical concepts a difficult process. To get around this problem, David and Neil present a computer model that includes the crucial properties of complex systems: competition, frustration, feedback, and adaptability.


October 01, 2002
URL:http://www.drdobbs.com/cpp/the-science-of-complexity/184405169


David is a member of the Mathematical Finance Group at Oxford University and can be reached at [email protected]. Neil is a theoretical physicist running a research group in the Physics Department and a faculty member at Oxford University. Neil can be reached at [email protected].


You are leaving work for the commute home and want to get there as quickly as possible. There are two possible routes, Route 0 and Route 1. Which should you choose? When it is 3:00am and streets are empty, one is as fast as the other. But at 6:00pm, hundreds of other people are trying to do exactly the same thing. Since all commuters remember which route was least crowded on the previous few nights, each one tries to second-guess the others, hoping to find the least crowded route. However, if too many people choose the same route, that choice becomes the worst to make.

Sound familiar? It should. Although everyone knows which route was crowded on previous nights, the reason we don't all make the same decision is that we're free agents. Therefore, we tend to analyze the same information in different ways and come up with different conclusions.

Such situations represent a game in which the players (agents) use limited information about past outcomes to choose between two options. Unlike the two-player games made famous by John Forbes Nash, Jr., in A Beautiful Mind, however, this game has many players. In addition, the game is repeated and the players try to learn from their mistakes. There is no possibility of coordinating a compromise solution by phoning around: information is limited and global, and there is no local communication channel available.

Such problems typify concepts inherent in so-called "complex systems" and the general topic of complexity. We now realize that complexity is of great importance across a wide range of applications in which reliable control of the overall system is crucial. A crash in the stock market, for example, or a jam in a computer network or on a road, can arise as a by-product of the complexity within the system itself: Feedback effects at the microscopic level give rise to macroscopic, collective effects that are generated inadvertently. Hence, collective behavior arises even though no one actually programmed it into the system. It is a crowd effect.

Complexity also represents a new science. As opposed to much of 20th century scientific thinking, which was reductionist (that is, geared toward breaking objects apart to understand them), the idea offered by complexity is that more is different. In the framework of our commuter, representing a "two's company, but three's a crowd" scenario, new collective phenomena (such as stock market crashes) emerge as the number of components/players increases. These collective phenomena cannot be understood by analyzing the individual components/players themselves (one car or trader, for instance). Complexity is, therefore, of great interest across a wide range of disciplines: wherever there is a limited resource to be distributed, and lots of objects (people, data-packets, processes, cells, and the like) clamoring to win that resource.

So how should complex systems managers—say, financial regulatory bodies in the case of a market—manage complex systems to reduce the occurrence of large changes (extreme events) and/or limit their potentially catastrophic consequences? Too much intervention could prove costly in terms of the monitoring resources that are required; too little, and the system can be brought to its knees while no one is looking. So what works best? This is a fundamental question facing next-generation information technologies and society in general. It also lies at the heart of the quest to understand complexity.

In this article, we'll describe a general computer model (game) that provides a paradigm for complex systems. The source code that implements a multiagent simulation of this system is available electronically; see "Resource Center," page 5. The model includes the crucial properties of competition, frustration, feedback and adaptability, and also generates extreme events in an apparently spontaneous way.

Let the Games Begin

Since our game is expressed in terms of binary digits, it is easy to understand using concepts from information theory and computation. (This binary feature sets these simulations apart from previous microsimulation techniques, and from mathematical models where the characteristics of a population are averaged together.) Figure 1 summarizes a game in which a population of N agents compete for a limited resource such as road space. Each agent is equipped with a limited number of strategies and can choose between them adaptively. A strategy is a forecasting rule that generates a prediction of the next winning outcome based on knowledge of the recent history of winning choices.

At timestep t, each agent (market trader) decides whether to enter a game where the choices are options 0 or 1. Agents only play if they are sufficiently confident of winning. Each agent exhibits bounded rationality: He identifies the optimal strategy from within his own limited set as being the one that has performed best in the past. Because of the limited global resource, a maximum of L(t) agents can win at each timestep. For example, there is an opportunity in the market for L(t) possible buyers, capacity in a data network for L(t) data-packets, or room on a computer system for L(t) users. If the number choosing option 1, n1(t), is less than or equal to L(t), then 1 is deemed to be the winning choice (to be a buyer in the market). Changing L(t) affects the system's quasiequilibrium; hence, the resource level can be used to mimic a changing external environment (a macroeconomic effect in the case of a financial market). The excess demand D(t)=n1(t)-n0(t) (which mimics price change in a market) and the number V(t)=n1(t)+n0(t) of active agents (which mimics the volume of market orders) represent output variables. These two quantities fluctuate with time and can be combined to construct other global quantities of interest (summing the price changes, for example, gives the current price).
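As a minimal sketch (our own names, not code from the article), the two output variables for a single timestep follow directly from the option counts:

int excessDemand(int n0, int n1) { return n1 - n0; }  //D(t): mimics price change
int volume(int n0, int n1)       { return n1 + n0; }  //V(t): mimics order volume

A running sum of the excessDemand() values then gives the current price.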

The resulting time-series appears random, yet contains subtle temporal correlations. In moments of crisis, the temporal correlations can be strong enough to produce a crash. If 0 denotes sell, then a sequence of outcomes with a high percentage of 0s corresponds to a price movement that is mostly down—a financial market crash as in Figure 2. This is a purely collective effect since it is neither engineered nor initiated by any one particular agent.

Down to Details

The system divides itself into a number of objects: the global history, the strategies, and the agents.

Each agent randomly picks q strategies at the beginning of the game, with repetitions allowed. The agents are heterogeneous since they have, in general, different strategy sets. Each agent has knowledge (memory) of the past m winning outcomes. Since each outcome is 0 or 1, there are P=2^m possible history bit strings. Consider m=2, where the four possible history bit strings are 00, 01, 10, and 11, with the least-significant digit representing the winning choice at the last timestep. The history can be represented in decimal form μ=(0,1,...,P-1) and is constructed from the history of winning outcomes w(t), as in Example 1(a). As each new winning outcome is announced, the dynamics corresponding to the transitions between these history bit strings can be represented on a De Bruijn graph (Figure 3).
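For instance (a sketch with our own names), the decimal history and its De Bruijn-graph transitions reduce to a few bit operations; announcing an outcome w moves the system from node μ to one of its two successors, (2μ)%P or (2μ+1)%P:

//Sketch: update the decimal-encoded history mu (0..P-1, P = 2^m)
//when the new winning outcome w (0 or 1) is announced.
int nextHistory(int mu, int w, int P)
{
    return ((mu << 1) | w) % P;  //drop oldest bit, append newest outcome
}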

Strategy

An agent decides which option to choose based on the prediction of a strategy, which consists of a response to the history μ(t). At each timestep, an agent looks at his set of q strategies and chooses the strategy that performed best in the recent past. With m=2, each strategy can be represented by a string of P=4 bits [ijkl], with i,j,k,l=0 or 1 corresponding to the decisions based on the histories 00, 01, 10, and 11, respectively. For example, strategy [0000] always corresponds to picking option 0 irrespective of the m=2 history bit string, while [1111] corresponds to always picking option 1. [1010] corresponds to picking option 1 given the histories 00 or 10, but picking option 0 given the histories 01 or 11.

Each strategy is created at the beginning of the game by randomly filling a bit array of length P with 1s or 0s.

for(i=0;i<P;i++)
    if(random()>0.5)
        strategy->SetBit(i);

where random() generates a random number between 0 and 1. A strategy's prediction is obtained by indexing the bit appropriate for the current history μ; for instance, strategy_prediction = strategy->GetBit(mu).

The recent success of a strategy can be measured by the number of correct predictions made over a rolling window of length T. After each turn, the points for a strategy are updated, with one point awarded for a correct prediction and zero for an incorrect prediction. The total point score S(t) is given by Example 1(b), where a(i) is the strategy's prediction at timestep i, and lies in the range 0 to T. The points scored by a strategy during this period are stored in the bit array baS and can be updated as in Listing One, where S is the total point score and Si is the score increment. Agents use the points of their strategies to determine whether they are sufficiently confident to play at a given timestep and, if they do play, which of their q strategies to use.
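In symbols (our reconstruction, not Example 1(b) verbatim), the rolling score counts the correct predictions over the last T timesteps:

S(t) = sum over i = t-T, ..., t-1 of delta(a(i), w(i))

where delta(a,w) is 1 if the prediction a(i) matched the winning outcome w(i), and 0 otherwise.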

Agent

Agents are of limited, yet similar, capabilities. Each agent is assigned a memory m, the length of the past history bit string that the agent can use when making its next decision. At the beginning of the game, each agent randomly picks q (>1) strategies, making agents heterogeneous in their strategy sets. Agents are only adaptive if they have more than one strategy to play with. This initial strategy assignment is fixed from the outset of each simulation and provides a systematic disorder that is built into each run.

Each agent has a threshold probability level τ, which mimics a confidence level. Only strategies having at least r points are used, where r=τT. We call these "active strategies." Agents with no active strategies within their set of q strategies do not play at that timestep and become temporarily inactive. Agents with one or more active strategies use the one with the highest point score; any ties between active strategies are resolved with a coin toss. Listing Two shows the process of determining an agent's best strategy, where pStrats is an array of pointers to the agent's strategies. This function returns the index of the agent's best strategy, between 0 and q-1, or -1 if the agent is inactive at that timestep (does not have sufficient confidence to take part).

Bringing It All Together

At each timestep, a common bit string of the m most-recent outcomes is made available to the agents by the game object. This is represented in decimal form by μ(t) and is the only information agents can use to decide which option to choose at the current timestep. Agents then submit their individual predictions of the winning outcome to the game object. This object aggregates the agents' predictions, calculating n0 and n1, and determines the winning outcome using the rule:

w(t)=H[L(t)-n1(t)]

where L(t) is the resource level and H[...] is the Heaviside function. If L(t)=n1(t), indicating no clear winning option, the outcome is replaced with a random coin toss. The global information μ(t) is updated by dropping the first bit and concatenating the latest outcome to the end of the history bit string.
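A sketch of this step (our names; random() is the article's own helper), with the tie resolved as just described and the history then updated by the nextHistory() operation sketched earlier:

int winningOutcome(int L, int n1)
{
    if(n1 == L)                   //no clear winning option
        return (random() > 0.5);  //random coin toss
    return (n1 < L) ? 1 : 0;      //Heaviside function H[L - n1]
}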

A game can be specified with just six parameters: the memory m, the number of agents N, the number of strategies per agent q, a time horizon T over which strategy points are collected, a threshold probability level τ to play at a given timestep, and a resource level L(t). This resource level is typically some fraction of the number of active players; L(t)=φV(t), for instance. To start the simulation, you have to seed the initial history with m random winning outcomes. To ensure that you remove any transient effects due to this initial history and the initial adjustment of strategy scores, you discard a period of time at the beginning of the simulation.
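Collecting the specification in one place (a sketch; the struct and the field phi are our own, under the assumption L(t)=φV(t) above):

//Sketch: the parameters that specify a game.
struct GameSpec {
    int    m;    //memory: length of the history bit string
    int    N;    //number of agents
    int    q;    //strategies per agent
    int    T;    //time horizon for strategy points
    double tau;  //confidence threshold to play at a given timestep
    double phi;  //resource fraction: L(t) = phi*V(t) (our assumption)
};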

Reducing the Strategy Space

Because of the feedback in the game, any particular strategy's success is short lived. If all agents begin to use similar strategies and make the same decisions, such a strategy ceases to be profitable. The game can be broadly classified into three regimes, according to whether the number of strategies in play is much smaller than, comparable to, or much greater than the total number of strategies available.

In total, there are Q=2^P possible strategies defining the decisions in response to all possible m-bit histories. This set is referred to as the "full strategy space" (FSS); see Figure 4.

If the number of strategies in use amongst the agents, Nq, is greater than Q, it is advisable to consider just one central strategy space. This can be updated globally, making the game implementation more efficient and removing the likely duplication of strategies at the agent level. In the case of a FSS, we define one 2D bit array of dimension P×Q containing all the possible strategies, which can be populated using a binary counting algorithm. Agents then randomly pick q integers in the range 0 to Q-1 to decide their strategies. However, for large values of m, a FSS implementation is not possible, and the strategies must be generated on a per-agent basis.
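One way to realize the binary counting (a sketch with our own names, reusing the article's CBitArray and assuming freshly zeroed rows): column s of the table is just the binary representation of s, so bit mu of strategy s is (s >> mu) & 1:

//Sketch: populate the full strategy space as a P x Q bit table, where
//entry (mu, s) is strategy s's decision for history mu. Counting s from
//0 to Q-1 = 2^P - 1 enumerates every possible strategy exactly once.
void buildFSS(int P, CBitArray **fss)  //fss[mu] must hold Q bits, initially clear
{
    int Q = 1 << P;                    //Q = 2^P possible strategies
    for(int s = 0; s < Q; s++)         //binary counting over strategies
        for(int mu = 0; mu < P; mu++)
            if((s >> mu) & 1)          //bit mu of s: response to history mu
                fss[mu]->SetBit(s);
}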

Interestingly, the principal features of the FSS can be reproduced in a smaller "reduced strategy space" (RSS) of Q=2^(m+1) strategies, wherein any two strategies are either uncorrelated or anticorrelated (for more information, see "On the Minority Game: Analytical and Numerical Studies," by D. Challet and Y.C. Zhang at http://xxx.lanl.gov/abs/cond-mat/9805084). This provides a method of increasing the value of m while limiting the explosive growth in size of the FSS. A RSS can be generated using Listing Three.

It is clear from this example that each strategy has an anticorrelated partner in the strategy space: a strategy that produces the opposite prediction for every possible history. Recording information for the anticorrelated strategies is redundant, since their predictions and strategy scores can be recovered from the anticorrelated partner. In Listing Four, anticorr returns the index of the anticorrelated strategy. This mapping depends on the ordering of the strategies within the strategy space; for the RSS example, see Listing Five.

You can thus reproduce the dynamics using a space of just P strategies. Shrinking the strategy space in this way is advantageous: It cuts memory requirements and speeds up the simulation.

Applications and Extensions

There is no unique model for such agent-based complex systems, and there are many ways to construct a simulation that exhibits nontrivial dynamics. Typically, such models share the common elements identified earlier: competition for a limited global resource, frustration, feedback from past outcomes into future decisions, and adaptability of the individual agents.

We have described a model incorporating these features, but it can easily be extended in a number of directions.

It turns out that one of the hardest issues facing complex-system science is the empirical validation of theoretical concepts against real-world data. Complex systems are, by definition, complex: They change in time, making it difficult to know whether one has a representative dataset and whether it is sufficiently large. Hence, there is a great desire to study a complex system (or candidate complex system) that has a reliable and very long dataset associated with it. In this respect, financial markets are wonderful examples. Their movements are continually being recorded, not only by regulatory bodies but also by private data-supply companies. In short, they represent arguably the longest-running, most accurately recorded, and largest-scale human interaction study in the history of civilization.

Of course, the Holy Grail in financial markets is to identify some kind of warning signal associated with upcoming large changes. Such large changes comprise successive timesteps of movement in one direction (most famously, down). Using multiagent simulations of the type described here, we replaced the simulated series of outcomes (price movements) with a real series that had been converted into a binary sequence corresponding to up/down movements. This sequence then became the history of winning outcomes w(t), and the strategies were scored accordingly. At each timestep, a survey of the next winning outcome was taken amongst the fraction of best-performing agents, those that had most successfully identified the previous market movements. Their net prediction formed our best guess of whether the market would rise or fall at the next timestep.
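In outline (a sketch; the polling details, names, and the coin-toss tie-break are our own assumptions), the net prediction is a majority vote among the polled agents:

//Sketch: majority vote among the polled agents' predictions (0 or 1).
int netPrediction(const int *pred, int n)  //pred[i] = polled agent i's prediction
{
    int ones = 0;
    for(int i = 0; i < n; i++)
        ones += pred[i];
    if(2*ones == n)
        return (random() > 0.5);           //no consensus: coin toss
    return (2*ones > n) ? 1 : 0;           //majority prediction
}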

Figure 5 shows the result of our studies, in terms of profit attained, for the foreign exchange market of U.S. Dollar versus Japanese Yen over the period 1991-93. A simple trading strategy was employed each hour: Buy Yen if the game predicts the rate to be favorable, sell at the end of the hour, and bank any profit. The performance of the model is superior to holding either a long or short Yen position for the entire period. While the patterns picked up by the multiagent population are real, transaction costs in the actual market may prevent this profit from being realized. Nonetheless, this remarkable result has motivated us to start looking at predictions over several timesteps, using the infrastructure of the Oxford Centre for Computational Finance (http://www.occf.ox.ac.uk/). The search for the Holy Grail is, therefore, well and truly underway.

DDJ

Listing One

if(win==strategy_prediction)
    Si=1;                          //one point for a correct prediction
else
    Si=0;
S += Si - baS->GetBit(iSindex);    //# strategy points over the window
if(Si==1)
    baS->SetBit(iSindex);          //record this turn's result
else
    baS->ClearBit(iSindex);
if( ++iSindex == T)                //update (wrap) the rolling index
    iSindex = 0;


Listing Two

nas=0;      //number of active strategies (<=q)
max=0;      //maximum number of strategy points
for(i=0;i<q;i++) {
    S=pStrats[i]->points();     //# points 
    if(S>=r) {                  //determine if strategy is active
        if(S>max) {             //check if it's the best
            nas=1;
            max=S;
            work[0]=i;          //temp array to record strategy
        }
        else if(S==max) {       //we have a tied situation
            work[nas]=i;
            nas++;
        }
    }
}
if(nas==0)                     //no active strategy
    return -1; 
else if(nas==1)                //a distinct active strategy
    return work[0];
else {                         //tie-break situation
    i=(int)(nas*random());
    return work[i];
}


Listing Three

m_iQ = (int)pow(2,m_iM+1);      //Q = 2^(m+1) strategies in the RSS
d = 1;
//m_SSpace[history bit string]->[strategy number]
m_SSpace = new CBitArray*[m_iP];
for(i=0;i<m_iP;i++)
    m_SSpace[i] = new CBitArray(m_iQ);
m_SSpace[1]->SetBit(1);         //seed the recursive construction
for(i=1;i<m_iM;i++) {
    d *= 2;
    for(j=0;j<d;j++) {
        for(k=0;k<d;k++) {
            if(m_SSpace[j]->GetBit(k)==1) {
                m_SSpace[j+d]->SetBit(k);
                m_SSpace[j+d]->SetBit(k+d);
            }
            else
                m_SSpace[j]->SetBit(k+d);
        }
    }
}
d *= 2;
for(i=0;i<d;i++)        //and mirror into bottom half
    for(j=0;j<d;j++)
        if(m_SSpace[i]->GetBit(j)==0)
            m_SSpace[i]->SetBit(j+d);


Listing Four

if(i<P) {                       //strategy stored explicitly
    strat_prediction = m_SSpace[mu]->GetBit(i);
    strat_points = S[i];
}
else {                          //recover from anticorrelated partner
    i = anticorr(i);
    strat_prediction = 1 - m_SSpace[mu]->GetBit(i);
    strat_points = T - S[i];
}


Listing Five

int anticorr(int i) {
    return 2*P - 1 - i;
}



Example 1: (a) History bit strings; (b) total point score S(t).


Figure 1: Complexity game where N agents are competing for resources.


Figure 2: Simulated market crash based on the collective effect of agents.


Figure 3: De Bruijn graph of transitions between history bit strings as each new winning outcome is announced.


Figure 4: Full strategy space.


Figure 5: Performance of the model strategy was superior to holding a long or short Yen position for the entire period.
