ML and Colored Petri Nets for Modeling and Simulation

Peter uses ML, a Lisp-like little language, to simulate and investigate multiprocessor architectures.


September 01, 1991
URL:http://www.drdobbs.com/tools/ml-and-colored-petri-nets-for-modeling-a/184408621

Figure 1: The Petri net used to model the multiprocessor computer system.




A little language for a big job

Peter D. Varhol

Peter is a freelance writer and an assistant professor of computer science and mathematics at Rivier College in New Hampshire. He can be contacted through DDJ.


In a recent operating systems course, one of my graduate students constructed an analytical model of various hardware configurations running a multiprocessor computer system. To her surprise, a configuration with a single fast processor produced a shorter average waiting time for jobs than the same processor running in tandem with a much slower processor (although intriguingly, the number of jobs processed was higher with the latter). Furthermore, two processors of the same power running in tandem produced a shorter average waiting time than the two unbalanced processors. I confirmed that the mathematics were right, and we found supporting evidence in a textbook, but it was not the intuitive answer. Therefore, I decided to build a model of the two configurations and simulate the flow of jobs through the processors.

The language I chose to develop the model was ML (short for Meta Language), a functional programming language developed at the University of Edinburgh by a group headed by Robin Milner. An ML specification emerged in 1987 to become Standard ML. Due to its interactive nature, it is an interpreted language. Interpreters were developed at the University of Edinburgh for Unix machines and at AT&T Bell Laboratories for the VAX and 680x0 Unix computers. Meta Software has ported the Edinburgh interpreter to the Macintosh (the platform I used to develop this article), and makes use of it in their software tools.

An additional reason I chose ML is that I had some experience working with Meta Software's family of formal design tools, which makes use of ML as the underlying language for customizing and assembling a Petri net model or simulation. In particular, Design/CPN (for colored Petri nets) enables a designer to express much of the model in terms of a graphical representation, with an extended version of ML used to make certain declarations and set conditions within the model.

ML Characteristics

Like most functional languages, ML is not a large language. It makes use of lambda calculus, which contains only variables, function expressions, and function applications. In ML, functions are treated as first class data objects, and it is possible to perform such operations as passing a function to a function (see Table 1). All computations that are possible in other languages are possible using this formalism. I liked what I had read about ML because its formalism had a mathematical logic to it that made programming seem more like solving a puzzle than doing real work.
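
Before turning to the table, a tiny example of that idea (a minimal sketch; twice and inc are illustrative names of my own, not part of the model developed below). The function twice takes another function as an argument and applies it two times:

  (* twice takes a function f and a value x, and applies f two times *)
  fun twice f x = f (f x);

  fun inc n = n + 1;

  twice inc 5;      (* evaluates to 7 *)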

Table 1: ML commands and syntax

  Variable Declaration
  val x = 5    (integer)
  val y = 2.7  (real)
  x + y        (error -- type mismatch)

  Negation
  val a = ~3

  Function Definition
  fun square (x: int) = x * x   (integer argument and function)
  fun square (x: real) = x * x  (real argument and function)
  fun square x = x * x          (error -- type cannot be deduced)

  Integer to Real Conversion
  real (5)

  Real to Integer Conversion
  floor (2.7) (rounds down to nearest integer)

  Boolean Expressions
  >   greater than
  <   less than
  =   equal to
  <>  not equal to
  <=  less than or equal to
  >=  greater than or equal to

  Equality Operator for Real Numbers
  x == y (to account for approximate representations of real numbers)

  Conditional Expressions
  fun abs x = if x >= 0 then x else ~x

  Conditional Conjunctions
  fun range x = 0 <= x andalso x <= 99

  Characters and Strings
  chr 97    (returns the character "a")
  ord "a"   (returns the ASCII value 97)
  "string"  (string representation)

  Tuples
  (12,3)         (a pair)
  divide (12,3)  (returns 4)

  Lists
  [5, 2, 7]  (three-element list)

  Datatype Binding
  datatype SEASON = spring | summer | autumn | winter (enumerated type)

To me, ML is like Lisp with type checking. In both languages it is natural to express a large number of operations as function declarations. However, in Lisp it is possible for any variable or function to accept any type of value; the only type errors come from type mismatches or improper operations on types. ML, on the other hand, is a strongly typed language.

In Standard ML, all functions have types associated with them, so a type error can arise from, for example, passing a function an argument of the wrong type. One important advantage of ML is that there is rarely a need to declare variable or function types, because its type-checking mechanism can deduce the types of most variables and functions from their composition.

For example, the assignment val x = 3 will return int, because the value passed to x is an integer. Likewise, fun double x = 2 * x is expecting to receive an integer and return an integer, because I used the integer 2 rather than the real 2.0. To make it a real function, I would have to say fun rdouble x = 2.0 * x. Many of the predefined arithmetic functions in Standard ML are polymorphic, so that these functions can accept different types as inputs.
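
A short interactive session makes the inference concrete (a sketch only; the types noted in the comments are what an SML interpreter reports, though the exact wording of its responses varies between implementations):

  val x = 3;                 (* the interpreter infers x : int *)
  fun double x = 2 * x;      (* inferred as int -> int, because 2 is an integer *)
  fun rdouble x = 2.0 * x;   (* inferred as real -> real *)
  double 7;                  (* evaluates to 14 *)
  rdouble 7.0;               (* evaluates to 14.0 *)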

I confess I prefer to work in a typeless language such as Lisp, because declaring types has always seemed to me a waste of time. However, there are certainly advantages to a strongly typed language in writing and debugging correct code. A strongly typed language without the need for declarations is to me the best of both worlds.

Like Lisp, ML has the concept of lists. There are two types of lists: the list and the tuple. The concept of the tuple in ML is the same as its mathematical definition. It is a list of set length, but the data types of its members can be different. A list, on the other hand, can be of any arbitrary length, but the data types of all its elements must be the same. For this program, I used the list construct to store random numbers. The ML function map acts like the Lisp mapcar, which applies an operation consecutively to all elements of a list.
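
The following sketch shows the difference between the two constructs, along with map (the helper double is assumed from the earlier sketch; pair and jobs are illustrative names of my own):

  val pair = (12, "job");        (* a tuple: fixed length, mixed types *)
  val jobs = [5, 2, 7];          (* a list: any length, all elements the same type *)
  map double jobs;               (* applies double to every element, giving [10, 4, 14] *)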

Applying ML to a Multiprocessor Model

The multiprocessor computer system model was quite simple. I used functions representing each processor, plus a common queue (see Listing One). The queue was simply a random number generator that placed values into a list. When a processor function finished its current job, it called the queue function to remove the next value from the list and generate a new job. The new job consisted of a random number that determined how long the processor was to spend computing that job. I varied the relative speeds of the processors by multiplying that random number by a factor expressing how much slower the slow processor was than the other. For example, multiplying the random number received by the slower processor by three made that processor act three times slower than the other one. This model consists of some functions adapted from Wikstrom's textbook on the subject (see "References").
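
A minimal sketch of the slowdown idea (slowdown and jobTimes are illustrative names of my own, not the names used in the listing): scaling every job time in a processor's list by a factor makes that processor behave that many times slower.

  (* Scale every job time in a processor's list by a slowdown factor;  *)
  (* a factor of 3 makes that processor act three times slower.        *)
  fun slowdown (factor : int) jobTimes = map (fn t => factor * t) jobTimes;

  slowdown 3 [2, 5, 1];    (* yields [6, 15, 3] *)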

There is no predefined random number generator in ML. I considered and rejected an example provided in Wikstrom's book as inadequate for generating a large number of statistically correct random numbers. Instead, I used the congruence method, X[n+1] = 7^9 * X[n] (mod 10^10), with X[0] as the seed value. Schaum's volume on numerical analysis indicates that this equation will generate 5.1 to the seventh power, or close to 90,000, pseudorandom numbers with acceptable statistical properties and a fairly uniform distribution before repeating.
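
A quick sanity check of the generator can be written in a few lines (a sketch of my own; it relies on the random, listGenerator, and seed definitions shown in Listing One, and sumList, sample, and sampleMean are illustrative names): the mean of a long run of values drawn from a roughly uniform [0, 1) distribution should be near 0.5.

  fun sumList nil = 0.0
    | sumList (x :: rest) = x + sumList rest;

  val sample = listGenerator seed random 1000;
  val sampleMean = sumList sample / real (length sample);   (* should come out near 0.5 *)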

There is also no way of accessing the system clock from Standard ML. Therefore, I could not use the random number in the processor function to signify an absolute period of time to execute jobs. I couldn't say, for example, wait three seconds, then go to the random number queue again. Instead, I had to use relative times. Each processor took its job's value from the random number list and counted it down to 0 before taking another random number from the queue. This approach meant that the faster processor received proportionally more jobs than the slower one, which is what would happen in real life.
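
The relative clock itself can be as simple as the following (a minimal sketch of my own; Listing One only shows where such a function is called):

  (* Count a job's relative time down to 0; each recursive call stands *)
  (* for one tick of simulated time spent on that job.                 *)
  fun countDown 0 = 0
    | countDown n = countDown (n - 1);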

The last of the ML functions computed system statistics. These included the mean and standard deviation for the total number of jobs processed, waiting time in the system (I did not distinguish between system and queue because the queue was just a list of pseudorandom numbers) and jobs processed per processor. These statistics were used to support the results of the analytic model.
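
The summary statistics themselves take only a few lines of ML (a sketch; sum, mean, and stddev are illustrative names, and this computes the population form of the standard deviation):

  (* Mean and standard deviation of a list of real-valued observations. *)
  fun sum nil = 0.0
    | sum (x :: rest) = x + sum rest;

  fun mean xs = sum xs / real (length xs);

  fun stddev xs =
      let val m = mean xs
          val squares = map (fn x => (x - m) * (x - m)) xs
      in sqrt (mean squares)     (* sqrt is Math.sqrt in later SML dialects *)
      end;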

One concern with this model is that I used a uniform distribution with a range of 10 (in other words, the longest job was ten times longer than the shortest one) for processing time required by my simulated processes. I have no idea if this accurately represents the distribution of job times in a computer system. Certainly, a part of it has to do with the type of work being done. If any reader has information on this subject, I would like to know it.

Using Meta Software's Design/CPN with ML

Petri nets, developed in the early 1960s by Carl Adam Petri, are defined as a four-tuple, C = (P,T,I,O), where P is a finite set of places, T is a finite set of transitions, I is the input function (giving each transition its input places), and O is the output function (giving each transition its output places). A place can contain one or more tokens, which signify the marking of that place. If all input places to a transition have markings, then that transition is enabled and can fire, transferring tokens from the input places to the output places.
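
To make the firing rule concrete, here is a toy rendering of it in ML (a sketch of my own; marking, tokens, enabled, adjust, moveAll, and fire are illustrative names, and this is not how Design/CPN represents nets internally). A marking pairs each place name with its token count, and a transition fires only when it is enabled:

  (* A toy marking: each place name is paired with its token count. *)
  type marking = (string * int) list;

  fun tokens nil p = 0
    | tokens ((q, n) :: rest) p = if p = q then n else tokens rest p;

  (* A transition is enabled when every one of its input places is marked. *)
  fun enabled m nil = true
    | enabled m (p :: rest) = tokens m p > 0 andalso enabled m rest;

  fun adjust m p delta =    (* add delta tokens to place p *)
      map (fn (q, n) => if p = q then (q, n + delta) else (q, n)) m;

  fun moveAll m delta nil = m
    | moveAll m delta (p :: rest) = moveAll (adjust m p delta) delta rest;

  (* Firing consumes a token from each input place and produces one on each output place. *)
  fun fire m inputs outputs =
      if enabled m inputs
      then moveAll (moveAll m (~1) inputs) 1 outputs
      else m;

  val m0 = [("queue", 1), ("cpu1", 0)];
  fire m0 ["queue"] ["cpu1"];     (* yields [("queue", 0), ("cpu1", 1)] *)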

As an example, Figure 1 illustrates the Petri net that I used to model the multiprocessor computer system. Petri nets are especially appropriate for modeling processes, as opposed to states, because the logic that can be codified into the transition can simulate complex processes. Here, I am more interested in the process of jobs being run in the computer system rather than what state the system is in at any given time. Unlike most other modeling methods, Petri nets support the formal analysis of many conditions, such as reachability and verifiability, which help demonstrate the correctness of the model and make the simulation process possible.

The colored Petri net model is an extension of standard Petri nets that uses colors to designate the different types of data a place can hold. Colored Petri nets can be formally reduced to standard Petri nets, so any provable conclusions concerning the latter can also be applied to the former. In my model, all processes (markings) can be routed to either processor, so there was no need to specify colors as a way of directing jobs to one processor or another (although that might be useful in examining different job-scheduling algorithms).

An extended version of ML, called CPN ML, is embedded in Design/CPN and is used by the programmer to declare types with colors, specify initial markings on places, and define guards on transitions. In addition, each transition can have an attached code region, which executes whenever the transition is fired. CPN ML is also used in the algorithms built into Design/CPN to determine the enabling and occurrence of bindings resulting from a simulation run.
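
The flavor of these declarations, as best I can render it here (a hedged sketch: JOB and j are illustrative names, and the syntax is approximated from the description above and the Design/CPN manual cited in the references rather than quoted from it):

  (* Approximate CPN ML declarations for this model.  The exact syntax  *)
  (* belongs to Design/CPN and may differ between releases; treat this  *)
  (* as a sketch of the kinds of declarations described above.          *)
  color JOB = int;      (* a job token carries its required service time    *)
  var j : JOB;          (* a variable bound to a token moving along an arc  *)

  (* A guard is an ML boolean expression attached to a transition, for  *)
  (* example [j > 0 andalso j <= 10], which keeps the transition from   *)
  (* firing unless the binding of j satisfies the condition.            *)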

Developing the Petri net model itself was easy. The ready queue is one place, the two processors are two more places, and two terminal places signify completed jobs. Two transitions pass jobs off to the processors, while two more transitions connect the processors to the completed-job places. Design/CPN uses an extension of Petri nets that supports the concept of time, so I was able to incorporate the amount of time it took each processor to finish a job without resorting to the jury-rigged method described earlier. A guard on each transition passing jobs off to the processors simply delayed it from firing again until the amount of time specified by the random number had expired.

There are several reasons why this version of the model is preferred over the ML-only version. First, it was easier to develop. In addition to the graphical representation, it took only a few lines of ML to specify the parameters of the system. Less knowledge of ML was required, although correspondingly more knowledge of Design/CPN was necessary. Secondly, it is possible to develop models of complex interacting systems with more detail and greater accuracy using a graphical representation. The accuracy comes from Design/CPN's use of Petri net formal logic to ensure certain aspects of the consistency and correctness of the system. More detail can be achieved with hierarchical Petri nets, which I didn't use. Lastly, seeing the model in action is psychologically more reassuring than trusting entirely to equations. It provides a reality check that the simulation is actually operating as it was intended.

Products Mentioned

Design/CPN, Meta Software, 150 Cambridgepark Drive, Cambridge, MA 02140; 617-576-6920

And the Results of the Simulation...

The simulation supported the results of the analytical representations of the multiprocessor systems. In fact, we found through further analytical methods that, in terms of time in the system, a heterogeneous two-processor system in which one processor is at least three times faster than the other has a longer waiting time than a system of identical processors with the same total power. The reason is that the throughput of the slower processor is so low that it drags down the average waiting time of the entire multiprocessor system to the point where that configuration simply cannot keep up with the others.

Nevertheless, all the multiprocessor systems tested outperformed several separate computers of equivalent total power with separate job queues, thus demonstrating the utility of multiprocessor systems in general. I confirmed this through simulation with the aforementioned models.

References

Design/CPN: A Reference Manual. Cambridge, Mass.: Meta Software, 1991.

Trivedi, Kishor. Probability and Statistics with Reliability, Queuing, and Computer Science Applications. Englewood Cliffs, N.J.: Prentice-Hall, 1982.

Wikstrom, Ake. Functional Programming using Standard ML. Englewood Cliffs, N.J.: Prentice-Hall, 1987.





[LISTING ONE]



(* A list generator for the processes *)

fun listGenerator start next 0 = nil
  | listGenerator start next n =
        start :: listGenerator (next start) next (n - 1);


(* Seed value for the random number generator.  This number should *)
(* be changed for every simulation run to produce a different random *)
(* number list.  (Avoid exactly 0.5: the recurrence below maps it to *)
(* itself, so the sequence would never change.) *)

val seed = 0.5731;


(* Random number generator.  Uses the congruence method, *)
(* Xn+1 = 7^9 * Xn (mod 10^10), expressed here on values scaled into *)
(* the interval [0, 1), where taking the fractional part performs the *)
(* modulus step.  (7^9 = 40353607.) *)

fun random seed =
    let val x = 40353607.0 * seed
    in x - real (floor x)
    end;

val seed = random seed;    (* rebind seed to the next value in the sequence *)


(* Functions to produce the list of random numbers.  proclist builds a *)
(* list of n values in [0, 1) by repeatedly applying random; ranint *)
(* scales one such value into an integer in the range a to b; randomList *)
(* combines the two to give the list of integer job times. *)

fun proclist n seed = listGenerator seed random n;
fun ranint a b x = floor (x * real (b - a + 1)) + a;
fun randomList x y z seed = map (ranint x y) (proclist z seed);


(* Part of a sample processor in the multiprocessor system.  countDown *)
(* is a recursive function (not shown here) that serves as the relative *)
(* time clock; statistics (also not shown) reports the results. *)

fun processorOne randomList =
    (map countDown randomList;
     statistics randomList);


Copyright © 1991, Dr. Dobb's Journal
