Serial Control Flow Patterns
Early serial programming languages had a structure similar to the underlying machine language's control flow mechanisms: commands were executed in sequence, but with the ability to jump (goto) to a different point in the sequence. It was soon realized, however, that indiscriminate use of goto led to unreadable and unmaintainable programs. Structured programming limited control flow constructs to a small, composable set with desirable properties. Structured control flow constructs also make it possible to assign textual coordinates to points in a program and to reason locally about its behavior [Dijkstra1968].
Not all of the serial patterns described in this section are needed for universal computation. Two common universal subsets of the following patterns lead to the classes of imperative and functional programming languages. In general, either recursion or iteration suffices (both are not required), and likewise for dynamic memory allocation and random write.
A. Sequence
Two tasks are considered to be in sequence if the execution of the second task may only begin once all operations in the first task have completed, including all memory updates.
B. Selection
The selection pattern executes one of two tasks based on the result of a Boolean-valued condition expression (whose evaluation constitutes a third task).
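The selection pattern can be sketched as follows; the task and condition names are hypothetical, chosen only to make the three constituent tasks explicit:

```python
# Selection: run exactly one of two tasks based on a Boolean condition.
# The evaluation of the condition is itself a third task.
def selection(condition, task_true, task_false):
    if condition():          # third task: evaluate the Boolean condition
        return task_true()   # first task
    else:
        return task_false()  # second task

result = selection(lambda: 2 + 2 == 4, lambda: "yes", lambda: "no")
```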
C. Iteration
The iteration pattern repeats a task (its "loop body") until some condition is met. Every invocation of the loop body is finished before the next one is started, as if they were in sequence. Evaluation of the condition is also executed in sequence with the loop body.
Memory locations that are both read and written by a loop body give rise to loop-carried dependencies. While a loop-carried dependency is often used to compute an index, the loop index can also be considered a natural part of the looping pattern itself. Many loops can be parallelized even when they have loop-carried dependencies.
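As a minimal sketch, a running sum exhibits a loop-carried dependency: the accumulator is both read and written on every trip through the loop body, yet the loop is still parallelizable because addition is associative and can be restructured as a tree reduction:

```python
# An iteration with a loop-carried dependency: `total` is both read and
# written by every invocation of the loop body.
def running_sum(values):
    total = 0                # loop-carried memory location
    for v in values:         # the loop index is part of the looping pattern
        total = total + v    # read-modify-write: a loop-carried dependency
    return total

# Despite the dependency, this loop can be parallelized: addition is
# associative, so the sum may instead be computed as a tree reduction.
```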
D. Functions and Recursion
Sequences of operations can be stored in functions which can then be invoked repeatedly. Functions are parameterized by input data that is used for a specific activation of the function, and have their own local state. Functions compute values and return them. A pure function cannot modify its inputs or cause other side effects, such as modification of memory.
Functions can have their own local storage. If fresh locations for this local storage are allocated with every activation of the function, then it is possible for a function to be called recursively. Such recursive functions lead to a divide-and-conquer algorithmic pattern.
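A small sketch of the resulting divide-and-conquer pattern: each recursive activation receives fresh local storage (here `lo`, `mid`, `hi`), which is exactly what makes recursion possible. The function name and range-splitting scheme are illustrative choices, not prescribed by the text:

```python
# Divide-and-conquer via recursion: every activation allocates fresh
# copies of its local state (lo, mid, hi).
def dc_sum(values, lo, hi):
    """Sum values[lo:hi] by recursively splitting the range in half."""
    if hi - lo == 0:
        return 0
    if hi - lo == 1:
        return values[lo]        # base case: a single element
    mid = (lo + hi) // 2         # local state, fresh per activation
    return dc_sum(values, lo, mid) + dc_sum(values, mid, hi)
```

`dc_sum` is also a pure function: it computes and returns a value without modifying its inputs or causing other side effects.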
Serial Data Management Patterns
The random read and write data access pattern maps directly onto underlying hardware mechanisms. Stack and dynamic memory (heap) allocation are common higher-level patterns.
A. Random Access Read
Given an index, a random access read retrieves the value associated with that index. Arrays are the simplest form of randomly accessible memory, and all other data structures can be built on top of them: a physical computer's RAM is nothing more than a large one-dimensional array of fixed-length integers.
Indices can be computed and can themselves be stored in other memory locations. The former property allows for the construction of higher-level patterns such as stacks, while the ability to store indices in memory allows for complex data structures based on pointers (which are just abstractions for indices).
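Both properties can be illustrated with a flat array standing in for RAM; the linked-list layout below is a hypothetical example, with stored indices playing the role of pointers:

```python
# A flat array models RAM. Each node occupies two cells: a value and a
# stored index (a "pointer") to the next node; -1 marks the end.
memory = [50, 2, 60, 4, 70, -1]

def traverse(memory, start):
    """Follow stored indices through a linked list packed into a flat
    array, collecting the values along the chain."""
    values = []
    i = start
    while i != -1:
        values.append(memory[i])   # random access read of the value
        i = memory[i + 1]          # the next index is itself stored in memory
    return values
```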
B. Stack Allocation
A stack supports a last-in first-out (LIFO) model of memory allocation, which is often useful in nested function calls for supporting allocation of fresh copies of local state. Stack allocation has excellent coherency properties both in space and time.
Stack allocation is a natural fit for the local variables of serially nested function activations: each activation allocates its locals on entry and deallocates them, in reverse order, on return.
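A minimal sketch of the LIFO discipline, assuming a hypothetical `Stack` class over a flat array; real stacks are of course supported directly by hardware:

```python
# LIFO stack allocation: a single stack pointer; allocations are
# contiguous (good spatial coherence) and recently freed cells are
# reused immediately (good temporal coherence).
class Stack:
    def __init__(self, size):
        self.memory = [0] * size
        self.sp = 0                  # stack pointer

    def alloc(self, n):
        """Allocate n fresh, contiguous locations in O(1)."""
        base = self.sp
        self.sp += n
        return base

    def free(self, n):
        """Deallocation must occur in last-in first-out order."""
        self.sp -= n
```

Nested function activations map onto this directly: the inner activation's `free` always precedes the outer one's.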
C. Dynamic Memory (Heap) Allocation
When the LIFO model of memory allocation supported by the stack is not suitable, a more general model of memory allocation can be used that allocates a fresh memory location whenever requested.
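One common realization of this more general model is a free list; the sketch below is a simplified, fixed-cell-size version, assuming a hypothetical `Heap` class over a flat array:

```python
# Dynamic (heap) allocation: a free list hands out fresh locations on
# request, and cells may be freed in any order, unlike the LIFO stack.
class Heap:
    def __init__(self, size):
        self.memory = [None] * size
        self.free_list = list(range(size))   # all cells start out free

    def alloc(self):
        """Return a fresh location whenever one is requested."""
        return self.free_list.pop()

    def free(self, loc):
        """Freed cells return to the pool in arbitrary order."""
        self.free_list.append(loc)
```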
D. Collections and Data Abstraction
In addition to simple arrays, collection abstractions can include nested arrays and databases. A nested array can hold a collection of other arrays inside it. The subarrays can be of a fixed size or can vary in size. The former type of nested array we will call a partitioned array; the latter we will call a segmented array.
One-dimensional segmented arrays and operations on them can be implemented efficiently using auxiliary index and flag arrays [Blelloch 1990]. Likewise, partitioned arrays can be represented and operated on efficiently using a packed representation for the data itself and auxiliary data structures to track the partition boundaries.
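As a sketch of the flag-array technique in the style of [Blelloch 1990], a one-dimensional segmented array can be stored as a packed data array plus an auxiliary flag array marking segment starts; the segmented-sum operation below is one illustrative use:

```python
# A segmented array: packed data plus a flag array in which a 1 marks
# the first element of each segment.
data  = [1, 2, 3, 4, 5, 6]
flags = [1, 0, 0, 1, 0, 1]   # segments: [1, 2, 3], [4, 5], [6]

def segmented_sums(data, flags):
    """Sum each segment of a segmented array."""
    sums = []
    for x, f in zip(data, flags):
        if f:                # a set flag starts a new segment
            sums.append(0)
        sums[-1] += x
    return sums
```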
A database maintains a set of elements and supports search operations; specifically, elements can be found by partial matches on their content.
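A minimal sketch of content-based partial matching; the records and field names are hypothetical, and a real database would of course use indices rather than a linear scan:

```python
# A database as a set of elements searchable by partial match on content.
records = [
    {"name": "ada",   "lang": "analytical"},
    {"name": "grace", "lang": "cobol"},
    {"name": "alan",  "lang": "ace"},
]

def search(records, **query):
    """Return every element whose content matches all given fields."""
    return [r for r in records
            if all(r.get(k) == v for k, v in query.items())]
```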