Direct Data Synchronization
The techniques discussed above give an application sole access to data through a reference, so that most of the code has little need to worry about data update contention. In many cases, however, it is useful to have a more general method for dealing with shared data directly, rather than through references. A simple example is the addition of a new element to a doubly linked list, where both the forward and backward pointers to the new element must be set simultaneously.
The second abstraction therefore involves updating shared data directly, rather than through intermediate pointers.
Reasoning about resource contention issues is difficult. The usual approaches require analysis of all possible orderings of data accesses and updates among contending tasks. Here we discuss a higher level abstraction that can be based on "test and set" notions. The analysis here is of static states only and requires examining static data consistency, rather than the dynamics of the multiple access and store orderings that achieve the consistent state.
The idea here is that reasoning about static consistency is far simpler than the complexity of reasoning about multiple possible sequences of data addition, deletion, change and access.
When it comes to Dynamic Synchronization of State Data Changes, a technical discussion of the requirements for correct programs based on an underlying memory model for C++ can be found in papers by Hans Boehm and Clark Nelson (see C++ Standards Committee notes JTC1/SC22/WG21 - N2171, N2239, N2300). These considerations are necessary for language designers and library authors. The abstraction there is based on reasoning about actions that change state data in terms of "happens before" and "happens after" relationships.
An approach to simplify this somewhat is given by Herb Sutter in ISO/IEC JTC 1/SC 22/WG 21 N2197 = ANSI/INCITS J16 07-0057, where several principles and rules are developed. The approach here is closer to what Sutter suggests.
However, discussions such as these are meant for language experts, compiler writers, and systems designers. A higher level abstraction, that can be built upon these concepts, is needed for application use.
In terms of Static Synchronization of State Data Changes, the model for consistency relies on a view of program transitions from one state to another through synchronization points. Program correctness here relies on verifying assertions that all externally visible state data is consistent and valid before and after each synchronization point. The underlying library then implements the correct dynamic ordering operations to achieve these consistent states.
The support suggested here comes in two parts:
- An extended CAS operation on a Synchronization Element, for single updates.
- A list of CAS and other atomic operations, for modification of multiple non-contiguous items.
Extended CAS for Single Updates
Often it is useful to perform an atomic compare-and-swap on a contiguous set of data that is wider than the underlying system support for direct CAS instructions. Especially for some lock-free algorithms, it is necessary to atomically change a control element which contains pointers, counters, and flags. For example, to avoid the ABA problems cited above for a recycled reference, a change counter can be stored and incremented with the reference. Any update to the reference must also increment the counter.
Hence, the extended CAS suggested here is designed to compare-and-swap a Synchronization Element which can be an object of any width. To accomplish this, the compiler can either generate the CAS operation directly, or it can use an algorithm that maintains a pointer or other data reference to indirectly manage atomic update of the Synchronization Element. This also provides programs with cross-platform support that is not dependent on a particular processor instruction set.
A second extension is a Compare-and-Swap-Conditional. Here the data swap occurs only if both a simple Boolean value is True and the referenced data items compare equal. This operation is common in several lock-free algorithms.
Also, it is necessary that appropriate memory barrier instructions be issued based on values derived from the Synchronization Element.
Operations List for Multiple Updates
As for Shared and Exclusive Use Pointers, it is often necessary to perform a set of simultaneous updates transactionally; that is, atomically in the sense that either all or none of the operations are performed. This provides an important property of lock-free approaches; i.e., they are composable in the sense that an atomic operation can consist of a set of lower level atomic operations. In particular, the Install of an Exclusive Use Pointer can be included in the synchronized operations list.
This operation can be provided through a Synchronize function operating on a list of Extended CAS and similar atomic operations. Atomic operations here are a small library of basic expressions that can often be supported directly on platforms with appropriate processor instructions. The Synchronize performs an atomic transaction where:
- Either all the operations in the list are performed, or else none are. In the latter case, the Synchronize is flagged as failed.
- None of the intermediate states are observable by other tasks.
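The interface for such a Synchronize function might look as follows. This is a sketch under assumed names: each operation carries a check (e.g., a CAS compare) and an apply step, and a single internal mutex stands in for the lock-free claim/commit protocol described later, purely to make the all-or-none semantics concrete.

```cpp
#include <atomic>
#include <cassert>
#include <functional>
#include <mutex>
#include <vector>

// Hypothetical atomic operation: a compare step and an update step.
struct AtomicOp {
    std::function<bool()> check;   // does the expected value still hold?
    std::function<void()> apply;   // perform the update
};

std::mutex g_sync_mutex;   // placeholder for the real lock-free protocol

// All checks are evaluated first; only if every check passes are the
// apply steps executed. Otherwise the Synchronize is flagged as failed
// and no update is visible.
bool Synchronize(const std::vector<AtomicOp>& ops) {
    std::lock_guard<std::mutex> guard(g_sync_mutex);
    for (const auto& op : ops)
        if (!op.check()) return false;   // flagged as failed; nothing done
    for (const auto& op : ops)
        op.apply();                      // all operations performed
    return true;
}
```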
In addition to consistency of multiple data changes, it is often necessary to provide synchronization that ensures a consistent set of inputs before processing starts. Consistency here requires that:
- All input values are known to be consistent with each other at the time of the transaction's beginning.
- Results of the computation are consistent with each other and with these inputs.
This is often sufficient, since, in a changing world, the only guarantee that can be made about the results is that they were valid at some point in time.
Consistency checks of both inputs and outputs at the synchronization point would intuitively seem appropriate. However, this can be less efficient, and it provides little additional benefit: it only raises the question of how much more useful the result would be if the input assumptions change immediately after the commit operation anyway.
Input synchronization can be provided through a GetConsistent function which takes a list of data objects and returns a proxy or proxies for further access to the objects.
As with the Use Pointer transactions above, implementation requires multi-pass operations for both input and result consistency checks. Here a Shared Resource Manager is needed to track all Synchronization Elements, counters, and flags that are in contention.
Input synchronization occurs in two phases:
- A list is made of current values for the data objects requested.
- The list is then scanned to verify that no values have changed; otherwise the process is repeated until a consistent set is found.
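The two phases above can be sketched with version counters, in the style of a sequence lock. The names and the two-object restriction are assumptions made for brevity; a writer bumps the counter before and after each store, so an odd count means an update is in flight.

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>
#include <utility>

// Hypothetical version-counted object for GetConsistent.
template <typename T>
struct Versioned {
    std::atomic<std::uint64_t> seq{0};   // odd while a store is in progress
    std::atomic<T>             value{};
    void store(T v) {
        seq.fetch_add(1);                // mark update in progress
        value.store(v);
        seq.fetch_add(1);                // publish the new value
    }
};

// Returns a snapshot of two objects known to have been mutually
// consistent at a single point in time.
template <typename A, typename B>
std::pair<A, B> GetConsistent(Versioned<A>& a, Versioned<B>& b) {
    for (;;) {
        std::uint64_t va = a.seq.load();  // phase 1: record versions
        std::uint64_t vb = b.seq.load();
        if ((va | vb) & 1) continue;      // an update is in flight
        A sa = a.value.load();            // record the values
        B sb = b.value.load();
        if (a.seq.load() == va && b.seq.load() == vb)
            return {sa, sb};              // phase 2: nothing changed
        // otherwise repeat until a consistent set is found
    }
}
```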
The commit operation is as follows:
- A first pass constructs an internal list and adds objects potentially in contention to the Shared Resource Manager.
- The list is then sorted by some criteria, so that all transactions process common elements in the same order.
- Next the transaction attempts to conditionally claim each item in the list. A claim here is just a flag and does not yet change the value of the object. If another transaction has already made a claim, either this transaction or the other is marked as failed; however, if the other transaction is already marked as committed, then this transaction must be the one to fail.
- If all the items can be claimed and the transaction has not been marked as failed, the transaction is marked as committed and all the atomic actions in the list can then be executed. These actions can be executed by the committing task, or, if the task is pre-empted, then by other tasks that require access to the shared objects.
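The claim and commit phases above can be sketched as follows, under assumed names. Claiming is just CAS-ing the transaction descriptor's address into an atomic slot on the object, and objects are processed in address order so competing transactions meet common elements in the same sequence. For brevity this sketch simply fails the later arrival on a conflict and omits helping by other tasks.

```cpp
#include <algorithm>
#include <atomic>
#include <cassert>
#include <utility>
#include <vector>

enum class TxStatus { Active, Committed, Failed };

struct Transaction { std::atomic<TxStatus> status{TxStatus::Active}; };

// Hypothetical shared object tracked by the Shared Resource Manager.
struct SharedObject {
    std::atomic<int>          value{0};
    int                       pending{0};        // value to install on commit
    std::atomic<Transaction*> claim{nullptr};    // the claim flag
};

bool TryCommit(Transaction& tx,
               std::vector<std::pair<SharedObject*, int>> writes) {
    // Sort so all transactions process common elements in the same order.
    std::sort(writes.begin(), writes.end());
    std::vector<SharedObject*> claimed;
    for (auto& [obj, newval] : writes) {
        Transaction* expected = nullptr;
        // A claim is only a flag; the object's value is unchanged here.
        if (!obj->claim.compare_exchange_strong(expected, &tx)) {
            tx.status.store(TxStatus::Failed);   // another claim holds it
            break;
        }
        obj->pending = newval;
        claimed.push_back(obj);
    }
    // Commit only if every claim succeeded and no one failed us meanwhile.
    TxStatus want = TxStatus::Active;
    bool ok = tx.status.compare_exchange_strong(want, TxStatus::Committed);
    for (SharedObject* obj : claimed) {
        if (ok) obj->value.store(obj->pending);  // execute the atomic action
        obj->claim.store(nullptr);               // release the claim
    }
    return ok;
}
```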
Access to a shared object which is marked as "claimed" involves the following:
- If the claiming transaction is marked as failed, use the old value.
- If the claiming transaction is marked as committed, attempt the atomic action and, if successful, use the new value.
- If the claiming transaction is still in the process of committing, continue the list processing of the claiming task until it is either committed or failed. Then proceed as above.
- Finally, ensure cleanup of the transaction if this operation handles the last claim.
Again, various policies can be used to decide which task to fail, with due regard to preventing system thrashing.
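The access rules for a claimed object can be sketched as follows, with assumed names. A reader here simply waits for an Active owner to resolve; a full implementation would instead help by continuing the owner's operation list, and would clean up the transaction when handling its last claim.

```cpp
#include <atomic>
#include <cassert>

enum class Status { Active, Committed, Failed };

struct Tx { std::atomic<Status> status{Status::Active}; };

// Hypothetical shared slot: a value plus the claim of a pending transaction.
struct Slot {
    std::atomic<int> value{0};
    int              pending{0};       // value the claim wants to install
    std::atomic<Tx*> claim{nullptr};
};

int ReadShared(Slot& s) {
    Tx* owner = s.claim.load();
    if (owner == nullptr) return s.value.load();  // unclaimed: plain read
    for (;;) {
        switch (owner->status.load()) {
        case Status::Failed:
            return s.value.load();                // use the old value
        case Status::Committed: {
            int old = s.value.load();             // attempt the atomic action
            s.value.compare_exchange_strong(old, s.pending);
            return s.pending;                     // use the new value
        }
        case Status::Active:
            continue;                             // wait until resolved
        }
    }
}
```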