# Does Parallelism Affect Determinism?

If you have two threads executing the same code concurrently on separate cores, using the same input but producing different results, is that code nondeterministic?

Cameron and I recently had a discussion with a colleague who described his program that way. We thought the use of the term "nondeterministic" for the situation he was describing was a bit odd....

First of all, what do "determinism" and "nondeterminism" mean? According to NIST (the National Institute of Standards and Technology), a "deterministic algorithm" is "an algorithm whose behavior can be completely predicted from the input." They go on to say that "*... Each time a certain set of input is presented, the algorithm does the same computations and gives the same results as any other time the set of input is presented.*"

Of course this is what we want, what our colleague wanted, what any programmer wants. That single answer is the correct answer. There can be only one! It has functional correctness. We would not like our programs producing different outputs for 1 + 1: sometimes it's 2 and other times it's 4.

What about a nondeterministic algorithm?

*A conceptual algorithm with more than one allowed step at certain times and which always takes the right or best step.*

The key to nondeterministic algorithms, besides allowing more than one step from a given input, is that they allow for more than one acceptable output, or for one acceptable output with multiple paths to reach it.

We can represent the algorithm as a finite state machine: the problem is encoded as states and transitions, and the solution (or solutions) as paths through that machine.

Figure 1(a) shows a deterministic and 1(b) a nondeterministic FSM defined by the following:

FSM = (S,I,f,s_{o},F)

where:

- S is a finite set of states
- I is the finite set of inputs (in this case 1, 0)
- s_{o} is the initial state (or set of initial states)
- F is the subset of S representing the final (accepted) states
- f is the transition function

The differences between the state machines are the initial state (an NFSM can have a set of initial states) and the transition function. The deterministic transition function f_{d} assigns a single next state to each unique input-state pair, whereas the nondeterministic transition function f_{n} can assign a set of states to each input-state pair. Table 1 is a state table comparing the next states for the input-state pairs under each type of state machine.
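To make the difference concrete, here is a small Python sketch. The states and transitions below are invented for illustration (they are not taken from Figure 1 or Table 1): f_{d} maps each (state, input) pair to exactly one next state, while f_{n} maps it to a set of possible next states.

```python
# f_d: each (state, input) pair yields exactly one next state.
DFA = {
    ("s0", "0"): "s0",
    ("s0", "1"): "s1",
    ("s1", "0"): "s1",
    ("s1", "1"): "s0",
}

# f_n: each (state, input) pair may yield a *set* of next states.
# Missing pairs simply have no allowed move.
NFA = {
    ("s0", "0"): {"s0"},
    ("s0", "1"): {"s0", "s1"},
    ("s1", "1"): {"s0"},
}

def run_dfa(table, start, inputs):
    """Follow the single possible path; the result is fully predictable."""
    state = start
    for symbol in inputs:
        state = table[(state, symbol)]
    return state

def run_nfa(table, starts, inputs):
    """Track every state the machine could be in after each input."""
    states = set(starts)
    for symbol in inputs:
        states = set().union(*(table.get((s, symbol), set()) for s in states))
    return states
```

`run_dfa` always lands in one state for a given input string, while `run_nfa` returns the whole set of reachable states. Tracking sets of states like this is also the idea behind the subset construction that turns a nondeterministic machine into a deterministic one.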

The final (accepted) state is determined not just by the input but by the state the machine is in at the time.

Some problems are inherently one or the other. Any mathematical function is deterministic, of course. There may be several routes from your point of origin to your final destination when booking a flight. NP-complete problems are, by definition, problems a nondeterministic machine can solve in polynomial time, and no deterministic polynomial-time algorithm is known for them. A game would be pretty boring if it were deterministic. All those possible paths to victory are what make you want to play. Nondeterminism determines the replay value of a game. Right now we are playing Way of the Samurai 3, Yakuza 3, and Fallout 3. These games are the ultimate exercise in nondeterminism. In these games, using the same character, there are many paths through the game but only a few outcomes. I can't tell you how many ways your character can still end up blowing up the city of Megaton in Fallout. That's the replay value.

But there are gray areas. If we have an algorithm that utilizes a random number generator, then we would not be able to predict the output even with the same input: random state changes cause different results. Would it be considered deterministic or nondeterministic? Also, nondeterministic algorithms can be converted to deterministic algorithms.
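A quick sketch of why this is a gray area: a pseudorandom generator makes the output unpredictable in practice, yet the generator itself is deterministic — fix its seed and the "random" sequence repeats exactly. The `noisy_sum` function below is a made-up example, not code from the discussion.

```python
import random

def noisy_sum(values, seed=None):
    """Add pseudorandom noise to a sum; a fixed seed reproduces the result."""
    rng = random.Random(seed)
    return sum(v + rng.randint(0, 9) for v in values)

# Unseeded: repeated calls on the same input will generally disagree.
a = noisy_sum([1, 2, 3])
b = noisy_sum([1, 2, 3])

# Seeded: the "randomness" is completely predictable from the input.
c = noisy_sum([1, 2, 3], seed=42)
d = noisy_sum([1, 2, 3], seed=42)
```

So whether such an algorithm counts as deterministic depends on whether you treat the seed as part of the input.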

So, going back to our discussion with our colleague about his wacky code, let's restate the original situation:

If you have two threads executing the same code concurrently on separate cores, using the same input but producing different results, is that code nondeterministic?

The full scenario: the code exhibited the expected results with sequential execution, and also when threads were added on a single-core machine. But when executed on a multicore machine, the code produced unstable results.

Can deterministic code mysteriously convert to nondeterministic code? Something had the potential to cause the program to produce unstable results, but it was not revealed until the code ran on multiple cores and affected the output. There could be something external to the code (perhaps another thread) that unexpectedly changed a value the code shared. That unexpected modification did not occur when the threads executed on a single core (due to timing), but when they executed on separate cores the modification affected the output. There can be other timing issues when code runs on multiple cores: faster or slower processors can cause values to be written to memory, or not written, unexpectedly, and the speed of I/O devices and the use of global variables should also be examined. This can cause a deterministic FSM to require additional states, and therefore new paths through the algorithm, converting the deterministic code to nondeterministic code.
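A minimal sketch of the kind of unexpected shared-value modification described above (the counter and iteration counts are invented for illustration). The unsynchronized version performs its read-modify-write in three separate steps that two threads can interleave, losing updates; guarding the same steps with a lock makes the result predictable again.

```python
import threading

ITERATIONS = 100_000

counter = 0
lock = threading.Lock()

def unsafe_increment():
    # Read, modify, write as separate steps: another thread can write
    # between our read and our write, and that update is then lost.
    global counter
    for _ in range(ITERATIONS):
        tmp = counter
        counter = tmp + 1

def safe_increment():
    # The lock makes the read-modify-write atomic, restoring determinism.
    global counter
    for _ in range(ITERATIONS):
        with lock:
            counter += 1

def run(worker, n_threads=2):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

unsafe_total = run(unsafe_increment)  # anywhere up to 200000
safe_total = run(safe_increment)      # always exactly 200000
```

Whether the unsynchronized version actually loses updates on a given run depends on the interpreter, the scheduler, and the hardware — which is exactly the "works sequentially and on one core, breaks on many" behavior our colleague saw.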