Correctness and Performance of Parallel Code
Once threading has been added to an application, the developer faces a new class of programming bugs. Many of these are difficult to detect and require extra time and care to ensure a correctly running program. This article covers a few of the more common threading issues, including:
- Data Race
- Thread Stall
- Deadlock
A data race occurs when two or more threads access the same resource concurrently and at least one of the accesses is a write. If the threads are not synchronized, there is no way to know which thread will reach the resource first, which leads to inconsistent results in the running program. For example, in a read/write data race, one thread attempts to write to a variable at the same time another thread reads it; the reading thread gets a different result depending on whether the write has already occurred. The tricky thing about a data race is that it is non-deterministic. A program could run correctly 100 times in a row, yet when moved onto a customer's system with slightly different characteristics, the threads no longer interleave as they did on the test system and the program fails.
The way to correct a data race is with synchronization. One way to synchronize access to a common resource is a critical section: placing a critical section around a block of code guarantees that only one thread may execute that block at a time, so threads access the resource in an orderly fashion. Synchronization is a necessary and useful technique, but take care to limit it to what is actually needed, because it costs performance. Since only one thread is allowed inside a critical section at a time, any other thread needing to enter it is forced to wait, leaving precious compute resources idle.
Another method of ensuring shared resources are correctly accessed is a lock: a thread locks a specific resource while using it, which denies access to other threads. Two common threading errors can occur when using locks. The first is a thread stall. This happens when a thread locks a resource and then moves on to other work in the program without first releasing the lock. When a second thread tries to access that resource, it is forced to wait indefinitely, causing a stall. A developer should ensure that threads release their locks before continuing through the program. A deadlock is similar to a stall, but occurs when using a locking hierarchy. For example, if Thread 1 locks variable A and then tries to lock variable B while Thread 2 simultaneously locks variable B and then tries to lock variable A, the threads deadlock: each is waiting on a variable the other has locked. In general, avoid complex locking hierarchies where possible, and ensure that all threads acquire locks in the same order.
Intel Parallel Inspector
Debugging threaded programs may seem a large burden, but it can be easier for developers using Intel Parallel Inspector. This tool detects threading errors while your program is running, then displays the errors and correlates them to the offending lines of source code. One of the great features of Intel Parallel Inspector is that an error does not have to occur in order to be detected. For example, as mentioned earlier, data races are non-deterministic, making them very difficult to catch. Intel Parallel Inspector will pinpoint where a data race can possibly occur even if the code happened to execute correctly while the tool was examining it. The key to using Intel Parallel Inspector effectively is good code coverage during the analysis run: the tool cannot detect an error if the region of code containing it is never executed, so it is important to make sure all functions in the program are exercised.
A second feature of Intel Parallel Studio that assists multithreaded debugging is the Intel Parallel Debugger Extension. This extension adds re-entrant call detection and the ability to serialize parallel regions. Re-entrant call detection finds concurrent calls to the same function, signaling to the programmer that the function should be inspected for thread safety. Serializing parallel regions enables the programmer to dynamically change the execution of an OpenMP program so that parallel regions run serially, i.e., on a single thread. Dynamic control over the execution of parallel regions helps isolate concurrency bugs.
Intel Parallel Amplifier
Once correctness issues are solved, performance tuning can begin. Intel Parallel Amplifier is a tool that leverages instrumentation technology to aid in tuning applications threaded with OpenMP, the Windows API, or POSIX threads. The tool lets you visually inspect the performance of your threads to answer questions such as:
- Is the work evenly distributed between threads?
- How much of the program is running in parallel?
- How does performance increase as the number of processors employed increases?
- What is the impact of synchronization between threads on execution time?
The answers to these questions can help you optimize your application further. For example, if you determine that the workload is not balanced evenly between threads, you can change the code and iteratively retest until the load is balanced. If synchronization time is excessive, you can analyze the code to see how to simplify or safely remove some of the synchronization. Techniques for doing so are beyond the scope of this article; the main point is that Intel Parallel Amplifier lets you monitor the effect of each optimization as you tune.
Multicore processors offer performance headroom to applications, and the best way to extract the full potential of a multicore processor is through threading. The software tools Intel has created can ease this transition, helping to ensure that your application is optimally tuned for the hardware that powers it.
For More Information