Atomic Operations and Low-Wait Algorithms in CUDA


The NVIDIA Kepler architecture significantly improved the ability of threads to communicate outside a thread block via atomic operations. Atomic operations essentially lock a memory location until they complete. Used correctly, atomic operations can help implement a wide range of generic data structures and algorithms in the massively threaded GPU programming environment.

This tutorial demonstrates how to implement a massively parallel, low-wait counter. Benchmark results show that the provided code is 8x faster on Kepler GPUs and 40x faster on Fermi hardware compared with traditional counters that use atomicAdd() to increment a single memory location. Unlike traditional atomic counters, the massively parallel counter implemented in the ParallelCounter class below is not susceptible to performance degradation from pathological usage, such as having every thread increment the counter at the same time.

Atomic Operations Are Great, But Don't Use Them

To understand what a low-wait massively parallel counter is, it is necessary to first understand the benefits and challenges of using atomic operations in global memory.

The chief feature of an atomic operation is that it locks the affected memory location until the operation completes. Calling atomicAdd(&foo, 1), for example, means that only the thread holding the lock can increment the variable foo by one. All other threads that wish to read or write foo must wait until the lock is released. An atomic operation is necessary whenever a thread updates a location in global memory that might be used by other threads. While the C/C++ construct foo++ looks like a single operation, the hardware might actually carry out three separate steps to perform the increment: (1) fetch foo into a register, (2) increment the register by one, and (3) write the register back to foo in global memory. Without a lock, two or more parallel threads might read foo into a register at the same time, which means they would be unaware of the increments in progress in the other threads. Although the result of simultaneous writes by multiple threads to the same location in global memory is undefined, it is likely that foo will reflect an incorrect number of increment operations and be corrupted.
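
To make the distinction concrete, consider the following minimal sketch. It is not part of the article's counter code; the kernel and variable names are hypothetical, and both counters are assumed to be zeroed before launch:

__device__ uint32_t racyCount;    // updated with a plain, unlocked read-modify-write
__device__ uint32_t atomicCount;  // updated under the atomic lock

__global__ void racyIncrement()   { racyCount++; }                 // increments can be lost
__global__ void lockedIncrement() { atomicAdd(&atomicCount, 1); }  // increments are never lost

Launched across many blocks, racyCount typically ends up far smaller than the number of threads that ran, while atomicCount is always exact.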

Atomic operations present a real challenge in a parallel environment because they serialize execution. Instead of an nProcessor-fold parallel speedup, or an O(nThreads/nProcessor) runtime, an application that performs an atomic operation on a single counter will exhibit a sequential O(nThreads) runtime. Incrementing a single counter with atomicAdd() means that the counter has to be locked, forcing all the parallel threads to stop and wait so they can individually perform the increment operation, one after the other. In other words, it is the antithesis of parallel programming.
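
The traditional counter that the benchmark numbers above compare against boils down to a kernel along the following lines (a hypothetical sketch rather than the actual benchmark source; the counter is assumed to be zeroed before launch):

__device__ uint32_t traditionalCounter;  // one location shared by every thread

__global__ void doTraditionalCounter(uint32_t nSamples) {
  uint32_t tid = threadIdx.x + blockIdx.x * blockDim.x;
  while(tid < nSamples) {
    atomicAdd(&traditionalCounter, 1);  // every increment waits on the same lock
    tid += blockDim.x * gridDim.x;
  }
}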

A low-wait algorithm still uses locking (atomic operations), so there will be some serialization, but it is designed to keep the number of threads that must wait for a lock to be released to a minimum. In other words, it attempts to keep as many parallel threads active as possible.

An NVIDIA SDK histogram example demonstrates a form of low-wait algorithm through a vector of counters that are incremented with atomicAdd() operations. Each element in the vector contains the count for a single bin in the histogram. For uniformly distributed data, the SDK example keeps roughly as many threads active as there are bins being updated. When this number is large, the SDK histogram delivers high performance because many threads are actively incrementing histogram counts. Performance suffers when the data is not uniformly distributed, causing many of the items to fall into a few bins. The pathological case occurs when all the histogram data lands in a single bin.
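
The core of that approach looks something like the following sketch. This is a simplified stand-in for the SDK code; the kernel name, parameters, and the trivial bin-selection rule are illustrative only:

__global__ void histogram(const uint32_t *data, uint32_t *bins,
                          uint32_t nBins, uint32_t n) {
  uint32_t tid = threadIdx.x + blockIdx.x * blockDim.x;
  while(tid < n) {
    atomicAdd(&bins[data[tid] % nBins], 1);  // threads hitting different bins do not contend
    tid += blockDim.x * gridDim.x;
  }
}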

From a hardware point of view, implementing high-performance atomic operations is difficult because of all the complexity required to enforce a lock and to preserve coherency in a massively parallel environment. Kudos to the NVIDIA engineers who have made atomic operations so much faster on the new Kepler GPUs. Even though atomic operations can now approach the speed of global memory, using a lock in a parallel algorithm will likely have dramatic performance implications that must be avoided.

A Parallel Counter Class

The following implementation of a parallel counter C++ class spreads the value of the counter across a vector of several counters. Atomic increment (or decrement) operations are distributed uniformly across the count vector through a modulus operation (% in C/C++) on the CUDA variable threadIdx.x. As discussed previously, this uniformity of access means that up to N_ATOMIC atomic operations can proceed concurrently instead of serializing on a single location, so parallel performance degrades gracefully even in the most extreme pathological case where every thread on the GPU increments the same counter at the same time.

Even on fast GPUs, the modulus operation is expensive, so it is highly recommended that N_ATOMIC be a power of two, because the compiler can then convert the expression (threadIdx.x % N_ATOMIC) to (threadIdx.x & (N_ATOMIC-1)). A bitwise AND is fast relative to a modulus operation. To make the best use of all the threads in a warp, it is also recommended that N_ATOMIC be a multiple of the warp size.
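
If desired, the power-of-two recommendation can be enforced at compile time. The following guard is not part of the listing below; it is an optional addition that assumes C++11 support in the compiler:

// Optional compile-time check inside the ParallelCounter template:
// (threadIdx.x % N_ATOMIC) is equivalent to (threadIdx.x & (N_ATOMIC-1))
// only when N_ATOMIC is a power of two.
static_assert((N_ATOMIC & (N_ATOMIC - 1)) == 0,
              "N_ATOMIC must be a power of two");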

template <uint32_t N_ATOMIC=32>
struct ParallelCounter {
public:
  uint32_t count[N_ATOMIC];

  inline __device__ uint32_t operator-=(uint32_t x) {
    return atomicSub(count + (threadIdx.x % N_ATOMIC), x); 
  }
  inline __device__ uint32_t operator+=(uint32_t x) {
    return atomicAdd(count + (threadIdx.x % N_ATOMIC), x); 
  }
  // spread the counts across the counter
  __device__ __host__ void set(uint32_t x) {
    for(int i=0; i < N_ATOMIC; i++) count[i]=x/N_ATOMIC;
    for(int i=0; i < x % N_ATOMIC; i++) count[i] += 1;
  }
  inline __device__ __host__ uint32_t getCount() {
    // simplest slow method for right now.
    uint32_t sum=0;
    for(int i=0; i < N_ATOMIC; i++) {
      sum += count[i];
    }
    return sum;
  }
};

Listing One: An initial definition of the ParallelCounter class.

The getCount() method is defined using the __device__ and __host__ qualifiers, which means it can be called by either the host or the CUDA device. For simplicity, it is assumed that device-side calls to getCount() are performed by a single CUDA thread and only after all atomic updates are complete. Similarly, the set() method is qualified so it can run on either the host or the device; the structure relies on its implicitly generated constructor and destructor.

Testing Usability

The following firstCounter.cu example code demonstrates three possible ways to use a single ParallelCounter class:

  • Utilize a C++ object entirely on the GPU.
  • Utilize an object on both the host and the GPU.
  • Map an object into Unified Virtual Addressing (UVA) for use by both devices.

For convenience, the full source in Listing Two repeats the ParallelCounter structure definition from Listing One, to make copying the code to a file easy. In addition, a test for structure compatibility with the GPU is performed using the __is_pod() compiler intrinsic. POD_struct compatibility is also highlighted by the choice of "struct ParallelCounter" rather than "class ParallelCounter" in the definition. Instead of instrumenting this code, kernel execution times reported by the NVIDIA nvprof text profiler will be used to compare performance in the next section.

#include <iostream>
using namespace std;
#include <cstdio>
#include <cassert>

///////////////////////////////////////////////////////////////////////////////
#include <stdint.h>

template <uint32_t N_ATOMIC=32>
struct ParallelCounter {
public:
  uint32_t count[N_ATOMIC];

  inline __device__ uint32_t operator-=(uint32_t x) {
    return atomicSub(count + (threadIdx.x % N_ATOMIC), x); 
  }
  inline __device__ uint32_t operator+=(uint32_t x) {
    return atomicAdd(count + (threadIdx.x % N_ATOMIC), x); 
  }
  // spread the counts across the counter
  __device__ __host__ void set(uint32_t x) {
    for(int i=0; i < N_ATOMIC; i++) count[i]=x/N_ATOMIC;
    for(int i=0; i < x % N_ATOMIC; i++) count[i] += 1;
  }
  inline __device__ __host__ uint32_t getCount() {
    // simplest slow method for right now.
    uint32_t sum=0;
    for(int i=0; i < N_ATOMIC; i++) {
      sum += count[i];
    }
    return sum;
  }
};
//////////////////////////////////////////////////////////////////////////////////
// provide a compile-time specified default size for ParallelCounter
#ifndef SPREAD
#define SPREAD 32
#endif

//*********************************************************************
// Some global functions for testing
__device__ ParallelCounter<SPREAD> myCounter;
__global__ void initCounter() {
  int tid = threadIdx.x + blockIdx.x * blockDim.x;

  if(tid == 0)
    myCounter.set(0);
}

__global__ void doCounter(uint32_t nSamples) {
  uint32_t tid = threadIdx.x + blockIdx.x * blockDim.x;

  while(tid < nSamples) {
    myCounter += 1;
    tid += blockDim.x * gridDim.x;
  }
}
__global__ void finiCounter(uint32_t *result) {
  *result = myCounter.getCount();
}

__global__ void initmappedCounter(ParallelCounter<SPREAD> *ptCntr) {
  int tid = threadIdx.x + blockIdx.x * blockDim.x;

  if(tid == 0)
    ptCntr->set(0);
}
__global__ void doPassedCounter(ParallelCounter<SPREAD> *ptCntr, uint32_t nSamples) {
  uint32_t tid = threadIdx.x + blockIdx.x * blockDim.x;

  while(tid < nSamples) {
    *ptCntr += 1;
    tid += blockDim.x * gridDim.x;
  }
}
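// Note: doMappedCounter below is identical to doPassedCounter; defining a separate
// kernel lets the profiler report the two tests' execution times separately.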
__global__ void doMappedCounter(ParallelCounter<SPREAD> *ptCntr, uint32_t nSamples) {
  uint32_t tid = threadIdx.x + blockIdx.x * blockDim.x;

  while(tid < nSamples) {
    *ptCntr += 1;
    tid += blockDim.x * gridDim.x;
  }
}

//*********************************************************************
// Start of main()
int main(int argc, char *argv[])
{
  uint32_t nSamples;
  int device=0;
  
  try {
    if(argc < 3) {
      cerr << "Use: nSamples device" << endl;
      return -1;
    }
    
    nSamples=atoi(argv[1]);
    device=atoi(argv[2]);

    cudaSetDevice(device);
    if(cudaPeekAtLastError() != cudaSuccess) throw "failed to set device";

    uint32_t *result;
    cudaMalloc(&result, sizeof(uint32_t));
    if(cudaPeekAtLastError() != cudaSuccess) throw "cudaMalloc failed";
    
    int nThreadsPerBlock=256;
    int nBlocks = nSamples/nThreadsPerBlock + ((nSamples % nThreadsPerBlock)?1:0);
    if(nBlocks > 65535) nBlocks=65535;
    
    cout << "device " << device << " nSamples " << nSamples
        << " spread " << SPREAD << " nBlocks " << nBlocks
        << " threadsPerBlock " << nThreadsPerBlock << endl;
    cout << "Checking if ParallelCounter is a POD: " 
        << (__is_pod(ParallelCounter<SPREAD>)?"TRUE":"FALSE") << endl;
    assert (__is_pod(ParallelCounter<SPREAD>));
    
    { // test 1: Instantiate an object on the GPU
      uint32_t check;
      // the myCounter object lives entirely in device memory
      initCounter<<<1,1>>>();
      doCounter<<<nBlocks, nThreadsPerBlock>>>(nSamples);
      finiCounter<<<1,1>>>(result);
      cudaMemcpy(&check, result, sizeof(uint32_t), cudaMemcpyDeviceToHost);

      if(cudaPeekAtLastError() != cudaSuccess) throw "failed test1";
      cerr << nSamples << " " << check << endl;
      assert(check == nSamples);
    }

    { // test 2: Instantiate an object on the CPU and copy it to the GPU
      ParallelCounter<SPREAD> foo, *d_foo;
      foo.set(0);
      cudaMalloc(&d_foo, sizeof(foo));
      cudaMemcpy(d_foo, &foo, sizeof(foo), cudaMemcpyDefault);
      if(cudaPeekAtLastError() != cudaSuccess) throw "failed memcpy or cudaMalloc";
      
      doPassedCounter<<<nBlocks, nThreadsPerBlock>>>(d_foo,nSamples);
      if(cudaPeekAtLastError() != cudaSuccess) throw "failed doPassedCounter";

      cudaMemcpy(&foo, d_foo, sizeof(foo), cudaMemcpyDefault);
      if(cudaPeekAtLastError() != cudaSuccess) throw "failed memcpy";

      uint32_t check = foo.getCount(); // read result on host
      cerr << nSamples << " " << check << endl;
      cudaFree(d_foo);
      if(cudaPeekAtLastError() != cudaSuccess) throw "failed free";
      assert(check == nSamples);
    }

#ifdef USE_MAPPED
    { // test 3: use mapped memory (simpler and cleaner but slow!)
      ParallelCounter<SPREAD> *ptCntr;
      // allocate in mapped memory
      cudaMallocHost(&ptCntr, sizeof(ParallelCounter<SPREAD>)); 
      if(cudaPeekAtLastError() != cudaSuccess) throw "cudaMallocHost failed";

      ptCntr->set(0); // initialize on the host
      doMappedCounter<<<nBlocks, nThreadsPerBlock>>>(ptCntr,nSamples);
      cudaDeviceSynchronize();

      uint32_t check = ptCntr->getCount(); // read result on host
      cerr << nSamples << " " << check << endl;
      cudaFreeHost(ptCntr);
      if(cudaPeekAtLastError() != cudaSuccess) throw "failed cudaFreeHost";

      assert(check == nSamples);
    }
#endif
    
    cout << "***** Passed all sanity checks! *****" << endl;
  } catch (const char * e) {
    cerr << "caught exception: " << e << endl;
    cudaError err;
    if((err=cudaGetLastError()) != cudaSuccess)
      cerr << "CUDA reports: " << cudaGetErrorString(err) << endl;
    return -1;
  }
  return 0;
}

Listing Two: The complete source code for firstCounter.cu.

The counter requires two command-line arguments:

  • The number of times the counter will be incremented. For consistency with the histogram example, this number is referred to as nSamples.
  • The number of the CUDA device to use, which makes it easy to compare Fermi and Kepler GPU performance in mixed GPU systems.

Walking through the code starting at main() shows that this example uses C++ exceptions to catch errors. Currently, GPU kernels and CUDA library functions do not throw exceptions on errors, which is why this example uses cudaPeekAtLastError() to determine whether an exception needs to be thrown. Because cudaPeekAtLastError() does not clear the error, cudaGetLastError() can later be used to retrieve the error for printing with cudaGetErrorString().

Next, the result variable is allocated on the device. This variable is used to return the value of the parallel counter for the on-device test.

For convenience, the application prints out information about the runtime configuration. In particular, note that the C preprocessor variable SPREAD can be defined at compile time to test the impact of distributing the atomic operations across various sizes of the internal ParallelCounter count vector.
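
For example, assuming the source has been saved as firstCounter.cu, it might be built and run as follows (the GPU architecture flag and device number are illustrative; adjust them for your system):

nvcc -O3 -arch=sm_35 -DSPREAD=32 -DUSE_MAPPED firstCounter.cu -o firstCounter
./firstCounter 100000000 0          # 100,000,000 increments on CUDA device 0
nvprof ./firstCounter 100000000 0   # collect per-kernel execution times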

Test 1: Utilize The Counter Entirely on the GPU

Listing Three simply initializes the ParallelCounter object myCounter on the GPU to zero. The counter is then incremented nSamples times in parallel on the device, and the count is returned in the result variable. The call to assert() checks that the counter did indeed work correctly.

{ // test 1: Instantiate an object on the GPU
  uint32_t check;
  initCounter<<<1,1>>>();
  doCounter<<<nBlocks, nThreadsPerBlock>>>(nSamples);
  finiCounter<<<1,1>>>(result);
  cudaMemcpy(&check, result, sizeof(uint32_t), cudaMemcpyDeviceToHost);

  if(cudaPeekAtLastError() != cudaSuccess) throw "failed test1";
  cerr << nSamples << " " << check << endl;
  assert(check == nSamples);
}

Listing Three: Utilize the C++ object entirely on the GPU.

The initCounter() kernel zeros out the myCounter object with a call to the set() method. Because the C++ code does not have control over the execution configuration, a check is made to ensure that only a single thread calls the set() method.

The doCounter() kernel increments the counter nSamples times by having each thread use the += operator. Finally, the finiCounter() kernel calls the getCount() method on the GPU so the state of the counter can be returned in result.

Test 2: Porting C++ Objects

The second test instantiates an object foo on the host that is copied to the variable d_foo on the GPU with cudaMemcpy().

Portable C++ object size and layout compatibility is affected by numerous issues, including features of the C++ language (such as virtual functions and inheritance), decisions made by the compiler authors, and hardware characteristics such as type size and alignment.

POD_structs (or Plain Old Data structs) are the only types of objects guaranteed by the C++ standard to hold the same value when the contents of the object are copied into an array of char or unsigned char with memcpy(), and then copied back into the object with memcpy(). A POD_struct is essentially a C struct. This compatibility means that a POD_struct can be copied to a remote device and used there. It also means a C++ object can be transferred with binary read() and write() operations.

The stackoverflow.com post "What are Aggregates and PODs and How/Why Are They Special?" provides the following code snippet, which makes the memcpy() capability of POD_structs clear. That post also contains a more detailed discussion of POD_structs, including the limitations the C++ standard imposes on what types of objects qualify as POD_structs.

#define N sizeof(T)
char buf[N];
T obj;                 // obj initialized to its original value
memcpy(buf, &obj, N);  // between these two calls to memcpy,
                       // obj might be modified
memcpy(&obj, buf, N);  // at this point, each subobject of obj of scalar type
                       // holds its original value

Listing Four: POD_struct memcpy() compatibility, from the Stack Overflow post.

Substituting cudaMemcpy() for memcpy() illustrates how POD_structs can be utilized by both the host and GPU.
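
Under the same assumptions as Listing Four (T is a POD_struct), the round trip through device memory might look like the following sketch; d_obj and the omitted error checking are illustrative:

T obj, *d_obj;                    // obj initialized on the host
cudaMalloc(&d_obj, sizeof(T));
cudaMemcpy(d_obj, &obj, sizeof(T), cudaMemcpyHostToDevice);  // copy the object to the GPU
// ... kernels may read or modify *d_obj here ...
cudaMemcpy(&obj, d_obj, sizeof(T), cudaMemcpyDeviceToHost);  // each scalar subobject of obj now
cudaFree(d_obj);                                             // holds the value last written on the device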

NVIDIA notes they make every effort to ensure that GPU and CPU objects have the same size and layout. In particular, the NVIDIA CUDA C Programming Guide states:

"On Windows, the CUDA compiler may produce a different memory layout, compared to the host Microsoft compiler, for a C++ object of class type T that satisfies any of the following conditions:

  • T has virtual functions or derives from a direct or indirect base class that has virtual functions;
  • T has a direct or indirect virtual base class;
  • T has multiple inheritance with more than one direct or indirect empty base class."

Bottom line: C++ object compatibility between devices, for both mapped memory and memory explicitly copied with cudaMemcpy(), requires careful attention to the limitations imposed by the C++ standard. Using the __is_pod() compiler intrinsic to check for POD_struct compatibility is recommended.

