Dr. Dobb's is part of the Informa Tech Division of Informa PLC
CUDA, Supercomputing for the Masses: Part 1


As a scientist at Los Alamos National Laboratory in the 1980s, I had the pleasure of working with the massively parallel 65,536-processor Thinking Machines supercomputers. CUDA has proved to be a natural framework to again start working in a modern massively parallel (i.e., highly threaded) environment. Performance is clearly there. One of my production codes, now written in CUDA and running on NVIDIA GPUs, shows both linear scaling and a nearly two orders of magnitude speed increase over a 2.6-GHz quad-core Opteron system.

CUDA-enabled graphics processors operate as co-processors within the host computer. This means that each GPU is considered to have its own memory and processing elements that are separate from the host computer. To perform useful work, data must be transferred between the memory space of the host computer and the CUDA device(s). For this reason, performance results must include IO time to be informative. Colleagues have referred to such measurements as "Honest Flops" because they more accurately reflect the performance applications will deliver in production.
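As a sketch of what measuring transfer overhead might look like, the snippet below times a host-to-device copy with CUDA events. This is my own illustration, not code from the article; the array size is arbitrary, and the program needs a CUDA-capable GPU (or the emulator) to run.

```cuda
// timeTransfer.cu -- sketch: time a host-to-device copy so IO cost
// can be included in performance numbers. Illustrative only.
#include <stdio.h>
#include <stdlib.h>
#include <cuda.h>

int main(void)
{
   const int N = 1 << 20;                       // 1M floats (arbitrary size)
   float *a_h = (float *)malloc(sizeof(float)*N);
   float *a_d;
   cudaMalloc((void **) &a_d, sizeof(float)*N);

   // CUDA events give GPU-side timestamps around the transfer
   cudaEvent_t start, stop;
   cudaEventCreate(&start);
   cudaEventCreate(&stop);

   cudaEventRecord(start, 0);
   cudaMemcpy(a_d, a_h, sizeof(float)*N, cudaMemcpyHostToDevice);
   cudaEventRecord(stop, 0);
   cudaEventSynchronize(stop);                  // wait until the copy finishes

   float ms = 0.f;
   cudaEventElapsedTime(&ms, start, stop);
   printf("Host-to-device copy of %d floats: %f ms\n", N, ms);

   cudaEventDestroy(start); cudaEventDestroy(stop);
   cudaFree(a_d); free(a_h);
   return 0;
}
```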

I claim that a one or two orders of magnitude performance increase over existing technology is a disruptive change that can dramatically alter some aspects of computing. For example, computational tasks that previously would have taken a year can now complete in a few days; hour-long computations suddenly become interactive because they can be completed in seconds with the new technology; and previously intractable real-time processing tasks now become tractable. Finally, lucrative opportunities can present themselves for consultants and engineers with the right skill set and capabilities to write highly-threaded (or massively parallel) software. What about you? How can this type of computing capability benefit your career, applications, or real-time processing needs?

Getting started costs nothing and is as easy as downloading CUDA from the CUDA Zone homepage (look for "Get CUDA"). After that, follow the installation instructions for your particular operating system. You don't even need a graphics processor because you can start working right away by using the software emulator to run on your current laptop or workstation. Of course, much better performance will be achieved by running with a CUDA-enabled GPU. Perhaps your computer already has one. Check out the "CUDA-enabled GPUs" link on the CUDA Zone homepage to see. (A CUDA-enabled GPU includes shared on-chip memory and thread management.)

If purchasing a new graphics processor card, I suggest following this article series because I will discuss how various hardware characteristics (such as memory bandwidth, number of registers, atomic operations, and so on) will affect application performance, which will assist you in selecting the appropriate hardware for your application. Also, the CUDA Zone forums provide a wealth of information on all things CUDA, including discussions about what hardware to purchase.

Once installed, the CUDA Toolkit provides a reasonable set of tools for C language application development. This includes:

  • The nvcc C compiler
  • CUDA FFT and BLAS libraries for the GPU
  • A profiler
  • An alpha version (as of March 2008) of the gdb debugger for the GPU
  • CUDA runtime driver (now also available in the standard NVIDIA GPU driver)
  • CUDA programming manual

The nvcc C compiler does most of the work in converting C code into an executable that will run on a GPU or the emulator. Happily, assembly-language programming is not required to achieve high performance. Future articles will discuss working with CUDA from other high-level languages including C++, FORTRAN, and Python. I assume that you're familiar with C/C++. No previous parallel programming or CUDA experience is required. This is consistent with the existing CUDA documentation.

Creating and running a CUDA C language program follows the same workflow as other C programming environments. Explicit build and run instructions for Windows and Linux environments are in the CUDA documentation, but simply stated they are:

  1. Create or edit the CUDA program with your favorite editor. Note: CUDA C language programs have the suffix ".cu".
  2. Compile the program with nvcc to create the executable. (NVIDIA provides sane makefiles with the examples. Generally all you need to type is "make" to build for a CUDA device or "make emu=1" to build for the emulator.)
  3. Run the executable.
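The three steps above, done by hand rather than through NVIDIA's makefiles, might look like the following session. (The file name matches Listing One below; the -deviceemu flag selects the emulator build, which is what the "emu=1" makefile target uses.)

```shell
# 1-2. Compile for a CUDA device:
nvcc -o moveArrays moveArrays.cu

# Or compile for the software emulator (no GPU required):
nvcc -deviceemu -o moveArrays_emu moveArrays.cu

# 3. Run the executable:
./moveArrays
```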

Listing One is a simple CUDA program to get you started. It is nothing more than a program that calls the CUDA API to move data to and from the CUDA device. Nothing new is added that might cause confusion in learning how to use the tools to build and run a CUDA program. In the next article, I will discuss what is going on and start using the CUDA device to perform some work.

Listing One

// moveArrays.cu
//
// demonstrates CUDA interface to data allocation on device (GPU)
// and data movement between host (CPU) and device.

#include <stdio.h>
#include <stdlib.h>   // for malloc/free
#include <assert.h>
#include <cuda.h>
int main(void)
{
   float *a_h, *b_h;     // pointers to host memory
   float *a_d, *b_d;     // pointers to device memory
   int N = 14;
   int i;
   // allocate arrays on host
   a_h = (float *)malloc(sizeof(float)*N);
   b_h = (float *)malloc(sizeof(float)*N);
   // allocate arrays on device
   cudaMalloc((void **) &a_d, sizeof(float)*N);
   cudaMalloc((void **) &b_d, sizeof(float)*N);
   // initialize host data
   for (i=0; i<N; i++) {
      a_h[i] = 10.f+i;
      b_h[i] = 0.f;
   }
   // send data from host to device: a_h to a_d 
   cudaMemcpy(a_d, a_h, sizeof(float)*N, cudaMemcpyHostToDevice);
   // copy data within device: a_d to b_d
   cudaMemcpy(b_d, a_d, sizeof(float)*N, cudaMemcpyDeviceToDevice);
   // retrieve data from device: b_d to b_h
   cudaMemcpy(b_h, b_d, sizeof(float)*N, cudaMemcpyDeviceToHost);
   // check result
   for (i=0; i<N; i++)
      assert(a_h[i] == b_h[i]);
   // cleanup
   free(a_h); free(b_h); 
   cudaFree(a_d); cudaFree(b_d);
   return 0;
}
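One thing Listing One omits for brevity is error checking. Every CUDA API call returns a status code, and checking it saves much debugging pain later. The helper below is my own sketch (the name checkErr is hypothetical, not part of the toolkit), shown with a minimal allocation to illustrate the pattern:

```cuda
// checkErr.cu -- sketch of minimal CUDA error checking (my own helper,
// not part of the CUDA Toolkit).
#include <stdio.h>
#include <stdlib.h>
#include <cuda.h>

// Hypothetical helper: print a readable message and abort on any CUDA error.
static void checkErr(cudaError_t err, const char *msg)
{
   if (err != cudaSuccess) {
      fprintf(stderr, "%s failed: %s\n", msg, cudaGetErrorString(err));
      exit(EXIT_FAILURE);
   }
}

int main(void)
{
   float *a_d;
   checkErr(cudaMalloc((void **) &a_d, sizeof(float)*16), "cudaMalloc");
   checkErr(cudaFree(a_d), "cudaFree");
   return 0;
}
```

The same wrapper can be applied to each cudaMemcpy call in Listing One.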

Give it a try and play around with the development tools. A quick note to newbies: You can use printf statements to see what is happening on the GPU when running under the emulator (build the executable with make emu=1). Also, feel free to try out the alpha version of the debugger.


Rob Farber is a senior scientist at Pacific Northwest National Laboratory. He has worked in massively parallel computing at several national laboratories and as co-founder of several startups. He can be reached at [email protected].
