In the previous article in this two-part series on Microsoft C++ AMP with Visual Studio 2013, I explained how to use the CPU and GPU debugging capabilities in Visual Studio 2013. In this article, I explore more-advanced debugging features, focusing on complex GPU code that uses tiling optimizations.
Debugging C++ AMP Code with Tiling Optimizations
Tiling is a common optimization technique when coding GPU kernels (that is, functions sent from the CPU for execution on the GPU). In fact, if you have experience with OpenCL or CUDA, you will already be familiar with tiling optimizations. C++ AMP also allows you to explicitly tile the computation to take advantage of the GPU memory hierarchy, using the tile_static storage class to access tile_static memory. Thus, you can partition data into smaller subsets as you would in optimized kernels written with either OpenCL or CUDA.
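To make the pattern concrete before the full example, here is a minimal sketch of the canonical tile_static load-and-synchronize idiom. This is my own illustration, not part of the article's sample; it assumes a 64x64 array_view so that the 16x16 tiles divide the extent evenly.

#include <amp.h>
#include <vector>
using namespace concurrency;

// Minimal sketch of the tile_static pattern; the complete
// matrix-multiplication example appears later in this article.
void tile_static_sketch(std::vector<int>& v)
{
    array_view<int, 2> av(64, 64, v); // assumes v holds 64*64 ints
    parallel_for_each(av.extent.tile<16, 16>(),
        [=](tiled_index<16, 16> tidx) restrict(amp)
    {
        // Per-tile storage, shared by the 256 threads of one tile.
        tile_static int cache[16][16];
        // Each thread copies one element from global memory into the tile.
        cache[tidx.local[0]][tidx.local[1]] = av[tidx.global];
        // Wait until every thread in the tile has finished its copy.
        tidx.barrier.wait();
        // From here on, any of the 256 threads can read any element of
        // cache from fast tile_static memory instead of global memory.
        av[tidx.global] = cache[tidx.local[0]][tidx.local[1]];
    });
}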
Correct use of tiling in appropriate algorithms can boost performance compared with baseline execution without tiling. However, as with any optimization technique, the resulting code is usually more difficult to understand, and debugging it presents new challenges. Fortunately, the C++ AMP GPU debugging features included in Visual Studio 2013 make it easier to understand code with tiling optimizations and to follow its execution steps.
The following lines show an example of a C++ AMP application that performs a matrix multiplication with a tile size of 256 threads (16x16). The 256 threads in each tile reuse data loaded into very fast tile_static memory instead of repeatedly fetching it from the slower global memory. The code is a different version of the typical tiled matrix multiplication that Daniel Moth uses as an example in his Microsoft presentations. I've added detailed comments to make each block easier to understand.
#include "stdafx.h" #include <amp.h> #include <iostream> using namespace concurrency; using std::vector; static const int TILESIZE = 16; void amp_matrix_tiled( vector<int>& vector_result, const vector<int>& vector_a, const vector<int>& vector_b, int M, int N, int W) { // Create the C++ AMP objects that will make the necessary transfers from CPU to GPU array_view<const int, 2> a(M, W, vector_a), b(W, N, vector_b); array_view<int, 2> result(M, N, vector_result); // Use the discard_data optimization hint to tell the runtime // to avoid copying the current contents. // of the view to a target accelerator_view because the existing content is not needed. result.discard_data(); parallel_for_each( // Define the compute domain, i.e., the set of threads that are created. // Produce a tiled_extend object (16-by-16). result.extent.tile<TILESIZE, TILESIZE>(), [=](tiled_index<TILESIZE, TILESIZE> tiled_idx) restrict(amp) { // Retrieve the row number int row = tiled_idx.local[0]; // Retrieve the column number int col = tiled_idx.local[1]; tile_static int tile_static_a[TILESIZE][TILESIZE]; tile_static int tile_static_b[TILESIZE][TILESIZE]; int sum = 0; for (int i = 0; i < a.extent[1]; i += TILESIZE) { tile_static_a[row][col] = a(tiled_idx.global[0], col + i); tile_static_b[row][col] = b(row + i, tiled_idx.global[1]); // Blocks the execution of all threads in a tile // until all threads in the tile have reached this call. // Ensures that memory accesses are visible to other threads // in the thread tile, and are executed according to program order. tiled_idx.barrier.wait(); for (int j = 0; j < TILESIZE; j++) { sum += tile_static_a[row][j] * tile_static_b[j][col]; } // Blocks the execution of all threads in a tile // until all all threads in the tile have reached this call. tiled_idx.barrier.wait(); } result[tiled_idx.global] = sum; }); // Synchronizes any modifications made to the sum array_view to its source data. result.synchronize(); } int _tmain(int argc, _TCHAR* argv[]) { const int M = 1024; const int N = 1024; const int W = 1024; // Vector vector_a => M*W vector<int> vector_a(M*W); // Vector vector_b => W*N vector<int> vector_b(W*N); // Vector vector_result => M*N vector<int> vector_result(M*N); // Populate vector_a for (unsigned int i = 0; i < vector_a.size(); i++) { vector_a[i] = i % 10; } // populate vector B for (unsigned int j = 0; j < vector_b.size(); j++) { vector_b[j] = j % 10; } amp_matrix_tiled(vector_result, vector_a, vector_b, M, N, W); // Print the results. for (unsigned int k = 0; k < vector_result.size(); k++) { std::cout << vector_result[k] << "\n"; } std::cin.get(); return 0; }
The number of GPU threads that the kernel launches makes for a challenging debugging session. Here, I'm focusing on GPU code, so make sure that your project has the necessary configuration changes (explained in Part 1 of this series) to debug GPU code. If you set a breakpoint at the line sum += tile_static_a[row][j] * tile_static_b[j][col]; in the amp_matrix_tiled function, then execute the application until the debugger reaches this breakpoint, you will be able to see the 256 launched GPU threads.
Select Debug | Windows | GPU Threads to activate the GPU Threads window (see Figure 1). Remember that the GPU software emulator allows you to work with four threads at a time, so you will see "4 threads" in the Thread Count column for the amp_matrix_tiled::I7<lambda> Location, and the other 252 threads at the same location but in another row (252 + 4 = 256). The GPU Threads window displays the total number of threads: 256.
Select Debug | Windows | Parallel Watch | Parallel Watch 1 and you will see the list of the 256 GPU threads with their Tile and Thread indexes. The Thread coordinates go from [0, 0] to [15, 15]. Notice that the Tile coordinates are always [0, 0], which means that the 256 threads have been launched for the first tile.
Figure 1: The GPU Threads window displaying information about the 256 GPU threads launched by amp_matrix_tiled.
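If the relationship between the Tile and Thread columns and the global index is not obvious, the following self-contained sketch (my own check, not part of the article's sample) verifies inside the kernel that the global index is always the tile coordinate scaled by the tile size plus the local coordinate, which is exactly how the Parallel Watch values compose:

#include <amp.h>
#include <iostream>
#include <vector>
using namespace concurrency;

static const int TS = 16;

int main()
{
    std::vector<int> data(64 * 64, 0);
    array_view<int, 2> check(64, 64, data);
    parallel_for_each(check.extent.tile<TS, TS>(),
        [=](tiled_index<TS, TS> tidx) restrict(amp)
    {
        // Parallel Watch shows tidx.local as Thread and tidx.tile as Tile.
        // The global index is the tile coordinate times the tile size
        // plus the local coordinate, in each dimension.
        int row = tidx.tile[0] * TS + tidx.local[0];
        int col = tidx.tile[1] * TS + tidx.local[1];
        // Record 1 where the reconstruction matches tidx.global.
        check[tidx.global] =
            (row == tidx.global[0] && col == tidx.global[1]) ? 1 : 0;
    });
    check.synchronize();
    bool ok = true;
    for (int v : data) { if (v != 1) ok = false; }
    std::cout << (ok ? "Index mapping verified." : "Mismatch found.") << "\n";
    return 0;
}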