CUDA C/C++ BASICS
Large-Scale Data Processing (MUCD)
GPU: 20-Series Architecture
512 Scalar Processor (SP) cores execute parallel thread instructions
16 Streaming Multiprocessors (SMs), each contains:
32 scalar processors
32 fp32 / int32 ops / clock, 16 fp64 ops / clock
4 Special Function Units (SFUs)
Shared register file (128 KB)
48 KB / 16 KB shared memory
16 KB / 48 KB L1 data cache
Execution Model: Software to Hardware
Thread → Scalar Processor: threads are executed by scalar processors
Thread Block → Multiprocessor: thread blocks are executed on multiprocessors
Thread blocks do not migrate
Several concurrent thread blocks can reside on one multiprocessor, limited by multiprocessor resources (shared memory and register file)
Grid → Device: a kernel is launched as a grid of thread blocks
Warps
A thread block consists of 32-thread warps
A warp is executed physically in parallel (SIMD) on a multiprocessor
Memory Architecture: Tesla T-20 Series
Host: CPU and chipset with host DRAM
Device: GPU with device DRAM holding global, constant, texture and local memory
Each multiprocessor has its own registers and shared memory
Constant and texture caches plus the L1 / L2 cache sit between the multiprocessors and device DRAM
What is CUDA?
CUDA Architecture
Expose GPU parallelism for general-purpose computing
Retain performance
CUDA C/C++
Based on industry-standard C/C++
Small set of extensions to enable heterogeneous programming
Straightforward APIs to manage devices, memory etc.
This session introduces CUDA C/C++
Heterogeneous Computing
Blocks
Threads
Indexing
Shared memory
__syncthreads()
Asynchronous operation
Handling errors
Managing devices
CONCEPTS
HELLO WORLD!
Heterogeneous Computing
§ Terminology:
§ Host: the CPU and its memory (host memory)
§ Device: the GPU and its memory (device memory)
Heterogeneous Computing
#include <iostream>
#include <algorithm>
using namespace std;
#define N 1024
#define RADIUS 3
#define BLOCK_SIZE 16
__global__ void stencil_1d(int *in, int *out) {
__shared__ int temp[BLOCK_SIZE + 2 * RADIUS];
int gindex = threadIdx.x + blockIdx.x * blockDim.x;
int lindex = threadIdx.x + RADIUS;
// Read input elements into shared memory
temp[lindex] = in[gindex];
if (threadIdx.x < RADIUS) {
temp[lindex - RADIUS] = in[gindex - RADIUS];
temp[lindex + BLOCK_SIZE] = in[gindex + BLOCK_SIZE];
}
// Synchronize (ensure all the data is available)
__syncthreads();
// Apply the stencil
int result = 0;
for (int offset = -RADIUS ; offset <= RADIUS ; offset++)
result += temp[lindex + offset];
// Store the result
out[gindex] = result;
}
void fill_ints(int *x, int n) {
fill_n(x, n, 1);
}
int main(void) {
int *in, *out; // host copies of a, b, c
int *d_in, *d_out; // device copies of a, b, c
int size = (N + 2*RADIUS) * sizeof(int);
// Alloc space for host copies and setup values
in = (int *)malloc(size); fill_ints(in, N + 2*RADIUS);
out = (int *)malloc(size); fill_ints(out, N + 2*RADIUS);
// Alloc space for device copies
cudaMalloc((void **)&d_in, size);
cudaMalloc((void **)&d_out, size);
// Copy to device
cudaMemcpy(d_in, in, size, cudaMemcpyHostToDevice);
cudaMemcpy(d_out, out, size, cudaMemcpyHostToDevice);
// Launch stencil_1d() kernel on GPU
stencil_1d<<<N/BLOCK_SIZE,BLOCK_SIZE>>>(d_in + RADIUS, d_out + RADIUS);
// Copy result back to host
cudaMemcpy(out, d_out, size, cudaMemcpyDeviceToHost);
// Cleanup
free(in); free(out);
cudaFree(d_in); cudaFree(d_out);
return 0;
}
The slide highlights stencil_1d() as the parallel fn, the kernel launch in main() as parallel code, and the rest of main() as serial code.
Simple Processing Flow
1. Copy input data from CPU memory to GPU memory (across the PCI bus)
2. Load GPU program and execute, caching data on chip for performance
3. Copy results from GPU memory to CPU memory
Hello World!
int main(void) {
printf("Hello World!n");
return 0;
}
Standard C that runs on the host
NVIDIA compiler (nvcc) can be used to
compile programs with no device code
Output:
$ nvcc hello_world.cu
$ a.out
Hello World!
$
Hello World! with Device Code
__global__ void mykernel(void) {
}
int main(void) {
mykernel<<<1,1>>>();
printf("Hello World!n");
return 0;
}
§ Two new syntactic elements…
Hello World! with Device Code
__global__ void mykernel(void) {
}
CUDA C/C++ keyword __global__ indicates a function that:
Runs on the device
Is called from host code
nvcc separates source code into host and device components
Device functions (e.g. mykernel()) processed by NVIDIA compiler
Host functions (e.g. main()) processed by standard host compiler (gcc, cl.exe)
Hello World! with Device Code
mykernel<<<1,1>>>();
Triple angle brackets mark a call from host code to device code
Also called a “kernel launch”
We’ll return to the parameters (1,1) in a moment
That’s all that is required to execute a function on the GPU!
Hello World! with Device Code
__global__ void mykernel(void) {
}
int main(void) {
mykernel<<<1,1>>>();
printf("Hello World!n");
return 0;
}
mykernel() does nothing, somewhat
anticlimactic!
Output:
$ nvcc hello.cu
$ a.out
Hello World!
$
Parallel Programming in CUDA C/C++
§ But wait… GPU computing is about massive
parallelism!
§ We need a more interesting example…
§ We’ll start by adding two integers and build up
to vector addition
Addition on the Device
A simple kernel to add two integers
__global__ void add(int *a, int *b, int *c) {
*c = *a + *b;
}
As before __global__ is a CUDA C/C++ keyword meaning
add() will execute on the device
add() will be called from the host
Addition on the Device
Note that we use pointers for the variables
__global__ void add(int *a, int *b, int *c) {
*c = *a + *b;
}
add() runs on the device, so a, b and c must point to device
memory
We need to allocate memory on the GPU
Memory Management
Host and device memory are separate entities
Device pointers point to GPU memory
May be passed to/from host code
May not be dereferenced in host code
Host pointers point to CPU memory
May be passed to/from device code
May not be dereferenced in device code
Simple CUDA API for handling device memory
cudaMalloc(), cudaFree(), cudaMemcpy()
Similar to the C equivalents malloc(), free(), memcpy()
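As a quick hedged illustration of that correspondence (not on the slide; buf, d_buf and size are made-up names), the host and device calls line up like this:
size_t size = 1024 * sizeof(int);
int *buf = (int *)malloc(size); // host allocation
int *d_buf = NULL;
cudaMalloc((void **)&d_buf, size); // device allocation
cudaMemcpy(d_buf, buf, size, cudaMemcpyHostToDevice); // host-to-device copy
cudaMemcpy(buf, d_buf, size, cudaMemcpyDeviceToHost); // device-to-host copy
cudaFree(d_buf); // device free
free(buf); // host free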
Addition on the Device: add()
Returning to our add() kernel
__global__ void add(int *a, int *b, int *c) {
*c = *a + *b;
}
Let’s take a look at main()…
Addition on the Device: main()
int main(void) {
int a, b, c; // host copies of a, b, c
int *d_a, *d_b, *d_c; // device copies of a, b, c
int size = sizeof(int);
// Allocate space for device copies of a, b, c
cudaMalloc((void **)&d_a, size);
cudaMalloc((void **)&d_b, size);
cudaMalloc((void **)&d_c, size);
// Setup input values
a = 2;
b = 7;
Addition on the Device: main()
// Copy inputs to device
cudaMemcpy(d_a, &a, size, cudaMemcpyHostToDevice);
cudaMemcpy(d_b, &b, size, cudaMemcpyHostToDevice);
// Launch add() kernel on GPU
add<<<1,1>>>(d_a, d_b, d_c);
// Copy result back to host
cudaMemcpy(&c, d_c, size, cudaMemcpyDeviceToHost);
// Cleanup
cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
return 0;
}
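Not shown on the slide: once c has been copied back, a one-line sanity check can be added just before the cleanup:
printf("%d + %d = %d\n", a, b, c); // expected: 2 + 7 = 9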
RUNNING IN PARALLEL
Moving to Parallel
GPU computing is about massive parallelism
So how do we run code in parallel on the device?
add<<< 1, 1 >>>();
add<<< N, 1 >>>();
Instead of executing add() once, execute N times in parallel
Vector Addition on the Device
With add() running in parallel we can do vector addition
Terminology: each parallel invocation of add() is referred to as a block
The set of blocks is referred to as a grid
Each invocation can refer to its block index using blockIdx.x
__global__ void add(int *a, int *b, int *c) {
c[blockIdx.x] = a[blockIdx.x] + b[blockIdx.x];
}
By using blockIdx.x to index into the array, each block handles a
different index
Vector Addition on the Device
__global__ void add(int *a, int *b, int *c) {
c[blockIdx.x] = a[blockIdx.x] + b[blockIdx.x];
}
On the device, each block can execute in parallel:
Block 0: c[0] = a[0] + b[0];
Block 1: c[1] = a[1] + b[1];
Block 2: c[2] = a[2] + b[2];
Block 3: c[3] = a[3] + b[3];
Vector Addition on the Device: add()
Returning to our parallelized add() kernel
__global__ void add(int *a, int *b, int *c) {
c[blockIdx.x] = a[blockIdx.x] + b[blockIdx.x];
}
Let’s take a look at main()…
Vector Addition on the Device: main()
#define N 512
int main(void) {
int *a, *b, *c; // host copies of a, b, c
int *d_a, *d_b, *d_c; // device copies of a, b, c
int size = N * sizeof(int);
// Alloc space for device copies of a, b, c
cudaMalloc((void **)&d_a, size);
cudaMalloc((void **)&d_b, size);
cudaMalloc((void **)&d_c, size);
// Alloc space for host copies of a, b, c and setup input values
a = (int *)malloc(size); random_ints(a, N);
b = (int *)malloc(size); random_ints(b, N);
c = (int *)malloc(size);
Vector Addition on the Device: main()
// Copy inputs to device
cudaMemcpy(d_a, a, size, cudaMemcpyHostToDevice);
cudaMemcpy(d_b, b, size, cudaMemcpyHostToDevice);
// Launch add() kernel on GPU with N blocks
add<<<N,1>>>(d_a, d_b, d_c);
// Copy result back to host
cudaMemcpy(c, d_c, size, cudaMemcpyDeviceToHost);
// Cleanup
free(a); free(b); free(c);
cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
return 0;
}
Review (1 of 2)
Difference between host and device
Host CPU
Device GPU
Using __global__ to declare a function as device code
Executes on the device
Called from the host
Passing parameters from host code to a device function
Review (2 of 2)
Basic device memory management
cudaMalloc()
cudaMemcpy()
cudaFree()
Launching parallel kernels
Launch N copies of add() with add<<<N,1>>>(…);
Use blockIdx.x to access block index
INTRODUCING THREADS
CUDA Threads
Terminology: a block can be split into parallel threads
Let’s change add() to use parallel threads instead of parallel blocks
We use threadIdx.x instead of blockIdx.x
Need to make one change in main()…
__global__ void add(int *a, int *b, int *c) {
c[threadIdx.x] = a[threadIdx.x] + b[threadIdx.x];
}
Vector Addition Using Threads: main()
#define N 512
int main(void) {
int *a, *b, *c; // host copies of a, b, c
int *d_a, *d_b, *d_c; // device copies of a, b, c
int size = N * sizeof(int);
// Alloc space for device copies of a, b, c
cudaMalloc((void **)&d_a, size);
cudaMalloc((void **)&d_b, size);
cudaMalloc((void **)&d_c, size);
// Alloc space for host copies of a, b, c and setup input values
a = (int *)malloc(size); random_ints(a, N);
b = (int *)malloc(size); random_ints(b, N);
c = (int *)malloc(size);
Vector Addition Using Threads: main()
// Copy inputs to device
cudaMemcpy(d_a, a, size, cudaMemcpyHostToDevice);
cudaMemcpy(d_b, b, size, cudaMemcpyHostToDevice);
// Launch add() kernel on GPU with N threads
add<<<1,N>>>(d_a, d_b, d_c);
// Copy result back to host
cudaMemcpy(c, d_c, size, cudaMemcpyDeviceToHost);
// Cleanup
free(a); free(b); free(c);
cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
return 0;
}
COMBINING THREADS AND BLOCKS
Combining Blocks and Threads
We’ve seen parallel vector addition using:
Many blocks with one thread each
One block with many threads
Let’s adapt vector addition to use both blocks and threads
Why? We’ll come to that…
First let’s discuss data indexing…
Indexing Arrays with Blocks and Threads
No longer as simple as using blockIdx.x or threadIdx.x alone
Consider indexing an array with one element per thread (8 threads/block): thread indices 0 through 7 repeat in each block, for blockIdx.x = 0, 1, 2, 3
With M threads/block, a unique index for each thread is given by:
int index = threadIdx.x + blockIdx.x * M;
Indexing Arrays: Example
Which thread will operate on the element at linear index 21 (highlighted in red on the slide, with M = 8 threads per block)?
threadIdx.x = 5
blockIdx.x = 2
int index = threadIdx.x + blockIdx.x * M;
          = 5 + 2 * 8;
          = 21;
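A tiny host-side sketch (plain C loops, not from the slide) that enumerates the same mapping for 4 blocks of 8 threads:
for (int block = 0; block < 4; block++)
    for (int thread = 0; thread < 8; thread++)
        printf("blockIdx.x=%d threadIdx.x=%d -> index %d\n", block, thread, thread + block * 8);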
Vector Addition with Blocks and Threads
What changes need to be made in main()?
Use the built-in variable blockDim.x for threads per block
int index = threadIdx.x + blockIdx.x * blockDim.x;
Combined version of add() to use parallel threads and parallel
blocks
__global__ void add(int *a, int *b, int *c) {
int index = threadIdx.x + blockIdx.x * blockDim.x;
c[index] = a[index] + b[index];
}
Addition with Blocks and Threads: main()
#define N (2048*2048)
#define THREADS_PER_BLOCK 512
int main(void) {
int *a, *b, *c; // host copies of a, b, c
int *d_a, *d_b, *d_c; // device copies of a, b, c
int size = N * sizeof(int);
// Alloc space for device copies of a, b, c
cudaMalloc((void **)&d_a, size);
cudaMalloc((void **)&d_b, size);
cudaMalloc((void **)&d_c, size);
// Alloc space for host copies of a, b, c and setup input values
a = (int *)malloc(size); random_ints(a, N);
b = (int *)malloc(size); random_ints(b, N);
c = (int *)malloc(size);
Addition with Blocks and Threads: main()
// Copy inputs to device
cudaMemcpy(d_a, a, size, cudaMemcpyHostToDevice);
cudaMemcpy(d_b, b, size, cudaMemcpyHostToDevice);
// Launch add() kernel on GPU
add<<<N/THREADS_PER_BLOCK,THREADS_PER_BLOCK>>>(d_a, d_b, d_c);
// Copy result back to host
cudaMemcpy(c, d_c, size, cudaMemcpyDeviceToHost);
// Cleanup
free(a); free(b); free(c);
cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
return 0;
}
Handling Arbitrary Vector Sizes
Update the kernel launch:
add<<<(N + M-1) / M,M>>>(d_a, d_b, d_c, N);
Typical problems are not friendly multiples of blockDim.x
Avoid accessing beyond the end of the arrays:
__global__ void add(int *a, int *b, int *c, int n) {
int index = threadIdx.x + blockIdx.x * blockDim.x;
if (index < n)
c[index] = a[index] + b[index];
}
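A minimal host-side sketch of the updated launch, assuming M is the block size chosen by the programmer (e.g. THREADS_PER_BLOCK from the earlier example) and that d_a, d_b, d_c and N are set up as before:
int M = THREADS_PER_BLOCK; // threads per block
int blocks = (N + M - 1) / M; // round up so every element is covered
add<<<blocks, M>>>(d_a, d_b, d_c, N); // the kernel's if (index < n) guard handles the final partial block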
Why Bother with Threads?
Threads seem unnecessary
They add a level of complexity
What do we gain?
Unlike parallel blocks, threads have mechanisms to:
Communicate
Synchronize
To look closer, we need a new example…
COOPERATING THREADS
1D Stencil
Consider applying a 1D stencil to a 1D array of elements
Each output element is the sum of input elements within a radius
If radius is 3, then each output element is the sum of 7 input
elements:
Implementing Within a Block
Each thread processes one output element
blockDim.x elements per block
Input elements are read several times
With radius 3, each input element is read seven times
Sharing Data Between Threads
Terminology: within a block, threads share data via shared memory
Extremely fast on-chip memory, user-managed
Declare using __shared__, allocated per block
Data is not visible to threads in other blocks
Implementing With Shared Memory
Cache data in shared memory
Read (blockDim.x + 2 * radius) input elements from global memory to
shared memory
Compute blockDim.x output elements
Write blockDim.x output elements to global memory
Each block needs a halo of radius elements at each boundary
Stencil Kernel
__global__ void stencil_1d(int *in, int *out) {
__shared__ int temp[BLOCK_SIZE + 2 * RADIUS];
int gindex = threadIdx.x + blockIdx.x * blockDim.x + RADIUS;
int lindex = threadIdx.x + RADIUS;
// Read input elements into shared memory
temp[lindex] = in[gindex];
if (threadIdx.x < RADIUS) {
temp[lindex - RADIUS] = in[gindex - RADIUS];
temp[lindex + BLOCK_SIZE] =
in[gindex + BLOCK_SIZE];
}
Stencil Kernel
// Apply the stencil
int result = 0;
for (int offset = -RADIUS ; offset <= RADIUS ; offset++)
result += temp[lindex + offset];
// Store the result
out[gindex] = result;
}
Data Race!
§ The stencil example will not work…
§ Suppose thread 15 reads the halo before thread 0 has fetched it…
temp[lindex] = in[gindex]; // thread 15 stores at temp[18]
if (threadIdx.x < RADIUS) { // skipped for thread 15, since threadIdx.x > RADIUS
temp[lindex - RADIUS] = in[gindex - RADIUS];
temp[lindex + BLOCK_SIZE] = in[gindex + BLOCK_SIZE]; // thread 0 writes the right halo, including temp[19]
}
int result = 0;
result += temp[lindex + 1]; // thread 15 loads temp[19], which thread 0 may not have written yet
__syncthreads()
void __syncthreads();
Synchronizes all threads within a block
Used to prevent RAW / WAR / WAW hazards
All threads must reach the barrier
In conditional code, the condition must be uniform across the block
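As a second illustration of the barrier (a minimal sketch, not from the slides, assuming the block size equals BLOCK_SIZE and is a power of two), a per-block sum in shared memory relies on __syncthreads() between every read-after-write step:
__global__ void block_sum(int *in, int *out) {
__shared__ int partial[BLOCK_SIZE];
int tid = threadIdx.x;
partial[tid] = in[threadIdx.x + blockIdx.x * blockDim.x];
__syncthreads(); // all loads into shared memory complete before any thread reads them
for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
if (tid < stride)
partial[tid] += partial[tid + stride];
__syncthreads(); // barrier sits outside the if, so the condition is uniform across the block
}
if (tid == 0)
out[blockIdx.x] = partial[0]; // one result per block
}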
Stencil Kernel
__global__ void stencil_1d(int *in, int *out) {
__shared__ int temp[BLOCK_SIZE + 2 * RADIUS];
int gindex = threadIdx.x + blockIdx.x * blockDim.x + RADIUS;
int lindex = threadIdx.x + RADIUS;
// Read input elements into shared memory
temp[lindex] = in[gindex];
if (threadIdx.x < RADIUS) {
temp[lindex - RADIUS] = in[gindex - RADIUS];
temp[lindex + BLOCK_SIZE] = in[gindex + BLOCK_SIZE];
}
// Synchronize (ensure all the data is available)
__syncthreads();
Stencil Kernel
// Apply the stencil
int result = 0;
for (int offset = -RADIUS ; offset <= RADIUS ; offset++)
result += temp[lindex + offset];
// Store the result
out[gindex] = result;
}
Review (1 of 2)
Launching parallel threads
Launch N blocks with M threads per block with kernel<<<N,M>>>(…);
Use blockIdx.x to access block index within grid
Use threadIdx.x to access thread index within block
Allocate elements to threads:
int index = threadIdx.x + blockIdx.x * blockDim.x;
Review (2 of 2)
Use __shared__ to declare a variable/array in shared memory
Data is shared between threads in a block
Not visible to threads in other blocks
Use __syncthreads() as a barrier
Use to prevent data hazards
MANAGING THE DEVICE
Coordinating Host & Device
Kernel launches are asynchronous
Control returns to the CPU immediately
CPU needs to synchronize before consuming the results
cudaMemcpy(): blocks the CPU until the copy is complete; the copy begins when all preceding CUDA calls have completed
cudaMemcpyAsync(): asynchronous, does not block the CPU
cudaDeviceSynchronize(): blocks the CPU until all preceding CUDA calls have completed
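A short sketch of the typical pattern (reusing names from the vector-addition example; the explicit synchronize is optional here because cudaMemcpy() also waits):
add<<<N/THREADS_PER_BLOCK,THREADS_PER_BLOCK>>>(d_a, d_b, d_c); // returns to the CPU immediately
cudaDeviceSynchronize(); // block until the kernel has finished
cudaMemcpy(c, d_c, size, cudaMemcpyDeviceToHost); // safe: results are complete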
Reporting Errors
All CUDA API calls return an error code (cudaError_t)
Error in the API call itself
OR
Error in an earlier asynchronous operation (e.g. kernel)
Get the error code for the last error:
cudaError_t cudaGetLastError(void)
Get a string to describe the error:
const char *cudaGetErrorString(cudaError_t)
printf("%s\n", cudaGetErrorString(cudaGetLastError()));
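A common wrapper pattern, shown here as a hedged sketch (CUDA_CHECK is a made-up macro name, not part of the CUDA API):
#define CUDA_CHECK(call) do { \
    cudaError_t err = (call); \
    if (err != cudaSuccess) \
        printf("CUDA error at %s:%d: %s\n", __FILE__, __LINE__, cudaGetErrorString(err)); \
} while (0)

CUDA_CHECK(cudaMemcpy(d_a, a, size, cudaMemcpyHostToDevice));
add<<<N,1>>>(d_a, d_b, d_c);
CUDA_CHECK(cudaGetLastError()); // catches launch-configuration errors
CUDA_CHECK(cudaDeviceSynchronize()); // catches errors raised while the kernel runs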
Device Management
Application can query and select GPUs
cudaGetDeviceCount(int *count)
cudaSetDevice(int device)
cudaGetDevice(int *device)
cudaGetDeviceProperties(cudaDeviceProp *prop, int device)
Multiple threads can share a device
A single thread can manage multiple devices
cudaSetDevice(i) to select current device
cudaMemcpy(…) for peer-to-peer copies
requires OS and device support
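A minimal sketch that uses only the calls listed above to enumerate the GPUs and make device 0 current:
int main(void) {
int count = 0;
cudaGetDeviceCount(&count);
for (int i = 0; i < count; i++) {
cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, i);
printf("Device %d: %s, compute capability %d.%d\n", i, prop.name, prop.major, prop.minor);
}
cudaSetDevice(0); // make device 0 current for subsequent CUDA calls
return 0;
}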
Introduction to CUDA C/C++
What have we learned?
Write and launch CUDA C/C++ kernels
__global__, blockIdx.x, threadIdx.x, <<<>>>
Manage GPU memory
cudaMalloc(), cudaMemcpy(), cudaFree()
Manage communication and synchronization
__shared__, __syncthreads()
cudaMemcpy() vs cudaMemcpyAsync(), cudaDeviceSynchronize()
Compute Capability
The compute capability of a device describes its architecture, e.g.
Number of registers
Sizes of memories
Features & capabilities
The following presentations concentrate on Fermi devices
Compute Capability >= 2.0
Selected features by compute capability (see the CUDA C Programming Guide for the complete list):
Compute capability 1.0: fundamental CUDA support (Tesla 870)
Compute capability 1.3: double precision, improved memory accesses, atomics (Tesla 10-series)
Compute capability 2.0: caches, fused multiply-add, 3D grids, surfaces, ECC, P2P, concurrent kernels/copies, function pointers, recursion (Tesla 20-series)
IDs and Dimensions
A kernel is launched as a grid of
blocks of threads
blockIdx and threadIdx
are 3D
We showed only one
dimension (x)
Built-in variables:
threadIdx
blockIdx
blockDim
gridDim
(The slide's figure shows a device running Grid 1 as a 3 x 2 arrangement of blocks; Block (1,1,0) is expanded into a 5 x 3 arrangement of threads, each labelled with its (x,y,z) indices.)
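A minimal 2D sketch (not from the slides; scale2d, d_m, width and height are made-up names) showing a dim3 launch configuration and 2D indexing of a row-major matrix:
__global__ void scale2d(float *m, float alpha, int width, int height) {
int col = threadIdx.x + blockIdx.x * blockDim.x;
int row = threadIdx.y + blockIdx.y * blockDim.y;
if (col < width && row < height)
m[row * width + col] *= alpha; // row-major linear index
}

dim3 threads(16, 16); // 16 x 16 = 256 threads per block
dim3 blocks((width + threads.x - 1) / threads.x, (height + threads.y - 1) / threads.y);
scale2d<<<blocks, threads>>>(d_m, 2.0f, width, height);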
Topics we skipped
We skipped some details, you can learn more:
CUDA Programming Guide
CUDA Zone – tools, training, webinars and more
http://developer.nvidia.com/cuda
Need a quick primer for later:
Multi-dimensional indexing
Textures
Questions?