CUDA C/C++ BASICS
NVIDIA Corporation
© NVIDIA 2013
What is CUDA?
• CUDA Architecture
– Expose GPU parallelism for general-purpose computing
– Retain performance
• CUDA C/C++
– Based on industry-standard C/C++
– Small set of extensions to enable heterogeneous
programming
– Straightforward APIs to manage devices, memory etc.
• This session introduces CUDA C/C++
© NVIDIA 2013
Introduction to CUDA C/C++
• What will you learn in this session?
– Start from “Hello World!”
– Write and launch CUDA C/C++ kernels
– Manage GPU memory
– Manage communication and synchronization
© NVIDIA 2013
Prerequisites
• You (probably) need experience with C or C++
• You don’t need GPU experience
• You don’t need parallel programming
experience
• You don’t need graphics experience
© NVIDIA 2013
Heterogeneous Computing
Blocks
Threads
Indexing
Shared memory
__syncthreads()
Asynchronous operation
Handling errors
Managing devices
CONCEPTS
© NVIDIA 2013
HELLO WORLD!
Heterogeneous Computing
Blocks
Threads
Indexing
Shared memory
__syncthreads()
Asynchronous operation
Handling errors
Managing devices
CONCEPTS
Heterogeneous Computing
 Terminology:
 Host The CPU and its memory (host memory)
 Device The GPU and its memory (device memory)
Host Device
© NVIDIA 2013
Heterogeneous Computing
#include <iostream>
#include <algorithm>
using namespace std;
#define N 1024
#define RADIUS 3
#define BLOCK_SIZE 16
__global__ void stencil_1d(int *in, int *out) {
  __shared__ int temp[BLOCK_SIZE + 2 * RADIUS];
  int gindex = threadIdx.x + blockIdx.x * blockDim.x;
  int lindex = threadIdx.x + RADIUS;

  // Read input elements into shared memory
  temp[lindex] = in[gindex];
  if (threadIdx.x < RADIUS) {
    temp[lindex - RADIUS] = in[gindex - RADIUS];
    temp[lindex + BLOCK_SIZE] = in[gindex + BLOCK_SIZE];
  }

  // Synchronize (ensure all the data is available)
  __syncthreads();

  // Apply the stencil
  int result = 0;
  for (int offset = -RADIUS; offset <= RADIUS; offset++)
    result += temp[lindex + offset];

  // Store the result
  out[gindex] = result;
}

void fill_ints(int *x, int n) {
  fill_n(x, n, 1);
}

int main(void) {
  int *in, *out;      // host copies of in, out
  int *d_in, *d_out;  // device copies of in, out
  int size = (N + 2*RADIUS) * sizeof(int);

  // Alloc space for host copies and set up values
  in = (int *)malloc(size); fill_ints(in, N + 2*RADIUS);
  out = (int *)malloc(size); fill_ints(out, N + 2*RADIUS);

  // Alloc space for device copies
  cudaMalloc((void **)&d_in, size);
  cudaMalloc((void **)&d_out, size);

  // Copy to device
  cudaMemcpy(d_in, in, size, cudaMemcpyHostToDevice);
  cudaMemcpy(d_out, out, size, cudaMemcpyHostToDevice);

  // Launch stencil_1d() kernel on GPU
  stencil_1d<<<N/BLOCK_SIZE,BLOCK_SIZE>>>(d_in + RADIUS, d_out + RADIUS);

  // Copy result back to host
  cudaMemcpy(out, d_out, size, cudaMemcpyDeviceToHost);

  // Cleanup
  free(in); free(out);
  cudaFree(d_in); cudaFree(d_out);
  return 0;
}
[Slide annotations: stencil_1d() is the parallel function; its launch is the parallel code; the surrounding host code in main() is serial code]
© NVIDIA 2013
Simple Processing Flow
1. Copy input data from CPU memory
to GPU memory
PCI Bus
© NVIDIA 2013
Simple Processing Flow
1. Copy input data from CPU memory
to GPU memory
2. Load GPU program and execute,
caching data on chip for
performance
© NVIDIA 2013
PCI Bus
Simple Processing Flow
1. Copy input data from CPU memory
to GPU memory
2. Load GPU program and execute,
caching data on chip for
performance
3. Copy results from GPU memory to
CPU memory
© NVIDIA 2013
PCI Bus
Hello World!
int main(void) {
printf("Hello World!\n");
return 0;
}
Standard C that runs on the host
NVIDIA compiler (nvcc) can be used
to compile programs with no device
code
Output:
$ nvcc hello_world.cu
$ a.out
Hello World!
$
© NVIDIA 2013
Hello World! with Device Code
__global__ void mykernel(void) {
}
int main(void) {
mykernel<<<1,1>>>();
printf("Hello World!\n");
return 0;
}
 Two new syntactic elements…
© NVIDIA 2013
Hello World! with Device Code
__global__ void mykernel(void) {
}
• CUDA C/C++ keyword __global__ indicates a function that:
– Runs on the device
– Is called from host code
• nvcc separates source code into host and device
components
– Device functions (e.g. mykernel()) processed by NVIDIA compiler
– Host functions (e.g. main()) processed by standard host compiler
• gcc, cl.exe
© NVIDIA 2013
Hello World! with Device Code
mykernel<<<1,1>>>();
• Triple angle brackets mark a call from host
code to device code
– Also called a “kernel launch”
– We’ll return to the parameters (1,1) in a moment
• That’s all that is required to execute a function
on the GPU!
© NVIDIA 2013
Hello World! with Device Code
__global__ void mykernel(void){
}
int main(void) {
mykernel<<<1,1>>>();
printf("Hello World!\n");
return 0;
}
• mykernel() does nothing,
somewhat anticlimactic!
Output:
$ nvcc hello.cu
$ a.out
Hello World!
$
© NVIDIA 2013
Parallel Programming in CUDA C/C++
• But wait… GPU computing is about
massive parallelism!
• We need a more interesting example…
• We’ll start by adding two integers and
build up to vector addition
[Diagram: two inputs a and b added to produce c]
© NVIDIA 2013
Addition on the Device
• A simple kernel to add two integers
__global__ void add(int *a, int *b, int *c) {
*c = *a + *b;
}
• As before __global__ is a CUDA C/C++ keyword
meaning
– add() will execute on the device
– add() will be called from the host
© NVIDIA 2013
Addition on the Device
• Note that we use pointers for the variables
__global__ void add(int *a, int *b, int *c) {
*c = *a + *b;
}
• add() runs on the device, so a, b and c must
point to device memory
• We need to allocate memory on the GPU
© NVIDIA 2013
Memory Management
• Host and device memory are separate entities
– Device pointers point to GPU memory
May be passed to/from host code
May not be dereferenced in host code
– Host pointers point to CPU memory
May be passed to/from device code
May not be dereferenced in device code
• Simple CUDA API for handling device memory
– cudaMalloc(), cudaFree(), cudaMemcpy()
– Similar to the C equivalents malloc(), free(), memcpy()
© NVIDIA 2013
Addition on the Device: add()
• Returning to our add() kernel
__global__ void add(int *a, int *b, int *c) {
*c = *a + *b;
}
• Let’s take a look at main()…
© NVIDIA 2013
Addition on the Device: main()
int main(void) {
int a, b, c; // host copies of a, b, c
int *d_a, *d_b, *d_c; // device copies of a, b, c
int size = sizeof(int);
// Allocate space for device copies of a, b, c
cudaMalloc((void **)&d_a, size);
cudaMalloc((void **)&d_b, size);
cudaMalloc((void **)&d_c, size);
// Setup input values
a = 2;
b = 7;
© NVIDIA 2013
Addition on the Device: main()
// Copy inputs to device
cudaMemcpy(d_a, &a, size, cudaMemcpyHostToDevice);
cudaMemcpy(d_b, &b, size, cudaMemcpyHostToDevice);
// Launch add() kernel on GPU
add<<<1,1>>>(d_a, d_b, d_c);
// Copy result back to host
cudaMemcpy(&c, d_c, size, cudaMemcpyDeviceToHost);
// Cleanup
cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
return 0;
}
© NVIDIA 2013
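For reference, here is a minimal, self-contained version of this program assembled from the two slides above (the printf result check is an addition, not part of the original deck; error handling is omitted):

#include <stdio.h>

__global__ void add(int *a, int *b, int *c) {
  *c = *a + *b;
}

int main(void) {
  int a = 2, b = 7, c;      // host copies of a, b, c
  int *d_a, *d_b, *d_c;     // device copies of a, b, c
  int size = sizeof(int);

  // Allocate space for device copies and copy inputs over
  cudaMalloc((void **)&d_a, size);
  cudaMalloc((void **)&d_b, size);
  cudaMalloc((void **)&d_c, size);
  cudaMemcpy(d_a, &a, size, cudaMemcpyHostToDevice);
  cudaMemcpy(d_b, &b, size, cudaMemcpyHostToDevice);

  // Launch add() kernel on the GPU, then copy the result back
  add<<<1,1>>>(d_a, d_b, d_c);
  cudaMemcpy(&c, d_c, size, cudaMemcpyDeviceToHost);
  printf("%d + %d = %d\n", a, b, c);   // expected: 2 + 7 = 9

  cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
  return 0;
}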
RUNNING IN
PARALLEL
Heterogeneous Computing
Blocks
Threads
Indexing
Shared memory
__syncthreads()
Asynchronous operation
Handling errors
Managing devices
CONCEPTS
© NVIDIA 2013
Moving to Parallel
• GPU computing is about massive parallelism
– So how do we run code in parallel on the device?
add<<< 1, 1 >>>();
add<<< N, 1 >>>();
• Instead of executing add() once, execute N
times in parallel
© NVIDIA 2013
[Diagram slides: Thread Block; Grid]
Vector Addition on the Device
• With add() running in parallel we can do vector addition
• Terminology: each parallel invocation of add() is referred to
as a block
– The set of blocks is referred to as a grid
– Each invocation can refer to its block index using blockIdx.x
__global__ void add(int *a, int *b, int *c) {
c[blockIdx.x] = a[blockIdx.x] + b[blockIdx.x];
}
• By using blockIdx.x to index into the array, each block handles
a different index
© NVIDIA 2013
Vector Addition on the Device
__global__ void add(int *a, int *b, int *c) {
c[blockIdx.x] = a[blockIdx.x] + b[blockIdx.x];
}
• On the device, each block can execute in parallel:
c[0] = a[0] + b[0]; c[1] = a[1] + b[1]; c[2] = a[2] + b[2]; c[3] = a[3] + b[3];
Block 0 Block 1 Block 2 Block 3
© NVIDIA 2013
Declaring Variables
__global__       declares a kernel: called from the host, executed on the device
__device__       declares a device function: called from and executed on the device
__host__         declares a host function: called from and executed on the host
dim3 gridDim     dimensions of the grid
dim3 blockDim    dimensions of the block
uint3 blockIdx   block index within the grid
uint3 threadIdx  thread index within the block
int warpSize     number of threads in a warp
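As a quick illustration of how these qualifiers combine (a sketch, not from the original slides; square() and clamp0() are made-up helpers):

__device__ int square(int x) { return x * x; }                    // device-only helper

__host__ __device__ int clamp0(int x) { return x < 0 ? 0 : x; }   // compiled for both host and device

__global__ void squares(int *out) {                               // kernel: launched from host code
  out[threadIdx.x] = square(clamp0((int)threadIdx.x - 2));
}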
Vector Addition on the Device: add()
• Returning to our parallelized add() kernel
__global__ void add(int *a, int *b, int *c) {
c[blockIdx.x] = a[blockIdx.x] + b[blockIdx.x];
}
• Let’s take a look at main()…
© NVIDIA 2013
Vector Addition on the Device: main()
#define N 512
int main(void) {
int *a, *b, *c; // host copies of a, b, c
int *d_a, *d_b, *d_c; // device copies of a, b, c
int size = N * sizeof(int);
// Alloc space for device copies of a, b, c
cudaMalloc((void **)&d_a, size);
cudaMalloc((void **)&d_b, size);
cudaMalloc((void **)&d_c, size);
// Alloc space for host copies of a, b, c and setup input values
a = (int *)malloc(size); random_ints(a, N);
b = (int *)malloc(size); random_ints(b, N);
c = (int *)malloc(size);
© NVIDIA 2013
Vector Addition on the Device: main()
// Copy inputs to device
cudaMemcpy(d_a, a, size, cudaMemcpyHostToDevice);
cudaMemcpy(d_b, b, size, cudaMemcpyHostToDevice);
// Launch add() kernel on GPU with N blocks
add<<<N,1>>>(d_a, d_b, d_c);
// Copy result back to host
cudaMemcpy(c, d_c, size, cudaMemcpyDeviceToHost);
// Cleanup
free(a); free(b); free(c);
cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
return 0;
}
© NVIDIA 2013
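Note that random_ints() is not a CUDA or standard C function; the slides assume it exists. A plausible definition (an assumption added here for completeness) is:

#include <stdlib.h>

// Hypothetical helper assumed by the slides: fill x[0..n-1] with small random integers.
void random_ints(int *x, int n) {
  for (int i = 0; i < n; i++)
    x[i] = rand() % 100;
}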
Review (1 of 2)
• Difference between host and device
– Host CPU
– Device GPU
• Using __global__ to declare a function as device code
– Executes on the device
– Called from the host
• Passing parameters from host code to a device function
© NVIDIA 2013
Review (2 of 2)
• Basic device memory management
– cudaMalloc()
– cudaMemcpy()
– cudaFree()
• Launching parallel kernels
– Launch N copies of add() with add<<<N,1>>>(…);
– Use blockIdx.x to access block index
© NVIDIA 2013
INTRODUCING
THREADS
Heterogeneous Computing
Blocks
Threads
Indexing
Shared memory
__syncthreads()
Asynchronous operation
Handling errors
Managing devices
CONCEPTS
© NVIDIA 2013
CUDA Threads
• Terminology: a block can be split into parallel threads
• Let’s change add() to use parallel threads instead of
parallel blocks
• We use threadIdx.x instead of blockIdx.x
• Need to make one change in main()…
__global__ void add(int *a, int *b, int *c) {
c[threadIdx.x] = a[threadIdx.x] + b[threadIdx.x];
}
© NVIDIA 2013
Vector Addition Using Threads: main()
#define N 512
int main(void) {
int *a, *b, *c; // host copies of a, b, c
int *d_a, *d_b, *d_c; // device copies of a, b, c
int size = N * sizeof(int);
// Alloc space for device copies of a, b, c
cudaMalloc((void **)&d_a, size);
cudaMalloc((void **)&d_b, size);
cudaMalloc((void **)&d_c, size);
// Alloc space for host copies of a, b, c and setup input values
a = (int *)malloc(size); random_ints(a, N);
b = (int *)malloc(size); random_ints(b, N);
c = (int *)malloc(size);
© NVIDIA 2013
Vector Addition Using Threads: main()
// Copy inputs to device
cudaMemcpy(d_a, a, size, cudaMemcpyHostToDevice);
cudaMemcpy(d_b, b, size, cudaMemcpyHostToDevice);
// Launch add() kernel on GPU with N threads
add<<<1,N>>>(d_a, d_b, d_c);
// Copy result back to host
cudaMemcpy(c, d_c, size, cudaMemcpyDeviceToHost);
// Cleanup
free(a); free(b); free(c);
cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
return 0;
}
© NVIDIA 2013
COMBINING THREADS
AND BLOCKS
Heterogeneous Computing
Blocks
Threads
Indexing
Shared memory
__syncthreads()
Asynchronous operation
Handling errors
Managing devices
CONCEPTS
© NVIDIA 2013
Combining Blocks and Threads
• We’ve seen parallel vector addition using:
– Many blocks with one thread each
– One block with many threads
• Let’s adapt vector addition to use both blocks and
threads
• Why? We’ll come to that…
• First let’s discuss data indexing…
© NVIDIA 2013
Indexing Arrays with Blocks and Threads
• With M threads/block a unique index for each thread
is given by:
int index = threadIdx.x + blockIdx.x * M;
• No longer as simple as using blockIdx.x and threadIdx.x
– Consider indexing an array with one element per thread (8
threads/block)
[Diagram: a 32-element array split across 4 blocks of 8 threads each; threadIdx.x runs 0-7 within each block, with blockIdx.x = 0, 1, 2, 3]
© NVIDIA 2013
Indexing Arrays: Example
• Which thread will operate on the red
element?
int index = threadIdx.x + blockIdx.x * M;
= 5 + 2 * 8;
= 21;
[Diagram: the red element is at global index 21 of the 32-element array: threadIdx.x = 5, blockIdx.x = 2, M = 8 threads per block]
© NVIDIA 2013
Vector Addition with Blocks and
Threads
• What changes need to be made in main()?
• Use the built-in variable blockDim.x for threads per
block
int index = threadIdx.x + blockIdx.x * blockDim.x;
• Combined version of add() to use parallel
threads and parallel blocks
__global__ void add(int *a, int *b, int *c) {
int index = threadIdx.x + blockIdx.x * blockDim.x;
c[index] = a[index] + b[index];
}
© NVIDIA 2013
Addition with Blocks and Threads: main()
#define N (2048*2048)
#define THREADS_PER_BLOCK 512
int main(void) {
int *a, *b, *c; // host copies of a, b, c
int *d_a, *d_b, *d_c; // device copies of a, b, c
int size = N * sizeof(int);
// Alloc space for device copies of a, b, c
cudaMalloc((void **)&d_a, size);
cudaMalloc((void **)&d_b, size);
cudaMalloc((void **)&d_c, size);
// Alloc space for host copies of a, b, c and setup input values
a = (int *)malloc(size); random_ints(a, N);
b = (int *)malloc(size); random_ints(b, N);
c = (int *)malloc(size);
© NVIDIA 2013
Addition with Blocks and Threads: main()
// Copy inputs to device
cudaMemcpy(d_a, a, size, cudaMemcpyHostToDevice);
cudaMemcpy(d_b, b, size, cudaMemcpyHostToDevice);
// Launch add() kernel on GPU
add<<<N/THREADS_PER_BLOCK,THREADS_PER_BLOCK>>>(d_a, d_b, d_c);
// Copy result back to host
cudaMemcpy(c, d_c, size, cudaMemcpyDeviceToHost);
// Cleanup
free(a); free(b); free(c);
cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
return 0;
}
© NVIDIA 2013
Handling Arbitrary Vector Sizes
• Update the kernel launch:
add<<<(N + M-1) / M,M>>>(d_a, d_b, d_c, N);
• Typical problems are not friendly multiples of
blockDim.x
• Avoid accessing beyond the end of the arrays:
__global__ void add(int *a, int *b, int *c, int n) {
int index = threadIdx.x + blockIdx.x * blockDim.x;
if (index < n)
c[index] = a[index] + b[index];
}
© NVIDIA 2013
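Putting the pieces together, a self-contained sketch of the arbitrary-size version might look like the following (N and the input values are arbitrary choices for illustration, and error checking is omitted):

#include <stdio.h>
#include <stdlib.h>

#define N 1000000                   // deliberately not a multiple of the block size
#define THREADS_PER_BLOCK 512

__global__ void add(int *a, int *b, int *c, int n) {
  int index = threadIdx.x + blockIdx.x * blockDim.x;
  if (index < n)                    // guard against the extra threads in the last block
    c[index] = a[index] + b[index];
}

int main(void) {
  int size = N * sizeof(int);
  int *a = (int *)malloc(size), *b = (int *)malloc(size), *c = (int *)malloc(size);
  for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }

  int *d_a, *d_b, *d_c;
  cudaMalloc((void **)&d_a, size);
  cudaMalloc((void **)&d_b, size);
  cudaMalloc((void **)&d_c, size);
  cudaMemcpy(d_a, a, size, cudaMemcpyHostToDevice);
  cudaMemcpy(d_b, b, size, cudaMemcpyHostToDevice);

  int blocks = (N + THREADS_PER_BLOCK - 1) / THREADS_PER_BLOCK;   // round up
  add<<<blocks, THREADS_PER_BLOCK>>>(d_a, d_b, d_c, N);

  cudaMemcpy(c, d_c, size, cudaMemcpyDeviceToHost);
  printf("c[N-1] = %d (expected %d)\n", c[N-1], 3 * (N - 1));

  free(a); free(b); free(c);
  cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
  return 0;
}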
Why Bother with Threads?
• Threads seem unnecessary
– They add a level of complexity
– What do we gain?
• Unlike parallel blocks, threads have mechanisms
to:
– Communicate
– Synchronize
• To look closer, we need a new example…
© NVIDIA 2013
COOPERATING
THREADS
Heterogeneous Computing
Blocks
Threads
Indexing
Shared memory
__syncthreads()
Asynchronous operation
Handling errors
Managing devices
CONCEPTS
© NVIDIA 2013
Sharing Data Between Threads
• Terminology: within a block, threads share data via
shared memory
• Extremely fast on-chip memory, user-managed
• Declare using __shared__, allocated per block
• Data is not visible to threads in other blocks
© NVIDIA 2013
__syncthreads()
• void __syncthreads();
• Synchronizes all threads within a block
– Used to prevent RAW / WAR / WAW hazards
• All threads must reach the barrier
– In conditional code, the condition must be
uniform across the block
© NVIDIA 2013
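To make this concrete, here is a sketch (not from the original deck) of a block-level sum reduction that relies on __shared__ and __syncthreads(); it assumes the block size is a power of two and the input length is a multiple of the block size:

#define BLOCK 256

// Each block sums BLOCK input elements in shared memory and writes one partial sum.
__global__ void block_sum(const int *in, int *block_out) {
  __shared__ int s[BLOCK];
  int i = threadIdx.x + blockIdx.x * blockDim.x;
  s[threadIdx.x] = in[i];
  __syncthreads();                        // all loads complete before anyone reads s[]

  for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
    if (threadIdx.x < stride)
      s[threadIdx.x] += s[threadIdx.x + stride];
    __syncthreads();                      // outside the if, so every thread reaches the barrier
  }
  if (threadIdx.x == 0)
    block_out[blockIdx.x] = s[0];         // one partial sum per block
}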
Review (1 of 2)
• Launching parallel threads
– Launch N blocks with M threads per block with
kernel<<<N,M>>>(…);
– Use blockIdx.x to access block index within grid
– Use threadIdx.x to access thread index within block
• Allocate elements to threads:
int index = threadIdx.x + blockIdx.x * blockDim.x;
© NVIDIA 2013
Review (2 of 2)
• Use __shared__ to declare a variable/array in
shared memory
– Data is shared between threads in a block
– Not visible to threads in other blocks
• Use __syncthreads() as a barrier
– Use to prevent data hazards
© NVIDIA 2013
MANAGING THE
DEVICE
Heterogeneous Computing
Blocks
Threads
Indexing
Shared memory
__syncthreads()
Asynchronous operation
Handling errors
Managing devices
CONCEPTS
© NVIDIA 2013
Coordinating Host & Device
• Kernel launches are asynchronous
– Control returns to the CPU immediately
• CPU needs to synchronize before consuming the
results
cudaMemcpy()             Blocks the CPU until the copy is complete; the copy begins when all preceding CUDA calls have completed
cudaMemcpyAsync()        Asynchronous; does not block the CPU
cudaDeviceSynchronize()  Blocks the CPU until all preceding CUDA calls have completed
© NVIDIA 2013
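A small sketch of both synchronization options (set_to() is a made-up kernel for illustration):

#include <stdio.h>

__global__ void set_to(int *x, int v) { *x = v; }

int main(void) {
  int h = 0, *d;
  cudaMalloc((void **)&d, sizeof(int));

  set_to<<<1,1>>>(d, 42);          // asynchronous: control returns to the CPU immediately
  cudaDeviceSynchronize();         // explicit barrier: wait for the kernel to finish

  // cudaMemcpy() would also have waited for all preceding calls before copying
  cudaMemcpy(&h, d, sizeof(int), cudaMemcpyDeviceToHost);
  printf("%d\n", h);               // prints 42

  cudaFree(d);
  return 0;
}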
Reporting Errors
• All CUDA API calls return an error code (cudaError_t)
– Error in the API call itself
OR
– Error in an earlier asynchronous operation (e.g. kernel)
• Get the error code for the last error:
cudaError_t cudaGetLastError(void)
• Get a string to describe the error:
const char *cudaGetErrorString(cudaError_t)
printf("%s\n", cudaGetErrorString(cudaGetLastError()));
© NVIDIA 2013
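A common (unofficial) way to wrap these calls is an error-checking macro. This is a sketch built only from the functions above, not an API provided by CUDA:

#include <stdio.h>
#include <stdlib.h>

#define CUDA_CHECK(call)                                          \
  do {                                                            \
    cudaError_t err = (call);                                     \
    if (err != cudaSuccess) {                                     \
      fprintf(stderr, "CUDA error at %s:%d: %s\n",                \
              __FILE__, __LINE__, cudaGetErrorString(err));       \
      exit(EXIT_FAILURE);                                         \
    }                                                             \
  } while (0)

// Usage:
//   CUDA_CHECK(cudaMalloc((void **)&d_a, size));
//   mykernel<<<1,1>>>();
//   CUDA_CHECK(cudaGetLastError());        // catches launch errors
//   CUDA_CHECK(cudaDeviceSynchronize());   // catches errors raised during execution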
Device Management
• Application can query and select GPUs
cudaGetDeviceCount(int *count)
cudaSetDevice(int device)
cudaGetDevice(int *device)
cudaGetDeviceProperties(cudaDeviceProp *prop, int device)
• Multiple threads can share a device
• A single thread can manage multiple devices
cudaSetDevice(i) to select current device
cudaMemcpy(…) for peer-to-peer copies✝
✝ requires OS and device support
© NVIDIA 2013
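For example, a small query loop over all visible devices might look like this (a sketch; it assumes at least one CUDA device is present before calling cudaSetDevice(0)):

#include <stdio.h>

int main(void) {
  int count = 0;
  cudaGetDeviceCount(&count);
  for (int i = 0; i < count; i++) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, i);
    printf("Device %d: %s, compute capability %d.%d, %zu MB global memory\n",
           i, prop.name, prop.major, prop.minor, prop.totalGlobalMem >> 20);
  }
  cudaSetDevice(0);   // make device 0 current for subsequent CUDA calls
  return 0;
}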
Introduction to CUDA C/C++
• What have we learned?
– Write and launch CUDA C/C++ kernels
• __global__, blockIdx.x, threadIdx.x, <<<>>>
– Manage GPU memory
• cudaMalloc(), cudaMemcpy(), cudaFree()
– Manage communication and synchronization
• __shared__, __syncthreads()
• cudaMemcpy() vs cudaMemcpyAsync(),
cudaDeviceSynchronize()
© NVIDIA 2013
Compute Capability
• The compute capability of a device describes its architecture, e.g.
– Number of registers
– Sizes of memories
– Features & capabilities
• The following presentations concentrate on Fermi devices
– Compute Capability >= 2.0
Compute Capability | Selected Features (see CUDA C Programming Guide for complete list)                                                 | Tesla models
1.0                | Fundamental CUDA support                                                                                           | 870
1.3                | Double precision, improved memory accesses, atomics                                                                | 10-series
2.0                | Caches, fused multiply-add, 3D grids, surfaces, ECC, P2P, concurrent kernels/copies, function pointers, recursion  | 20-series
© NVIDIA 2013
IDs and Dimensions
– A kernel is launched as a
grid of blocks of threads
• blockIdx and
threadIdx are 3D
• We showed only one
dimension (x)
• Built-in variables:
– threadIdx
– blockIdx
– blockDim
– gridDim
[Diagram: Device → Grid 1 containing Blocks (0,0,0) … (2,1,0); Block (1,1,0) expanded into Threads (0,0,0) … (4,2,0)]
© NVIDIA 2013
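For later reference, a sketch of a 2D launch using dim3 (scale2d() and its arguments are made up for illustration); flattening (x, y) into a row-major offset follows the same pattern as the 1D case:

__global__ void scale2d(float *data, int width, int height, float s) {
  int x = threadIdx.x + blockIdx.x * blockDim.x;
  int y = threadIdx.y + blockIdx.y * blockDim.y;
  if (x < width && y < height)
    data[x + width * y] *= s;     // offset = x + width * y
}

// Host side (fragment): round each dimension up to cover the whole array
// dim3 threads(16, 16);
// dim3 blocks((width + 15) / 16, (height + 15) / 16);
// scale2d<<<blocks, threads>>>(d_data, width, height, 2.0f);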
Textures
• Read-only object
– Dedicated cache
• Dedicated filtering hardware
(Linear, bilinear, trilinear)
• Addressable as 1D, 2D or 3D
• Out-of-bounds address handling
(Wrap, clamp)
[Diagram: a 2D texel grid with example fetch coordinates (2.5, 0.5) and (1.0, 1.0)]
© NVIDIA 2013
Topics we skipped
• We skipped some details, you can learn more:
– CUDA Programming Guide
– CUDA Zone – tools, training, webinars and more
developer.nvidia.com/cuda
• Need a quick primer for later:
– Multi-dimensional indexing
– Textures
© NVIDIA 2013
Find Largest element of an array
© NVIDIA 2013
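The slide poses this as an exercise without code; one possible solution sketch (an assumption, not NVIDIA's reference answer) combines a shared-memory block reduction with atomicMax():

#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

#define N 100000
#define BLOCK 256

// Each block finds its local maximum in shared memory, then thread 0
// folds it into the global result with atomicMax().
__global__ void max_kernel(const int *in, int *result, int n) {
  __shared__ int s[BLOCK];
  int i = threadIdx.x + blockIdx.x * blockDim.x;
  s[threadIdx.x] = (i < n) ? in[i] : INT_MIN;
  __syncthreads();

  for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
    if (threadIdx.x < stride)
      s[threadIdx.x] = max(s[threadIdx.x], s[threadIdx.x + stride]);
    __syncthreads();
  }
  if (threadIdx.x == 0)
    atomicMax(result, s[0]);
}

int main(void) {
  int size = N * sizeof(int);
  int *a = (int *)malloc(size);
  for (int i = 0; i < N; i++) a[i] = rand();

  int *d_a, *d_max, h_max = INT_MIN;
  cudaMalloc((void **)&d_a, size);
  cudaMalloc((void **)&d_max, sizeof(int));
  cudaMemcpy(d_a, a, size, cudaMemcpyHostToDevice);
  cudaMemcpy(d_max, &h_max, sizeof(int), cudaMemcpyHostToDevice);

  max_kernel<<<(N + BLOCK - 1) / BLOCK, BLOCK>>>(d_a, d_max, N);
  cudaMemcpy(&h_max, d_max, sizeof(int), cudaMemcpyDeviceToHost);
  printf("largest = %d\n", h_max);

  free(a); cudaFree(d_a); cudaFree(d_max);
  return 0;
}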
Editor's Notes
• Indexing Arrays: Example — identical to finding the offset in 1-dimensional storage of a 2-dimensional matrix: int index = x + width * y;