Course on CUDA Programming on NVIDIA GPUs, March 18-22, 2024, at UT Austin
This is a 1-week hands-on course for students, postdocs, academics and
others who want to learn how to develop applications to run on NVIDIA
GPUs using the CUDA programming environment. The only prerequisite is
some proficiency with C and basic C++ programming; no prior experience
with parallel computing is assumed.
The course consists of approximately 3 hours of lectures and 4 hours
of practicals each day. The aim is that by the end of the course you
will be able to write relatively simple programs and will be confident
and able to continue learning through studying the examples provided
by NVIDIA on GitHub.
All attendees should bring a laptop to access the GPU servers at TACC.
Venue
The lectures and practicals will all take place in POB Seminar Room 4.304
in the Oden Institute. Attendees should bring fully-charged laptops
for carrying out the practicals.
Timetable
For the first three days we will follow this timetable:
- 09:15 - 10:45 lecture
- 10:45 - 11:15 break
- 11:15 - 12:45 practical
- 12:45 - 14:00 lunch break
- 14:00 - 15:30 lecture
- 15:30 - 16:00 break
- 16:00 - 17:30 practical
On the last two days we will switch to having both lectures in the morning,
and then have practicals all afternoon. This provides more time for longer
practicals.
Preliminary Reading
Please read chapters 1 and 2 of the NVIDIA CUDA C Programming Guide,
which is available both as a PDF and as online HTML.
CUDA is an extension of C/C++, so if you are a little rusty with C/C++
you should refresh your memory of it.
Additional References
Lectures
Practicals
We will be working under Linux on GPU nodes which are part of TACC's
Frontera system.
Before starting the practicals, please read these notes on using the
Frontera system, and have a look at the online Frontera User Guide.
Datasheet for the Quadro RTX 5000 GPU which we will be using in our
practicals.
The practicals all use these header files (helper_cuda.h,
helper_string.h), which came originally from the CUDA SDK. They provide
routines for error checking and initialisation.
Tar files for all practicals
Practical 1
Application: a trivial "hello world" example
CUDA aspects: launching a kernel, copying data to/from the graphics card,
error checking and printing from kernel code
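As a taste of what this first practical covers, here is a minimal sketch (not the course's own code; the kernel name and launch configuration are illustrative) of a kernel launch with device-side printf and basic error checking:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread prints its own thread and block index.
__global__ void hello_kernel()
{
    printf("Hello from thread %d of block %d\n", threadIdx.x, blockIdx.x);
}

int main()
{
    hello_kernel<<<2, 4>>>();          // launch 2 blocks of 4 threads each

    // check for launch errors, then wait for the kernel to finish
    cudaError_t err = cudaGetLastError();
    if (err != cudaSuccess)
        printf("CUDA error: %s\n", cudaGetErrorString(err));
    cudaDeviceSynchronize();
    return 0;
}
```

The practical's helper_cuda.h wraps this kind of error checking in convenience macros so it does not have to be written out by hand every time.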
Note: the instructions explain how the files can be copied from my user
account, so there is no need to download them from here.
Practical 2
Application: Monte Carlo simulation using NVIDIA's CURAND library
for random number generation
CUDA aspects: constant memory, random number generation, kernel timing,
minimising device memory bandwidth requirements
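Two of the CUDA aspects above, constant memory and kernel timing with CUDA events, can be sketched as follows (a hypothetical example, not the practical's Monte Carlo code; the kernel and variable names are made up):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__constant__ float a, b;               // constants broadcast to all threads

__global__ void axpb(float *x, int n)  // hypothetical kernel: x = a*x + b
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    if (i < n) x[i] = a * x[i] + b;
}

int main()
{
    int n = 1 << 20;
    float *d_x;
    cudaMalloc(&d_x, n * sizeof(float));

    // copy host values into the device's constant memory
    float h_a = 2.0f, h_b = 1.0f;
    cudaMemcpyToSymbol(a, &h_a, sizeof(float));
    cudaMemcpyToSymbol(b, &h_b, sizeof(float));

    // time the kernel with CUDA events
    cudaEvent_t start, stop;
    cudaEventCreate(&start);  cudaEventCreate(&stop);
    cudaEventRecord(start);
    axpb<<<(n + 255) / 256, 256>>>(d_x, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms;
    cudaEventElapsedTime(&ms, start, stop);
    printf("kernel took %.3f ms\n", ms);

    cudaFree(d_x);
    return 0;
}
```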
Practical 3
Application: 3D Laplace finite difference solver
CUDA aspects: thread block size optimisation, multi-dimensional memory layout,
performance profiling
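The multi-dimensional memory layout this practical explores can be illustrated by one Jacobi-style update step for the 3D Laplace equation (a sketch only, assuming a row-major layout and interior-point update; not the practical's actual kernel):

```cuda
// one Jacobi iteration of a 3D Laplace solver; interior points only
__global__ void laplace3d(const float *u1, float *u2,
                          int nx, int ny, int nz)
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    int j = threadIdx.y + blockIdx.y * blockDim.y;
    int k = threadIdx.z + blockIdx.z * blockDim.z;

    if (i > 0 && i < nx-1 && j > 0 && j < ny-1 && k > 0 && k < nz-1) {
        long idx = i + nx * (j + (long)ny * k);   // row-major 3D indexing
        u2[idx] = ( u1[idx-1]            + u1[idx+1]
                  + u1[idx-nx]           + u1[idx+nx]
                  + u1[idx-(long)nx*ny]  + u1[idx+(long)nx*ny] ) / 6.0f;
    }
}
```

Choosing the thread block dimensions (the three components of blockDim) so that neighbouring threads read contiguous memory is exactly the kind of optimisation this practical investigates.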
Practical 4
Application: reduction
CUDA aspects: dynamic shared memory, thread synchronisation
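A classic block-level sum reduction illustrates both aspects named above; this is a generic textbook sketch (assuming the block size is a power of two), not necessarily the version developed in the practical:

```cuda
// sum reduction within one thread block using dynamically sized shared memory
__global__ void block_sum(const float *in, float *out, int n)
{
    extern __shared__ float tmp[];     // size supplied at launch time

    int tid = threadIdx.x;
    int i   = threadIdx.x + blockIdx.x * blockDim.x;
    tmp[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();

    // tree reduction: halve the number of active threads each step
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s) tmp[tid] += tmp[tid + s];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = tmp[0];
}

// launch: the third <<<...>>> argument sets the shared-memory size in bytes
// block_sum<<<blocks, threads, threads * sizeof(float)>>>(d_in, d_out, n);
```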
Practical 5
Application: using Tensor Cores, cuBLAS and other libraries
Practical 6
Application: revisiting the simple "hello world" example
CUDA aspects: using g++ for the main code, building libraries,
using templates
Practical 7
Application: tri-diagonal equations
Practical 8
Application: scan operation and recurrence equations
Practical 9
Application: pattern matching
Practical 10
Application: auto-tuning
Practical 11
Application: streams and OpenMP multithreading
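The basic pattern of overlapping transfers and computation with streams, which this practical and the next build on, can be sketched as follows (the kernel and buffer names are hypothetical; note that host buffers must be pinned with cudaMallocHost for asynchronous copies to overlap):

```cuda
#include <cuda_runtime.h>

__global__ void process(float *buf, int n)   // hypothetical work kernel
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    if (i < n) buf[i] *= 2.0f;
}

int main()
{
    const int n = 1 << 20, bytes = n * sizeof(float);
    float *h_buf[2], *d_buf[2];
    cudaStream_t stream[2];

    for (int s = 0; s < 2; s++) {
        cudaMallocHost(&h_buf[s], bytes);    // pinned host memory
        cudaMalloc(&d_buf[s], bytes);
        cudaStreamCreate(&stream[s]);
    }

    // work issued to different streams may overlap on the device
    for (int s = 0; s < 2; s++) {
        cudaMemcpyAsync(d_buf[s], h_buf[s], bytes,
                        cudaMemcpyHostToDevice, stream[s]);
        process<<<(n + 255) / 256, 256, 0, stream[s]>>>(d_buf[s], n);
        cudaMemcpyAsync(h_buf[s], d_buf[s], bytes,
                        cudaMemcpyDeviceToHost, stream[s]);
    }
    cudaDeviceSynchronize();
    return 0;
}
```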
Practical 12
Application: more on streams and overlapping computation and communication
Acknowledgements
Many thanks to:
- TACC for the GPU resources
- Dan Stanzione for his presentation