NVIDIA GPU Programming - Extended Training Course
This instructor-led, live training course covers how to program GPUs for parallel computing, how to use various platforms, how to work with the CUDA platform and its features, and how to apply various optimization techniques using CUDA. Applications include deep learning, analytics, image processing, and engineering.
This course is available as onsite live training in Costa Rica or online live training.
Course Outline
Introduction
Understanding the Fundamentals of Heterogeneous Computing Methodology
Why Parallel Computing? Understanding the Need for Parallel Computing
Multi-Core Processors - Architecture and Design
Introduction to Threads, Thread Basics and Basic Concepts of Parallel Programming
Understanding the Fundamentals of GPU Software Optimization Processes
OpenMP - A Standard for Directive-Based Parallel Programming
Hands-on / Demonstration of Various Programs on Multicore Machines
Introduction to GPU Computing
GPUs for Parallel Computing
GPU Programming Model
Hands-on / Demonstration of Various Programs on GPU
SDK, Toolkit, and Environment Installation for GPU
Working with Various Libraries
Demonstration of GPU and Tools with Sample Programs and OpenACC
Understanding the CUDA Programming Model
Learning the CUDA Architecture
Exploring and Setting Up the CUDA Development Environments
Working with the CUDA Runtime API
Understanding the CUDA Memory Model
Exploring Additional CUDA API Features
Accessing Global Memory Efficiently in CUDA: Global Memory Optimization
Optimizing Data Transfers in CUDA Using CUDA Streams
Using Shared Memory in CUDA
Understanding and Using Atomic Operations and Instructions in CUDA
Case Study: Basic Digital Image Processing with CUDA
Working with Multi-GPU Programming
Advanced Hardware Profiling and Sampling on NVIDIA / CUDA
Using CUDA Dynamic Parallelism API for Dynamic Kernel Launch
Summary and Conclusion
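Two of the outline topics above, shared memory and atomic operations, can be sketched together in a single kernel. The following is a minimal illustration, not material from the course itself; the names sumKernel, cache, and BLOCK are illustrative choices. Each block reduces its slice of the input in fast on-chip shared memory, then a single atomicAdd per block folds the partial sums into the final result.

```cuda
// Sketch: block-level reduction in shared memory plus one atomicAdd per block.
// Illustrative only -- names and sizes are assumptions, not course material.
#include <cuda_runtime.h>

#define BLOCK 256

__global__ void sumKernel(const float *in, float *out, int n) {
    __shared__ float cache[BLOCK];          // per-block on-chip scratch space
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    cache[threadIdx.x] = (tid < n) ? in[tid] : 0.0f;
    __syncthreads();                        // make all loads visible to the block

    // Tree reduction within the block, halving the active threads each step.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            cache[threadIdx.x] += cache[threadIdx.x + stride];
        __syncthreads();
    }

    // Thread 0 combines this block's partial sum into the global result.
    if (threadIdx.x == 0)
        atomicAdd(out, cache[0]);
}
```

Launched as `sumKernel<<<(n + BLOCK - 1) / BLOCK, BLOCK>>>(d_in, d_out, n)` with `*d_out` zeroed beforehand, this pattern trades one atomic per block for contention-free shared-memory arithmetic within each block.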
Requirements
- C Programming
- Linux GCC
Open Training Courses require 5+ participants.
Testimonials (1)
Trainer's energy and humor.
Tadeusz Kaluba - Nokia Solutions and Networks Sp. z o.o.
Course - NVIDIA GPU Programming - Extended
Related Courses
Developing AI Applications with Huawei Ascend and CANN
21 Hours
Huawei Ascend is a family of AI processors designed for high-performance inference and training.
This instructor-led, live training (online or onsite) is aimed at intermediate-level AI engineers and data scientists who wish to develop and optimize neural network models using Huawei’s Ascend platform and the CANN toolkit.
By the end of this training, participants will be able to:
- Set up and configure the CANN development environment.
- Develop AI applications using MindSpore and CloudMatrix workflows.
- Optimize performance on Ascend NPUs using custom operators and tiling.
- Deploy models to edge or cloud environments.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of Huawei Ascend and CANN toolkit in sample applications.
- Guided exercises focused on model building, training, and deployment.
Course Customization Options
- To request a customized training for this course based on your infrastructure or datasets, please contact us to arrange.
AI Inference and Deployment with CloudMatrix
21 Hours
CloudMatrix is Huawei’s unified AI development and deployment platform designed to support scalable, production-grade inference pipelines.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level AI professionals who wish to deploy and monitor AI models using the CloudMatrix platform with CANN and MindSpore integration.
By the end of this training, participants will be able to:
- Use CloudMatrix for model packaging, deployment, and serving.
- Convert and optimize models for Ascend chipsets.
- Set up pipelines for real-time and batch inference tasks.
- Monitor deployments and tune performance in production settings.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of CloudMatrix with real deployment scenarios.
- Guided exercises focused on conversion, optimization, and scaling.
Course Customization Options
- To request a customized training for this course based on your AI infrastructure or cloud environment, please contact us to arrange.
GPU Programming on Biren AI Accelerators
21 Hours
Biren AI Accelerators are high-performance GPUs designed for AI and HPC workloads with support for large-scale training and inference.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level developers who wish to program and optimize applications using Biren’s proprietary GPU stack, with practical comparisons to CUDA-based environments.
By the end of this training, participants will be able to:
- Understand Biren GPU architecture and memory hierarchy.
- Set up the development environment and use Biren’s programming model.
- Translate and optimize CUDA-style code for Biren platforms.
- Apply performance tuning and debugging techniques.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of Biren SDK in sample GPU workloads.
- Guided exercises focused on porting and performance tuning.
Course Customization Options
- To request a customized training for this course based on your application stack or integration needs, please contact us to arrange.
Cambricon MLU Development with BANGPy and Neuware
21 Hours
Cambricon MLUs (Machine Learning Units) are specialized AI chips optimized for inference and training in edge and datacenter scenarios.
This instructor-led, live training (online or onsite) is aimed at intermediate-level developers who wish to build and deploy AI models using the BANGPy framework and Neuware SDK on Cambricon MLU hardware.
By the end of this training, participants will be able to:
- Set up and configure the BANGPy and Neuware development environments.
- Develop and optimize Python- and C++-based models for Cambricon MLUs.
- Deploy models to edge and data center devices running Neuware runtime.
- Integrate ML workflows with MLU-specific acceleration features.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of BANGPy and Neuware for development and deployment.
- Guided exercises focused on optimization, integration, and testing.
Course Customization Options
- To request a customized training for this course based on your Cambricon device model or use case, please contact us to arrange.
Administration of CUDA
35 Hours
This instructor-led, live training in Costa Rica (online or onsite) is aimed at beginner-level system administrators and IT professionals who wish to install, configure, manage, and troubleshoot CUDA environments.
By the end of this training, participants will be able to:
- Understand the architecture, components, and capabilities of CUDA.
- Install and configure CUDA environments.
- Manage and optimize CUDA resources.
- Debug and troubleshoot common CUDA issues.
GPU Programming with CUDA and Python
14 Hours
This instructor-led, live training in Costa Rica (online or onsite) is aimed at intermediate-level developers who wish to use CUDA to build Python applications that run in parallel on NVIDIA GPUs.
By the end of this training, participants will be able to:
- Use the Numba compiler to accelerate Python applications running on NVIDIA GPUs.
- Create, compile and launch custom CUDA kernels.
- Manage GPU memory.
- Convert a CPU-based application into a GPU-accelerated application.
Migrating CUDA Applications to Chinese GPU Architectures
21 Hours
Chinese GPU architectures such as Huawei Ascend, Biren, and Cambricon MLUs offer CUDA alternatives tailored for local AI and HPC markets.
This instructor-led, live training (online or onsite) is aimed at advanced-level GPU programmers and infrastructure specialists who wish to migrate and optimize existing CUDA applications for deployment on Chinese hardware platforms.
By the end of this training, participants will be able to:
- Evaluate compatibility of existing CUDA workloads with Chinese chip alternatives.
- Port CUDA codebases to Huawei CANN, Biren SDK, and Cambricon BANGPy environments.
- Compare performance and identify optimization points across platforms.
- Address practical challenges in cross-architecture support and deployment.
Format of the Course
- Interactive lecture and discussion.
- Hands-on code translation and performance comparison labs.
- Guided exercises focused on multi-GPU adaptation strategies.
Course Customization Options
- To request a customized training for this course based on your platform or CUDA project, please contact us to arrange.
GPU Programming with CUDA
28 Hours
This instructor-led, live training in Costa Rica (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to use CUDA to program NVIDIA GPUs and exploit their parallelism.
By the end of this training, participants will be able to:
- Set up a development environment that includes the CUDA Toolkit, an NVIDIA GPU, and Visual Studio Code.
- Create a basic CUDA program that performs vector addition on the GPU and retrieves the results from the GPU memory.
- Use CUDA API to query device information, allocate and deallocate device memory, copy data between host and device, launch kernels, and synchronize threads.
- Use CUDA C/C++ language to write kernels that execute on the GPU and manipulate data.
- Use CUDA built-in functions, variables, and libraries to perform common tasks and operations.
- Use CUDA memory spaces, such as global, shared, constant, and local, to optimize data transfers and memory accesses.
- Use CUDA execution model to control the threads, blocks, and grids that define the parallelism.
- Debug and test CUDA programs using tools such as CUDA-GDB, CUDA-MEMCHECK, and NVIDIA Nsight.
- Optimize CUDA programs using techniques such as coalescing, caching, prefetching, and profiling.
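The vector-addition exercise described in the objectives above follows a standard shape: allocate device memory, copy inputs from the host, launch the kernel over a grid of thread blocks, synchronize, and copy the result back. The following is a minimal sketch of that pattern, not the course's own code; array sizes and the block size of 256 are illustrative choices.

```cuda
// Minimal sketch of the vector-addition exercise: allocate, copy, launch, copy back.
// Illustrative only -- n and the block size are assumptions, not course material.
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard against over-launch
        c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes),
          *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    float *da, *db, *dc;                            // device buffers
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    int block = 256, grid = (n + block - 1) / block;
    vecAdd<<<grid, block>>>(da, db, dc, n);
    cudaDeviceSynchronize();                        // wait for the kernel

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[1] = %f\n", hc[1]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The grid size `(n + block - 1) / block` rounds up so every element is covered, which is why the kernel needs the `i < n` guard.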
97% of clients satisfied.
GPU Programming with OpenCL
28 Hours
This instructor-led, live training in Costa Rica (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to use OpenCL to program heterogeneous devices and exploit their parallelism.
By the end of this training, participants will be able to:
- Set up a development environment that includes OpenCL SDK, a device that supports OpenCL, and Visual Studio Code.
- Create a basic OpenCL program that performs vector addition on the device and retrieves the results from the device memory.
- Use OpenCL API to query device information, create contexts, command queues, buffers, kernels, and events.
- Use OpenCL C language to write kernels that execute on the device and manipulate data.
- Use OpenCL built-in functions, extensions, and libraries to perform common tasks and operations.
- Use OpenCL host and device memory models to optimize data transfers and memory accesses.
- Use OpenCL execution model to control the work-items, work-groups, and ND-ranges.
- Debug and test OpenCL programs using tools such as CodeXL, Intel VTune, and NVIDIA Nsight.
- Optimize OpenCL programs using techniques such as vectorization, loop unrolling, local memory, and profiling.
GPU Programming - OpenCL vs CUDA vs ROCm
28 Hours
This instructor-led, live training in Costa Rica (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to use different frameworks for GPU programming and compare their features, performance, and compatibility.
By the end of this training, participants will be able to:
- Set up a development environment that includes OpenCL SDK, CUDA Toolkit, ROCm Platform, a device that supports OpenCL, CUDA, or ROCm, and Visual Studio Code.
- Create a basic GPU program that performs vector addition using OpenCL, CUDA, and ROCm, and compare the syntax, structure, and execution of each framework.
- Use the respective APIs to query device information, allocate and deallocate device memory, copy data between host and device, launch kernels, and synchronize threads.
- Use the respective languages to write kernels that execute on the device and manipulate data.
- Use the respective built-in functions, variables, and libraries to perform common tasks and operations.
- Use the respective memory spaces, such as global, local, constant, and private, to optimize data transfers and memory accesses.
- Use the respective execution models to control the threads, blocks, and grids that define the parallelism.
- Debug and test GPU programs using tools such as CodeXL, CUDA-GDB, CUDA-MEMCHECK, and NVIDIA Nsight.
- Optimize GPU programs using techniques such as coalescing, caching, prefetching, and profiling.
Performance Optimization on Ascend, Biren, and Cambricon
21 Hours
Ascend, Biren, and Cambricon are leading AI hardware platforms in China, each offering unique acceleration and profiling tools for production-scale AI workloads.
This instructor-led, live training (online or onsite) is aimed at advanced-level AI infrastructure and performance engineers who wish to optimize model inference and training workflows across multiple Chinese AI chip platforms.
By the end of this training, participants will be able to:
- Benchmark models on Ascend, Biren, and Cambricon platforms.
- Identify system bottlenecks and memory/compute inefficiencies.
- Apply graph-level, kernel-level, and operator-level optimizations.
- Tune deployment pipelines to improve throughput and latency.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of profiling and optimization tools on each platform.
- Guided exercises focused on practical tuning scenarios.
Course Customization Options
- To request a customized training for this course based on your performance environment or model type, please contact us to arrange.