NVIDIA's parallel computing platform and programming model for general-purpose computing on GPUs.
CUDA (Compute Unified Device Architecture) allows developers to write programs that execute on NVIDIA GPUs, typically by extending C/C++ with kernels that run in parallel across many threads. It provides both low-level access to GPU hardware and high-level libraries (such as cuBLAS and cuDNN) for common operations. CUDA is essential for AI/ML training, scientific computing, and other massively parallel workloads.
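To make the programming model concrete, here is a minimal sketch of the typical CUDA workflow: define a __global__ kernel, copy data to the device, launch the kernel over a grid of thread blocks, and copy the result back. It assumes the CUDA toolkit and nvcc are installed; the vecAdd name, the 1M-element size, and the 256-thread block size are illustrative choices, not anything mandated by CUDA.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each GPU thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;                      // 1M elements (illustrative size)
    const size_t bytes = n * sizeof(float);

    // Allocate and initialize host (CPU) memory.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Allocate device (GPU) memory and copy the inputs over.
    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

    // Copy the result back and spot-check one value.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);              // expected: 3.000000

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

Compiled with something like `nvcc vec_add.cu -o vec_add` and run on a machine with an NVIDIA GPU, this should print c[0] = 3.000000.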
Explore More Terms
Quantum Entanglement
A quantum mechanical phenomenon where particles become correlated so that the quantum state of each particle cannot be described independently of the others.
GitOps
An operational framework that uses Git repositories as the single source of truth for infrastructure and application configurations.
RDMA
Remote Direct Memory Access - a technology that allows one computer to read or write another computer's memory directly, without involving either host's CPU or operating system.
Kubernetes
An open-source container orchestration platform for automating deployment, scaling, and management of containerized applications.
Tensor Core
Specialized processing units in NVIDIA GPUs designed to accelerate the matrix multiply-accumulate operations common in deep learning.
Kernel
The core component of an operating system that manages system resources and provides services to applications.