# cuDNN: Efficient Primitives for Deep Learning

## About
We present a library of efficient implementations of deep learning primitives. Deep learning workloads are computationally intensive, and optimizing their kernels is difficult and time-consuming. As parallel architectures evolve, kernels must be reoptimized, which makes maintaining codebases difficult over time. Similar issues have long been addressed in the HPC community by libraries such as the Basic Linear Algebra Subroutines (BLAS). However, there is no analogous library for deep learning. Without such a library, researchers implementing deep learning workloads on parallel processors must create and optimize their own implementations of the main computational kernels, and this work must be repeated as new parallel processors emerge. To address this problem, we have created a library similar in intent to BLAS, with optimized routines for deep learning workloads. Our implementation contains routines for GPUs, although similarly to the BLAS library, these routines could be implemented for other platforms. The library is easy to integrate into existing frameworks, and provides optimized performance and memory usage. For example, integrating cuDNN into Caffe, a popular framework for convolutional networks, improves performance by 36% on a standard model while also reducing memory consumption.
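The main computational kernel the library optimizes is convolution, which the cuDNN paper computes as a matrix multiplication so it can reuse highly tuned GEMM routines. As a rough illustration of the idea (not cuDNN's actual implementation, which avoids materializing the full patch matrix in GPU memory), here is a minimal single-channel NumPy sketch; the names `im2col` and `conv2d_gemm` are illustrative, not part of the cuDNN API:

```python
import numpy as np

def im2col(x, kh, kw):
    """Unroll each sliding kh x kw patch of a single-channel 2-D input
    into one row of a matrix (stride 1, no padding)."""
    h, w = x.shape
    oh, ow = h - kh + 1, w - kw + 1
    cols = np.empty((oh * ow, kh * kw))
    for i in range(oh):
        for j in range(ow):
            cols[i * ow + j] = x[i:i + kh, j:j + kw].ravel()
    return cols

def conv2d_gemm(x, k):
    """2-D 'valid' cross-correlation expressed as im2col followed by a
    matrix-vector product, i.e. convolution lowered to GEMM."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    return (im2col(x, kh, kw) @ k.ravel()).reshape(oh, ow)

x = np.arange(16, dtype=float).reshape(4, 4)
k = np.ones((2, 2))
print(conv2d_gemm(x, k))  # each output entry is the sum of a 2x2 window
```

The payoff of this lowering is that all the per-architecture tuning effort goes into a single GEMM kernel, which is exactly the BLAS-style reuse the abstract argues for.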
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Attention Operator Throughput | Llama2 7B (32 Q-heads / 32 KV-heads / 128 head dim) | Attention Throughput (TFLOPS) | 201.2 | 30 |
| Attention Operator Throughput | Qwen2.5 72B (64 Q-heads / 8 KV-heads / 128 head dim) | Attention Throughput (TFLOPS) | 207.9 | 29 |
| Attention Operator Throughput | Llama 3.1 405B (128 Q-heads / 8 KV-heads / 128 head dim) | Attention Throughput (TFLOPS) | 211.2 | 28 |
| Plant Segmentation | Plant Data (test) | Test Accuracy | 78.69 | 8 |
| Semantic Segmentation | Semantic Drone (test) | Test Accuracy | 93.98 | 8 |
| Semantic Segmentation | Plant Data (test) | Accuracy | 75.64 | 8 |
| Semantic Segmentation | Semantic Drone Dataset (test) | Training Time (s) | 7.71e+3 | 8 |
| Masked Multi-Head Attention | T4 GPU Synthetic Performance Benchmark | Performance (TFLOPS) | 8.11 | 5 |
| Multi-Head Attention (MHA) | NVIDIA A100 GPU | TFLOPS | 95.3 | 5 |
| Attention Operator Performance | MLA (sequence length 512 / 128 head dim) | TFLOPS | 35.5 | 4 |