trunk/356d8001d8cc4bb1e78c1e14d7eea2039d31c1c5: [BE]: Update cudnn_frontend submodule to 1.23.0 (#182077)
Bumping a submodule version fixes a subtle hang in unit tests under recent cuDNN releases.
PyTorch has merged PR #182077, updating its cudnn_frontend submodule from version 1.22.0 to 1.23.0. The bump addresses a subtle bug in frontend 1.22.0, already integrated into PyTorch's main branch, that caused hangs during unit tests, notably in test_transformers.py, when running against cuDNN 9.21.1 or 9.21.0. Advanced users building PyTorch from source with the latest cuDNN libraries would hit these hangs, disrupting automated testing and development workflows. The fix ensures that the frontend layer, which abstracts NVIDIA's cuDNN deep neural network primitives, works reliably with the newest GPU acceleration libraries.
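Users wondering whether their build is affected can call `torch.backends.cudnn.version()`, which reports the linked cuDNN version as a single integer. A minimal sketch of decoding that integer, assuming the cuDNN 9.x encoding MAJOR*10000 + MINOR*100 + PATCH (the helper name is illustrative, not part of PyTorch):

```python
def decode_cudnn_version(v: int) -> str:
    """Decode a cuDNN version integer into 'major.minor.patch'.

    Assumes the cuDNN 9.x scheme: MAJOR*10000 + MINOR*100 + PATCH.
    Older cuDNN releases used a different encoding.
    """
    major, rest = divmod(v, 10000)
    minor, patch = divmod(rest, 100)
    return f"{major}.{minor}.{patch}"

# A build linked against cuDNN 9.21.1 would report 92101:
print(decode_cudnn_version(92101))  # → 9.21.1
```

In a real session you would pass `torch.backends.cudnn.version()` instead of the hard-coded integer; versions 9.21.0 and 9.21.1 are the ones the digest says trigger the hang with frontend 1.22.0.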
This update is a maintenance commit labeled [BE] (better engineering, PyTorch's tag for cleanup and infrastructure work), but it matters for developers and researchers who compile PyTorch from source to get the best performance on NVIDIA GPUs. The PR was authored using Cursor and approved by PyTorch core contributors including @eqy and @Skylion007. By restoring compatibility with cuDNN 9.21.1, PyTorch avoids regressions in transformer model training and inference, common workloads in AI. The change is small in scope (a submodule version bump) but important for stable operation on the latest cuDNN releases.
- PyTorch updated cudnn_frontend to 1.23.0 to fix a bug that hung unit tests with cuDNN 9.21.1.
- The bug was present in cudnn_frontend 1.22.0 and affected advanced users building from source.
- The fix ensures stable execution of transformer unit tests and other cuDNN-dependent workloads.
Why It Matters
Ensures reliable GPU-accelerated training and testing for PyTorch users who build from source against the latest cuDNN releases.