Developer Tools

trunk/84d66a8f0ee61ad69f2c080c5735f18c9065e3cd: [test] Add error_inputs for nn.Embedding module (#174180)

New tests catch out-of-range indices and float inputs before they crash your models.

Deep Dive

PyTorch contributor subinz1 has added error-input test cases for the nn.Embedding module in pull request #174180. The change closes a gap: this widely used lookup layer lacked error_inputs coverage, so misuse could surface as confusing failures deep inside a model rather than as clear messages at the call site. The new tests validate the errors raised in three common misuse scenarios: when indices exceed the configured vocabulary size, when float tensors are passed instead of integer index types, and when negative dimensions are specified at construction time.
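The three misuse scenarios are easy to reproduce directly. A minimal sketch follows; the exact exception types and messages vary across PyTorch versions, so the ones noted in the comments are assumptions rather than guarantees:

```python
import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=10, embedding_dim=4)  # valid indices: 0..9

# 1. Out-of-range index: 10 is outside the vocabulary (typically IndexError on CPU).
try:
    emb(torch.tensor([10]))
except (IndexError, RuntimeError) as e:
    print("out-of-range index:", e)

# 2. Float input: embedding indices must be an integer (Long/Int) tensor.
try:
    emb(torch.tensor([1.0, 2.0]))
except (IndexError, RuntimeError) as e:
    print("float indices:", e)

# 3. Negative dimension at construction time.
try:
    nn.Embedding(num_embeddings=-1, embedding_dim=4)
except (IndexError, RuntimeError) as e:
    print("negative dimension:", e)
```

Each call fails fast with a message naming the misuse, which is exactly what the new error_inputs entries assert on.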

This enhancement brings nn.Embedding in line with other core PyTorch modules such as BatchNorm, GroupNorm, and the RNN cells, which already have established error-testing patterns. The implementation also accounts for hardware differences: the out-of-range index test runs only on CPU, because on CUDA the same misuse triggers a device-side kernel assertion rather than a catchable Python exception. The tests were validated on H200 GPUs with CUDA 12.8. It is a subtle but useful robustness improvement, surfacing common programming errors earlier in the development cycle.
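The device caveat matters in practice: on CPU an out-of-range lookup raises an ordinary Python exception, while on CUDA it would fire a device-side assert ("CUDA error: device-side assert triggered") that cannot be caught cleanly and leaves the CUDA context unusable. A minimal CPU-only illustration of the behavior the test relies on:

```python
import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=5, embedding_dim=3)  # valid indices: 0..4

# On CPU this raises a catchable Python exception (typically IndexError).
try:
    emb(torch.tensor([7]))
except (IndexError, RuntimeError) as e:
    print("caught on CPU:", e)

# On a CUDA tensor, the equivalent call would instead trip a kernel
# assertion on the device, which is why the error_inputs test for this
# case is restricted to CPU.
```

This is why test frameworks often gate assertion-style error checks by device: the failure mode is the same misuse, but only the CPU path reports it as a recoverable exception.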

Key Points
  • Adds error_inputs testing for PyTorch's nn.Embedding module in PR #174180
  • Tests three specific error cases: out-of-range indices, float inputs, and negative dimensions
  • Follows established testing patterns from other modules like BatchNorm and RNN cells

Why It Matters

Better error testing means faster debugging and more reliable model development for ML engineers.