trunk/774cc8d20fb976289af9e5b4cef8b3d460c615ab: Fix unbacked scalar repeat_interleave guards (#177305)
A subtle bug in PyTorch's repeat_interleave operator was preventing AI models from compiling with dynamic shapes.
The PyTorch team, led by developer @bobrenjc93 with assistance from Codex, has resolved a compilation bug (#177305) in the framework's `repeat_interleave` operator. The issue affected the scalar overload of the function, which incorrectly tried to concretize unbacked 0-dimensional symbolic integers (SymInts) during input validation. The operator used `TORCH_CHECK(repeats >= 0)` and forced `(repeats * size).guard_int()` in an eager-only execution path, so Dynamo and Inductor (PyTorch's just-in-time compilers) hit these guards early and failed to compile, even though the tensor overload handled the same constraints symbolically.
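To see why eager concretization breaks tracing, here is a minimal, purely illustrative Python sketch. The `UnbackedSymInt` class below is hypothetical (it is not PyTorch's real SymInt), but it mimics the key property: a value produced by something like `tensor.item()` has no concrete integer at trace time, so any `guard_int()`-style call must abort the trace.

```python
# Hypothetical mock of an unbacked symbolic integer. Its concrete value is
# unknown at trace time (e.g. it came from tensor.item()), so any attempt
# to concretize it must fail -- mirroring guard_int() on an unbacked SymInt.
class UnbackedSymInt:
    def __init__(self, name):
        self.name = name  # symbolic name, e.g. "u0"

    def __mul__(self, other):
        # Arithmetic stays symbolic; no concrete value is produced.
        return UnbackedSymInt(f"({self.name}*{other})")

    def guard_int(self):
        # Eager concretization: impossible without a concrete value.
        raise RuntimeError(
            f"cannot guard on data-dependent symbol {self.name}"
        )

repeats = UnbackedSymInt("u0")
try:
    # The eager-only path the old scalar overload took: force a concrete int.
    (repeats * 4).guard_int()
except RuntimeError as e:
    print("trace aborted:", e)
```

This is the failure mode the old scalar overload triggered: the check itself was reasonable, but evaluating it eagerly required a concrete value that an unbacked symbol cannot provide.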
The fix replaces these problematic validations with symbolic checks using `TORCH_SYM_CHECK`, including a symbolic equality check for `output_size`. This approach maintains runtime validation semantics for invalid inputs while allowing legitimate unbacked symbolic values to flow through the compilation pipeline. The solution also includes an Inductor regression test that ensures `x.repeat_interleave(repeats.item())` now compiles correctly with captured scalar outputs and dynamic output shapes.
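The idea behind a symbolic check can be sketched in a few lines of plain Python. This is a hedged illustration of the deferred-validation pattern, not PyTorch's actual `TORCH_SYM_CHECK` implementation: instead of forcing a concrete value at trace time, the check is recorded as a predicate and evaluated later, once real runtime values exist.

```python
# Illustrative sketch of deferred symbolic validation: record predicates at
# trace time, evaluate them at runtime. Names here are hypothetical.
deferred_checks = []

def sym_check(predicate, message):
    # Trace time: remember the check; never force a concrete value.
    deferred_checks.append((predicate, message))

def run_deferred_checks(env):
    # Runtime: evaluate every recorded predicate against concrete values.
    for predicate, message in deferred_checks:
        if not predicate(env):
            raise RuntimeError(message)

# Trace time: "repeats" is still symbolic, yet tracing proceeds.
sym_check(lambda env: env["repeats"] >= 0,
          "repeat_interleave: repeats must be non-negative")

# Runtime: the concrete value arrives and the check passes (or raises).
run_deferred_checks({"repeats": 3})
```

The payoff is the same as in the real fix: invalid inputs still fail at runtime with a clear error, but a legitimate unbacked value no longer aborts compilation just because its concrete value is unknown while tracing.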
This fix represents the proper long-term solution because it addresses the bug at the operator implementation level rather than applying a compiler-only workaround. It ensures consistency between the scalar and tensor overloads of `repeat_interleave` and preserves the framework's ability to handle dynamic shapes—a critical requirement for modern AI model development and deployment where input dimensions often vary during inference.
- Bug #177305 prevented compilation of models using `repeat_interleave` with scalar inputs and dynamic shapes
- Fixed by replacing eager-only validation with symbolic checks using `TORCH_SYM_CHECK`
- Maintains runtime validation while allowing unbacked symbolic values through compilation
Why It Matters
This fix enables AI developers to compile models with dynamic shapes using common PyTorch operations, removing a significant barrier to efficient model deployment.