trunk/57d969a9aa41cbb5dd6ec74fd4b7d859da570ebf: Fix: add missing compute_unbacked_binding [HF torchbench] (#174002)
A subtle bug was silently breaking training for major AI models like M2M100.
PyTorch developers have patched a critical bug in the Dynamo/Inductor compiler that caused `PendingUnbackedSymbolNotFound` errors during training. The issue specifically affected large transformer models such as M2M100ForConditionalGeneration when graph breaks occurred. The fix adds a missing call to `compute_unbacked_binding` so that unbacked symbols in view inputs are properly registered, preventing crashes in realistic, larger-scale training runs — a failure mode that simpler unit tests did not exercise.
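To make the failure mode concrete, here is a minimal sketch in plain Python. It is a conceptual analogy, not PyTorch's actual internals: the class and method names (`ShapeEnv`, `create_unbacked_symbol`, `compute_unbacked_bindings`, `lookup`) are illustrative stand-ins for the real machinery. The idea it models is real, though: a data-dependent size (e.g. the length of a `nonzero()` result) gets a fresh "unbacked" symbol with no concrete value at trace time, and if nothing ever records how that symbol is derived, a later lookup — such as when resuming after a graph break — fails.

```python
# Conceptual sketch only: illustrates why a missing "binding" step for
# an unbacked symbol causes a lookup failure later. These names are
# hypothetical, not the real Dynamo/Inductor API.

class PendingUnbackedSymbolNotFound(Exception):
    """Raised when a symbol was created but its binding was never recorded."""

class ShapeEnv:
    def __init__(self):
        self.pending = set()   # unbacked symbols created but not yet bound
        self.bindings = {}     # symbol -> expression it is derived from

    def create_unbacked_symbol(self, name):
        # A data-dependent size gets a fresh symbol with no known value.
        self.pending.add(name)
        return name

    def compute_unbacked_bindings(self, sym, expr):
        # The analogue of the fix: record how the symbol is derived from
        # the graph input, moving it out of the pending set.
        self.pending.discard(sym)
        self.bindings[sym] = expr

    def lookup(self, sym):
        if sym in self.pending:
            raise PendingUnbackedSymbolNotFound(sym)
        return self.bindings[sym]

env = ShapeEnv()
u0 = env.create_unbacked_symbol("u0")

# Without the binding step, a later lookup (e.g. when resuming after a
# graph break) hits the pending set and fails:
try:
    env.lookup(u0)
    missing = False
except PendingUnbackedSymbolNotFound:
    missing = True

# With the binding recorded, the same lookup succeeds:
env.compute_unbacked_bindings(u0, "view_input.size(0)")
resolved = env.lookup(u0)
```

In this toy model, the patch corresponds to ensuring `compute_unbacked_bindings` is actually called for symbols arising from view inputs, so nothing is left in the pending set when the compiler later needs the symbol's value.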
Why It Matters
This fix stabilizes training for large transformer models that rely on PyTorch's compiler stack (`torch.compile` with Dynamo/Inductor), preventing hidden failures in production-scale workloads.