trunk/3646a5df996c7ed344fbaba6b35ecd6164181e48: Centralize FX graph cacheability validation (#180795)
A new CacheabilityValidator rejects non-cacheable inputs before they waste compute on cache-key construction.
PyTorch has merged a significant commit (PR #180795) that overhauls how its FX graph caching system validates cacheability. The core change introduces a CacheabilityValidator, a centralized authority that determines whether an FX graph can be cached before the hashing process begins. Previously, cacheability decisions were fragmented: some checks happened upfront, while others only occurred during serialization in the pickler reducers. This split made the policy difficult to audit and allowed non-cacheable inputs to waste compute cycles by reaching the cache key construction phase before being rejected.
The fix routes all bypass decisions—including those for custom passes, frozen constants, runtime constant folding, the compiler bisector, the shape environment, HOPs (higher-order operators), torchbind objects, tensors, and unsupported reducers—through the new validator. The FxGraphCache._check_can_cache method now runs the validator before hashing, preventing wasted work. Pickler fallback paths still use validator helpers to handle serialization failures defensively, ensuring backward compatibility. This architectural change separates eligibility checks from serialization mechanics, making future cacheability rules easier to add and the overall system more maintainable. The commit was drafted via Codex, manually reviewed by @bobrenjc93, and approved by @Lucaskabela.
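The centralized-validator pattern described above can be sketched as follows. This is an illustrative minimal sketch, not PyTorch's actual implementation: the class and function names (BypassReason, register, check_can_cache) and the rule predicates are hypothetical stand-ins for the real CacheabilityValidator and FxGraphCache._check_can_cache internals.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class BypassReason:
    """Why a graph was deemed non-cacheable (hypothetical helper type)."""
    rule: str
    detail: str

class CacheabilityValidator:
    """Runs every bypass rule up front, before any hashing work is spent."""

    def __init__(self) -> None:
        self._rules: list[Callable[[object], Optional[BypassReason]]] = []

    def register(self, rule: Callable[[object], Optional[BypassReason]]):
        # Each rule inspects the graph and returns a BypassReason to reject
        # it, or None to let the next rule run.
        self._rules.append(rule)
        return rule

    def validate(self, graph) -> Optional[BypassReason]:
        # Return the first reason the graph cannot be cached, else None.
        for rule in self._rules:
            reason = rule(graph)
            if reason is not None:
                return reason
        return None

validator = CacheabilityValidator()

@validator.register
def reject_custom_passes(graph):
    # Illustrative rule: user-supplied FX passes can't be keyed reliably.
    if getattr(graph, "has_custom_passes", False):
        return BypassReason("custom_passes", "user-supplied passes present")
    return None

@validator.register
def reject_frozen_constants(graph):
    # Illustrative rule: frozen constants may differ between runs.
    if getattr(graph, "has_frozen_constants", False):
        return BypassReason("frozen_constants", "frozen constants present")
    return None

def check_can_cache(graph) -> bool:
    # Analogue of running the validator before cache-key construction:
    # a rejection here means hashing and serialization are never attempted.
    return validator.validate(graph) is None
```

Because every rule lives behind one `register` call, adding a new bypass condition is a local change, and the full policy can be audited by reading the registered rules in one place—the maintainability point the commit makes.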
- Centralizes all FX graph cache bypass decisions into a single CacheabilityValidator, replacing fragmented upfront and serialization-time checks.
- Introduces focused regression tests for upfront rejection of MKLDNN-packed constants, improving coverage of cacheability edge cases.
- Separates eligibility checks from serialization mechanics, making future cacheability rules easier to add and audit.
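The style of regression test the commit adds can be illustrated with the sketch below. Everything here is a hypothetical analogue, not PyTorch's actual test suite: _check_can_cache, FakeGraph, and the has_mkldnn_constants flag are stand-ins showing how an upfront rejection (before any hashing) can be asserted directly.

```python
import unittest

def _check_can_cache(graph) -> bool:
    # Stand-in for the upfront check: reject before hashing when the graph
    # carries constants that cannot be safely serialized (e.g. MKLDNN-packed).
    return not getattr(graph, "has_mkldnn_constants", False)

class FakeGraph:
    """Minimal fake graph carrying only the flag the rule inspects."""
    def __init__(self, has_mkldnn_constants: bool = False) -> None:
        self.has_mkldnn_constants = has_mkldnn_constants

class TestUpfrontRejection(unittest.TestCase):
    def test_mkldnn_constants_rejected_before_hashing(self):
        # A graph with MKLDNN-packed constants must be bypassed upfront.
        self.assertFalse(_check_can_cache(FakeGraph(has_mkldnn_constants=True)))

    def test_plain_graph_accepted(self):
        # An ordinary graph passes the upfront check.
        self.assertTrue(_check_can_cache(FakeGraph()))
```

Testing the check in isolation like this is only possible because eligibility is now decoupled from serialization: before the change, the same rejection could only be observed indirectly as a pickler failure.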
Why It Matters
Reduces wasted compute in PyTorch's compiler by rejecting uncacheable graphs before any hashing work begins, speeding up model compilation for developers.