trunk/93352e1812267a9482dd4b0d2a1c29f77eed3f85: [BE] Remove pre-MacOS14 check from MpsDeviceInterface (#175804)
A single-line code change now gives all M1/M2/M3 Macs access to faster, more efficient AI training.
The PyTorch development team has merged a seemingly minor but significant change (pull request #175804) that removes a restrictive version check from its Metal Performance Shaders (MPS) device interface. Previously, PyTorch's Apple Silicon backend required macOS 14 (Sonoma) or later to use the BFloat16 (Brain Floating Point 16) data type. The commit, authored by 'malfet', eliminates this 'pre-MacOS14 check' on the grounds that 'BFloat16 is supported on all currently supported Apple Silicon devices.' The change effectively extends a key performance feature to the many Macs still running macOS 13 (Ventura).
The technical implication is substantial. BFloat16 is a 16-bit floating-point format that keeps FP32's 8-bit exponent, so it covers the same dynamic range, while truncating the mantissa to 7 bits, trading precision for half the memory per value. That trade-off is crucial for accelerating machine learning workloads, especially training large models, because it reduces memory bandwidth pressure and computation time. By unlocking BFloat16 on macOS 13, PyTorch grants developers and researchers on M1, M2, and M3 Macs more efficient model training and inference locally. This lowers the barrier to entry for on-device AI development and aligns with Apple's push for its GPU and Neural Engine ecosystem, making Macs more competitive as AI development platforms without requiring a full OS upgrade.
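The format trade-off described above can be illustrated in plain Python: a BFloat16 value is simply the top 16 bits of a float32's bit pattern, so conversion keeps the sign and full exponent while discarding low mantissa bits. This is a minimal sketch (using plain truncation rather than round-to-nearest-even, which real hardware typically uses); the function names are illustrative, not PyTorch APIs.

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Return the bfloat16 bit pattern of x: the top 16 bits of its
    float32 encoding (sign + 8-bit exponent + 7 mantissa bits)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16  # simplified: truncation instead of rounding

def bfloat16_bits_to_float32(b: int) -> float:
    """Re-expand a bfloat16 bit pattern to float32 by zero-padding
    the missing 16 mantissa bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

# 1.0 has an all-zero low mantissa, so it round-trips exactly;
# pi loses its low 16 mantissa bits and comes back as 3.140625.
exact = bfloat16_bits_to_float32(float32_to_bfloat16_bits(1.0))
lossy = bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.14159265))
print(exact, lossy)
```

The round-trip makes the precision/range trade concrete: values representable in 7 mantissa bits survive unchanged, while everything else is approximated, yet even tiny or huge magnitudes stay in range because the exponent is untouched.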
- PyTorch pull request #175804 removes a macOS version gate, enabling BFloat16 on macOS 13.
- BFloat16 support is now available on all supported Apple Silicon Macs (M1/M2/M3), halving per-value memory versus FP32.
- Enables faster, more memory-efficient AI model training and inference directly on compatible Mac hardware.
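In practice, opting in is a one-line dtype choice. A minimal sketch of how a developer might use BFloat16 with the MPS backend, falling back to CPU where MPS is unavailable (the layer sizes here are arbitrary examples):

```python
import torch

# Prefer the Apple-GPU (MPS) backend when present; fall back to CPU.
device = "mps" if torch.backends.mps.is_available() else "cpu"

# A toy linear layer and batch in bfloat16: each value occupies
# 2 bytes instead of float32's 4.
layer = torch.nn.Linear(64, 32).to(device=device, dtype=torch.bfloat16)
x = torch.randn(8, 64, device=device, dtype=torch.bfloat16)

y = layer(x)
print(y.dtype, y.shape)  # torch.bfloat16 torch.Size([8, 32])
```

With the version gate removed, this same code now exercises the bfloat16 path on macOS 13 machines instead of raising an unsupported-dtype error.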
Why It Matters
Unlocks professional-grade AI training performance for developers on older macOS versions, expanding the accessible hardware base for on-device ML.