Research & Papers

Physical Foundation Models: Fixed-hardware implementations of large-scale neural networks

Fixed-hardware neural nets could cut AI energy costs by orders of magnitude.

Deep Dive

As foundation models like GPT-5, Gemini 3, and Opus 4 grow to roughly 10^12 parameters and are reused across countless tasks, a new opportunity emerges for hardware engineers: building special-purpose, fixed implementations of these networks. The paper by Wright et al. argues for Physical Foundation Models (PFMs), hardware in which the neural network is realized directly at the physical design level and operates via natural physical dynamics. This radical departure from conventional digital inference could deliver orders-of-magnitude improvements in energy efficiency, speed, and parameter density. For instance, a 10^12-parameter PFM could sharply reduce datacenter energy costs and allow AI of that scale to run on power-constrained edge devices. Even larger models, at 10^15 or 10^18 parameters, appear achievable.
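
The headline energy claim is easiest to appreciate with a rough back-of-envelope comparison. The sketch below contrasts per-token inference energy on a digital accelerator with a hypothetical physical implementation; the per-MAC energy figures are illustrative assumptions chosen for the comparison, not numbers taken from the paper.

```python
# Back-of-envelope comparison: per-token inference energy, digital vs. physical.
# All energy figures are illustrative assumptions, not values from Wright et al.

PARAMS = 1e12                  # 10^12-parameter model discussed in the paper
MACS_PER_TOKEN = 2 * PARAMS    # ~2 multiply-accumulates per parameter per token (dense inference)

E_MAC_DIGITAL_J = 1e-12        # assumed ~1 pJ per MAC on a digital accelerator
E_MAC_PHYSICAL_J = 1e-15       # assumed ~1 fJ per MAC in an analog/optical medium

def energy_per_token(e_mac_joules: float) -> float:
    """Energy to produce one token, ignoring memory movement and I/O overheads."""
    return MACS_PER_TOKEN * e_mac_joules

digital = energy_per_token(E_MAC_DIGITAL_J)
physical = energy_per_token(E_MAC_PHYSICAL_J)

print(f"digital : {digital:.3f} J/token")
print(f"physical: {physical:.3f} J/token")
print(f"ratio   : {digital / physical:.0f}x")   # ~3 orders of magnitude under these assumptions
```

Under these assumed figures the gap per token is about three orders of magnitude, which is the sense in which a fixed physical implementation could move a 10^12-parameter model from datacenter power budgets toward edge-device ones.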

The authors illustrate PFM scaling with an optical example: a 3D nanostructured glass medium that performs neural network computations through light propagation. They also discuss prospects in nanoelectronics and other physical platforms. However, major research challenges remain, including fabrication precision, reconfigurability, and integration with existing workflows. If these hurdles are overcome, PFMs could reshape AI infrastructure, making massive models both energy-efficient and widely deployable.
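
At a conceptual level, the optical example treats a fixed medium as a frozen layer: propagation through the structured glass acts approximately as a linear transform whose weights are baked in at fabrication time, with detection supplying a nonlinearity. The toy sketch below models such a layer in NumPy to make the picture concrete; the random matrix, the intensity-detection nonlinearity, and the fabrication-noise figure are all assumptions for illustration, not a simulation of the paper's device.

```python
import numpy as np

# Toy model of a fixed physical layer: propagation through a static medium is
# approximated as a frozen linear transform W, followed by intensity detection
# as the nonlinearity. A cartoon of the idea, not the paper's 3D glass device.

rng = np.random.default_rng(0)
n_in, n_out = 256, 128

# "Designed" weights, fixed at fabrication time and never updated afterwards.
W_designed = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_out, n_in))

def fixed_physical_layer(x: np.ndarray, fabrication_error: float = 0.0) -> np.ndarray:
    """Apply the baked-in layer; fabrication_error models per-weight relative imprecision."""
    W_actual = W_designed * (1.0 + fabrication_error * rng.normal(size=W_designed.shape))
    return np.abs(W_actual @ x) ** 2   # photodetectors measure intensity, giving a nonlinearity

x = rng.normal(size=n_in)              # input "light field"
y_ideal = fixed_physical_layer(x)      # perfectly fabricated medium
y_real = fixed_physical_layer(x, 0.05) # 5% relative error in every weight

rel_err = np.linalg.norm(y_real - y_ideal) / np.linalg.norm(y_ideal)
print(f"output error from 5% fabrication noise: {rel_err:.1%}")
```

Because the weights are fixed in matter rather than stored in memory, fabrication errors directly perturb the effective transform, which is one reason the authors flag fabrication precision and reconfigurability as key open challenges.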

Key Points
  • PFMs are fixed-hardware implementations of trillion-parameter models like GPT-5, Gemini 3, and Opus 4.
  • Potential for orders-of-magnitude improvements in energy efficiency, speed, and parameter density.
  • Optical example using 3D nanostructured glass; scaling to 10^15–10^18 parameters is plausible.

Why It Matters

If realized, PFMs could drastically cut AI's energy footprint and bring GPT-class intelligence to edge devices.