Enhancing Physics-Informed Neural Networks with Domain-aware Fourier Features: Towards Improved Performance and Interpretable Results
New Domain-aware Fourier Features achieve orders-of-magnitude lower errors and faster convergence for physics-informed neural networks.
A research team including Konstantinos E. Tatsis has introduced a breakthrough method called PINN-DaFFs that fundamentally improves how Physics-Informed Neural Networks (PINNs) learn and solve complex physical systems. The core innovation replaces standard Random Fourier Features (RFFs) with Domain-aware Fourier Features (DaFFs) that directly encode the geometry and boundary conditions of the problem domain into the neural network's positional encoding. This architectural shift eliminates the need for separate boundary condition loss terms and complex loss balancing schemes that have traditionally made PINNs difficult and expensive to train. The result is a model that inherently respects the physics of the problem space from the outset, rather than learning it through penalty terms.
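The article does not give the exact DaFF construction, so the following NumPy sketch only illustrates the contrast the paragraph draws. `rff` is the standard random-frequency encoding; `daff` is a hypothetical 1D instance of a domain-aware encoding: on a domain [0, L] with homogeneous Dirichlet boundary conditions, sine modes matched to the domain length vanish at both boundaries, so any network built on top of them satisfies the boundary conditions by construction and needs no separate boundary loss term. All names and parameters here are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff(x, B):
    """Standard Random Fourier Features: frequencies drawn at random,
    blind to the domain geometry and its boundary conditions."""
    proj = 2.0 * np.pi * np.outer(x, B)
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

def daff(x, L, n_freq):
    """Hypothetical domain-aware features for a 1D domain [0, L] with
    homogeneous Dirichlet BCs: every mode sin(n*pi*x/L) is exactly zero
    at x = 0 and x = L, so the boundary conditions hold by construction."""
    n = np.arange(1, n_freq + 1)
    return np.sin(np.pi * np.outer(x, n) / L)

L = 1.0
x = np.array([0.0, 0.25, 0.5, L])          # includes both boundary points
B = rng.normal(scale=2.0, size=8)          # random frequencies for RFF
phi_rff = rff(x, B)                        # generally nonzero at boundaries
phi_daff = daff(x, L, n_freq=8)            # zero rows at x = 0 and x = L
print(np.allclose(phi_daff[[0, -1]], 0.0))  # True
```

A network head applied to `phi_daff` inherits the zero boundary values, which is the mechanism by which a geometry-aware encoding can make explicit boundary penalty terms redundant.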
The technical implementation demonstrates remarkable performance gains: PINN-DaFFs achieve orders-of-magnitude lower prediction errors and significantly faster convergence compared to both vanilla PINNs and RFF-based PINNs. Beyond raw accuracy, the researchers developed a Layer-wise Relevance Propagation (LRP) framework specifically tailored to PINNs, revealing that DaFFs produce feature attributions that align more closely with actual physical principles, whereas vanilla and RFF-based PINNs yield scattered, less interpretable attribution patterns. This combination of improved efficiency, accuracy, and explainability addresses three major pain points in scientific machine learning simultaneously. The work lays groundwork for more robust physics-informed learning systems that could accelerate discovery in fields like computational fluid dynamics, materials science, and climate modeling, where interpretability is as crucial as predictive power.
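The article does not detail the PINN-specific adaptation of LRP, but the standard epsilon rule that LRP builds on can be sketched in a few lines. The rule walks backward through the network, redistributing the output's relevance to each input in proportion to its contribution to the pre-activation; the tiny network, weights, and `eps` value below are illustrative assumptions.

```python
import numpy as np

def lrp_epsilon(weights, activations, relevance, eps=1e-6):
    """Backward pass of the LRP epsilon rule through a stack of linear
    layers (biases omitted for brevity). activations[i] is the input to
    layer i; eps stabilizes the division when pre-activations are small."""
    for W, a in zip(reversed(weights), reversed(activations)):
        z = W @ a                        # pre-activations of this layer
        z = z + eps * np.sign(z)         # epsilon stabilizer
        s = relevance / z
        relevance = a * (W.T @ s)        # relevance redistributed to inputs
    return relevance

# Toy two-layer network with fixed weights (illustrative only).
W1 = np.array([[ 0.6, -0.4],
               [ 0.3,  0.8],
               [-0.5,  0.2],
               [ 0.7,  0.1]])
W2 = np.array([[0.5, -0.6, 0.4, 0.3]])
x = np.array([0.7, -0.3])

a1 = W1 @ x                              # hidden pre-activations
y = W2 @ a1                              # network output
R = lrp_epsilon([W1, W2], [x, a1], y)    # per-input relevance scores
```

Up to the epsilon stabilizer, the rule conserves relevance: the input attributions in `R` sum back to the output `y`, which is what makes attribution maps like those in the study comparable across architectures.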
- PINN-DaFFs method eliminates explicit boundary condition loss terms and loss balancing, simplifying optimization
- Achieves orders-of-magnitude lower errors and faster convergence compared to vanilla PINNs and RFF-based PINNs
- New LRP explainability framework shows DaFFs produce more physically consistent feature attributions than previous methods
Why It Matters
Makes physics-based AI models dramatically cheaper to train and more interpretable for scientific discovery.