Research & Papers

Explicit integral representations and quantitative bounds for two-layer ReLU networks

A new mathematical result shows that, for these networks, polynomial approximation error bounds do not depend explicitly on the input dimension.

Deep Dive

Anthony Lee's new paper introduces a method to construct explicit integral representations for two-layer ReLU networks, providing relatively simple formulas for any multivariate polynomial. This is a significant theoretical advance because it offers a closed-form understanding of how these networks represent functions, moving beyond black-box approximations.
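The paper's own formulas are not reproduced here, but for orientation the following standard univariate identity (Taylor's theorem with integral remainder) shows how ReLU units naturally appear in integral representations, and the last line is the generic infinite-width form such representations take; neither is specific to the paper's construction.

```latex
% Classical univariate analogue: for f twice differentiable on [0, 1],
f(x) = f(0) + f'(0)\,x + \int_0^1 f''(t)\,\mathrm{ReLU}(x - t)\,dt
% so, for example, x^2 = \int_0^1 2\,\mathrm{ReLU}(x - t)\,dt on [0, 1].
% Generic infinite-width (integral) form of a two-layer ReLU network in d variables:
f(x) = \int a(w, b)\,\mathrm{ReLU}(\langle w, x\rangle - b)\,d\mu(w, b)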

The most striking finding concerns a sharpened ReLU integral representation that uses a harmonic extension and a projection: the resulting approximation error bounds in L²(𝒟) do not depend explicitly on the input dimension or the polynomial degree. They depend only on the coefficients of the monomial expansion and on the data distribution 𝒟. This suggests that shallow ReLU networks can escape the curse of dimensionality for certain function classes, a result with major implications for both theory and practice.
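As a rough numerical complement, here is a minimal sketch that discretizes the univariate identity above into a finite-width two-layer ReLU network and estimates its L² error under a uniform data distribution standing in for 𝒟. The names (`two_layer_relu`, `n_hidden`) and the choice of target are illustrative assumptions; the code does not implement the paper's harmonic extension or projection.

```python
import numpy as np

# Toy check of the classical identity  x**2 = ∫_0^1 2*ReLU(x - t) dt  on [0, 1],
# discretized by the midpoint rule into a finite two-layer ReLU network.
# Illustrative sketch only; not the paper's multivariate construction.

def relu(z):
    return np.maximum(z, 0.0)

def two_layer_relu(x, n_hidden=64):
    """Finite-width network f_n(x) = sum_j c * ReLU(x - t_j) approximating x**2 on [0, 1]."""
    t = (np.arange(n_hidden) + 0.5) / n_hidden   # biases t_j: midpoints of [0, 1]
    c = 2.0 / n_hidden                           # outer weight: f''(t) = 2 times step 1/n
    return c * relu(x[:, None] - t[None, :]).sum(axis=1)

# L2 error under a uniform data distribution on [0, 1] (a stand-in for the paper's 𝒟).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=100_000)
for n in (8, 32, 128):
    err = np.sqrt(np.mean((two_layer_relu(x, n) - x**2) ** 2))
    print(f"n_hidden={n:4d}  L2 error ~ {err:.2e}")
```

The error shrinks steadily as the width grows, which is the univariate flavor of the L²(𝒟) control the paper makes quantitative, and dimension-independent, in the multivariate setting.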

Key Points
  • Provides explicit integral formulas for any multivariate polynomial using two-layer ReLU networks.
  • Approximation error bounds in L²(𝒟) do not depend explicitly on the input dimension or the polynomial degree.
  • The sharpened representation uses a harmonic extension and projection to achieve these bounds.

Why It Matters

This theoretical result could lead to more efficient training and a clearer understanding of when shallow networks escape the curse of dimensionality.