Research & Papers

Stabilizing Private LASSO under Heterogeneous Covariates via Anisotropic Objective Perturbation

New pre-distortion technique counters covariate heterogeneity without extra privacy budget.

Deep Dive

A new paper from researchers Haruka Tanzawa and Ayaka Sakata tackles a key challenge in differentially private machine learning: handling heterogeneous covariates in high-dimensional LASSO regression. Standard approaches often require rescaling covariates to a common scale, but doing so under differential privacy consumes additional privacy budget and can degrade algorithm stability. The authors show that covariate heterogeneity, acting through the inverse Gram matrix, makes the objective-perturbation noise effectively anisotropic, which destabilizes the estimator.
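To make the anisotropy effect concrete, here is a minimal numeric sketch (not taken from the paper; the scales, dimensions, and sample sizes are illustrative assumptions): when two covariates sit at very different scales, isotropic perturbation noise pushed through the inverse Gram matrix lands very unevenly across coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Two covariates at very different scales (heterogeneous units).
X = rng.standard_normal((n, 2)) * np.array([1.0, 10.0])
G = X.T @ X / n                          # empirical Gram matrix, ~diag(1, 100)
noise = rng.standard_normal((2, 10_000))  # isotropic perturbation draws
mapped = np.linalg.inv(G) @ noise         # effective noise seen by the estimator
per_coord_std = mapped.std(axis=1)
# The small-scale coordinate absorbs far more effective noise than
# the large-scale one, even though the injected noise was isotropic.
print(per_coord_std)
```

The roughly 100:1 spread in effective noise between the two coordinates is the kind of instability the paper's pre-distortion is designed to remove.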

To address this, they propose a Gram-based anisotropic objective perturbation—a 'pre-distortion' strategy that counteracts the distortion from covariate structure, restoring isotropy in the estimation process. Using an Approximate Message Passing (AMP) framework and state evolution analysis, they demonstrate that their method stabilizes convergence and improves both statistical efficiency and privacy performance compared to standard uniform noise injection. The theoretical results provide a path to designing stable and efficient private estimators without relying on data-dependent preprocessing, which is particularly valuable in sensitive domains like healthcare and finance where covariate scales naturally vary.
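The following sketch illustrates the general idea under stated assumptions, not the paper's exact algorithm: in standard objective perturbation for LASSO, an isotropic noise vector is added to the linear term of the objective; here we instead shape the noise covariance by the empirical Gram matrix (one plausible reading of "Gram-based pre-distortion"), so that after the estimator's implicit inverse-Gram mapping the effective noise is roughly isotropic. The function name, noise form, and solver are hypothetical; the proximal-gradient (ISTA) solver is a generic stand-in, not the paper's AMP iteration.

```python
import numpy as np

def private_lasso_objective_perturbation(X, y, lam, sigma,
                                         anisotropic=True, n_iter=500, seed=None):
    """Illustrative sketch of objective perturbation for LASSO.

    Minimizes (1/2n)||y - X b||^2 + (1/n) noise^T b + lam * ||b||_1.
    With anisotropic=True, the noise covariance is sigma^2 * G with
    G = X^T X / n (an ASSUMED form of the paper's pre-distortion);
    otherwise the noise is isotropic, as in standard schemes.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    G = X.T @ X / n
    if anisotropic:
        # noise ~ N(0, sigma^2 G): more noise where the data has more
        # energy, so the inverse-Gram mapping roughly isotropizes it.
        L = np.linalg.cholesky(G + 1e-8 * np.eye(p))
        noise = sigma * L @ rng.standard_normal(p)
    else:
        noise = sigma * rng.standard_normal(p)

    # Generic proximal-gradient (ISTA) loop on the perturbed objective.
    beta = np.zeros(p)
    step = 1.0 / np.linalg.norm(G, 2)  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = G @ beta - X.T @ y / n + noise / n
        z = beta - step * grad
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return beta
```

Note that no privacy calibration is shown: in a real mechanism, sigma would be set from the sensitivity analysis and the (epsilon, delta) target, which this sketch deliberately leaves out.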

Key Points
  • Proposes Gram-based anisotropic objective perturbation to counteract covariate heterogeneity in private LASSO.
  • Uses Approximate Message Passing (AMP) and state evolution analysis to prove improved convergence stability.
  • Achieves better statistical efficiency and privacy without consuming extra privacy budget for data preprocessing.

Why It Matters

Enables robust differentially private regression on real-world data with mixed-scale features, without sacrificing accuracy or privacy budget.