Research & Papers

SDSL-Solver: Scalable Distributed Sparse Linear Solvers for Large-Scale Interior Point Methods

New distributed solver handles 5M+ variables with up to a 7.77x average speedup over PETSc

Deep Dive

SDSL-Solver, developed by Shaofeng Yang and colleagues, tackles the computational bottleneck in interior point methods (IPMs) where sparse linear system solving consumes over 70% of total time. The framework employs Krylov subspace methods enhanced with numerics-based sparse filtering and diagonal correction to produce high-quality preconditioners. It offers two distributed parallel methods: Block Jacobi for well-conditioned systems and Bordered Block Diagonal (BBD) for ill-conditioned problems requiring global Schur complement preconditioning. A preconditioner reuse strategy amortizes construction costs across IPM iterations.
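To make the preconditioning idea concrete, here is a minimal serial sketch of a filter-and-correct preconditioner used with a Krylov solve. The drop rule (relative per-row threshold `tau`) and the function names are illustrative assumptions, not the paper's formulation; the point is that small entries are filtered out, their magnitude is folded back into the diagonal, and the cheap factorization of the filtered matrix is reused across solves.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def filtered_preconditioner(A, tau=1e-3):
    """Sparsify A and factor the result as a preconditioner.

    Off-diagonal entries with magnitude below tau times the largest
    magnitude in their row are dropped, and the dropped magnitude is
    added back to the diagonal ("diagonal correction") so the filtered
    matrix stays well conditioned. Illustrative sketch only; the exact
    filtering rule in SDSL-Solver may differ.
    """
    A = A.tocoo()
    n = A.shape[0]
    row_max = np.zeros(n)
    np.maximum.at(row_max, A.row, np.abs(A.data))
    keep = (np.abs(A.data) >= tau * row_max[A.row]) | (A.row == A.col)
    corr = np.zeros(n)                      # diagonal correction terms
    np.add.at(corr, A.row[~keep], np.abs(A.data[~keep]))
    M = sp.coo_matrix((A.data[keep], (A.row[keep], A.col[keep])),
                      shape=A.shape)
    return spla.splu((M + sp.diags(corr)).tocsc())

# Build a diagonally dominant sparse test system.
rng = np.random.default_rng(0)
n = 500
A = sp.random(n, n, density=0.02, random_state=rng, format="csr")
A = A + A.T + sp.diags(np.full(n, 20.0))
b = rng.standard_normal(n)

# Preconditioner reuse: factor once, then apply it across successive
# Krylov solves (in an IPM, across iterations with similar systems).
lu = filtered_preconditioner(A, tau=1e-2)
M = spla.LinearOperator(A.shape, matvec=lu.solve)
x, info = spla.gmres(A, b, M=M)  # info == 0 indicates convergence
```

In an IPM loop, the factorization `lu` would be kept and reapplied while the KKT system changes slowly, which is the amortization the reuse strategy exploits.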

Benchmarked on matrices ranging from tens of thousands to over 5 million variables on multi-node x86 clusters, SDSL-Solver on a four-node setup achieves average speedups of 6.23x (Block Jacobi) and 7.77x (BBD) over PETSc on the same nodes. Against single-node PARDISO, the speedups reach 97.54x and 5.85x respectively. The work addresses critical scalability issues in optimization for logistics, finance, and machine learning.

Key Points
  • Targets the sparse linear solves that consume over 70% of total IPM optimization time
  • Two distributed methods: Block Jacobi for diagonally dominant systems and BBD for ill-conditioned ones via a global Schur complement preconditioner
  • Up to 97.54x faster than PARDISO on a single node, 7.77x faster than PETSc on 4 nodes
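The Block Jacobi variant can be sketched as follows: each diagonal block is factored independently, which is exactly the work a rank would do locally in a distributed run. This serial sketch mimics only the math, not the paper's MPI implementation, and all names are illustrative.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def block_jacobi(A, nblocks):
    """Block Jacobi preconditioner: factor each diagonal block on its
    own, ignoring off-block coupling. In a distributed solver each
    rank would own one block; here the blocks are just slices."""
    A = A.tocsr()
    n = A.shape[0]
    bounds = np.linspace(0, n, nblocks + 1, dtype=int)
    pairs = list(zip(bounds[:-1], bounds[1:]))
    lus = [spla.splu(A[s:e, s:e].tocsc()) for s, e in pairs]

    def apply(r):
        z = np.empty_like(r)
        for (s, e), lu in zip(pairs, lus):
            z[s:e] = lu.solve(r[s:e])  # each solve is rank-local work
        return z

    return spla.LinearOperator(A.shape, matvec=apply)

# Diagonally dominant test system, the regime where Block Jacobi works well.
rng = np.random.default_rng(1)
n = 400
A = sp.random(n, n, density=0.02, random_state=rng, format="csr")
A = A + A.T + sp.diags(np.full(n, 20.0))
b = rng.standard_normal(n)

x, info = spla.gmres(A, b, M=block_jacobi(A, nblocks=4))
```

Because the blocks couple only through the outer Krylov iteration, this method needs no global factorization; the BBD variant adds a global Schur complement precisely to recover robustness when this decoupling fails on ill-conditioned systems.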

Why It Matters

Enables faster, scalable optimization for million-variable problems in logistics, finance, and ML.