LTBs-KAN: Linear-Time B-splines Kolmogorov-Arnold Networks
Linear-time B-splines close KANs' 10x training-speed gap with MLPs while cutting parameters by 30%
A team led by Andres Mendez-Vazquez and Eduardo Rodriguez-Tello introduced LTBs-KAN, a Kolmogorov-Arnold Network architecture that replaces the traditional de Boor-Cox-Mansfield recursion for B-spline evaluation with a linear-time computation. This reduces training and inference complexity from quadratic O(n²) to linear O(n), removing the computational bottleneck that has limited KAN adoption.
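The paper's exact linear-time construction isn't reproduced in this summary, but the contrast it targets can be sketched. Below, a naive de Boor-Cox-Mansfield recursion evaluates every basis function on the grid (cost grows with grid size n), while a locality-aware evaluation, the standard local scheme from the NURBS literature used here as a stand-in for the authors' algorithm, computes only the p+1 basis functions that are nonzero at a point, so its per-point cost is independent of n:

```python
import numpy as np

def coxdeboor_all(x, t, p):
    """Naive de Boor-Cox-Mansfield recursion: evaluates ALL
    n = len(t)-p-1 degree-p basis functions at x, so cost scales
    with the grid size n -- the bottleneck the paper targets."""
    N = np.array([1.0 if t[i] <= x < t[i + 1] else 0.0
                  for i in range(len(t) - 1)])  # degree-0 indicators
    for k in range(1, p + 1):
        Nk = np.zeros(len(t) - k - 1)
        for i in range(len(Nk)):
            a = (x - t[i]) / (t[i + k] - t[i]) * N[i] if t[i + k] > t[i] else 0.0
            b = ((t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * N[i + 1]
                 if t[i + k + 1] > t[i + 1] else 0.0)
            Nk[i] = a + b
        N = Nk
    return N

def local_basis(x, t, p):
    """Locality-aware evaluation: only the p+1 basis functions covering
    x's knot span are nonzero, so per-point cost is O(p^2),
    independent of the grid size n."""
    j = int(np.searchsorted(t, x, side="right")) - 1  # span: t[j] <= x < t[j+1]
    j = min(max(j, p), len(t) - p - 2)
    N = np.zeros(p + 1); N[0] = 1.0
    left = np.zeros(p + 1); right = np.zeros(p + 1)
    for k in range(1, p + 1):
        left[k] = x - t[j + 1 - k]
        right[k] = t[j + k] - x
        saved = 0.0
        for r in range(k):
            tmp = N[r] / (right[r + 1] + left[k - r])
            N[r] = saved + right[r + 1] * tmp
            saved = left[k - r] * tmp
        N[k] = saved
    return j, N  # the nonzero values N_{j-p}(x), ..., N_j(x)
```

Both routines agree on the nonzero basis values at any interior point; only the local one stays cheap as the grid grows.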
The researchers further shrank model size by 30% using a product-of-sums matrix factorization in the forward pass, without sacrificing performance. On standard vision benchmarks (MNIST, Fashion-MNIST, CIFAR-10), LTBs-KAN matched the accuracy of existing KAN implementations while delivering substantial speed improvements, making KANs practical for real-world applications where latency matters.
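The summary names a product-of-sums factorization but doesn't spell out the scheme, so the following is only a generic illustration of how factorizing a weight matrix in the forward pass trims parameters. The sizes (512x512, rank 179) are hypothetical, chosen to land near the reported 30% reduction, and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 512, 512, 179  # hypothetical sizes, not from the paper

# Dense layer: one d_out x d_in weight matrix.
W = rng.standard_normal((d_out, d_in))

# Factorized forward pass: y = A @ (B @ x) replaces y = W @ x,
# storing two thin factors instead of the full matrix.
A = rng.standard_normal((d_out, rank))
B = rng.standard_normal((rank, d_in))

x_vec = rng.standard_normal(d_in)
y = A @ (B @ x_vec)  # same output shape as W @ x_vec

full_params = W.size                 # d_out * d_in
fact_params = A.size + B.size        # rank * (d_out + d_in)
print(f"dense: {full_params}, factorized: {fact_params}, "
      f"saving: {1 - fact_params / full_params:.0%}")
```

The parameter saving comes purely from the factor shapes: rank * (d_in + d_out) versus d_in * d_out, which at these sizes works out to roughly 30% fewer parameters.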
- LTBs-KAN reduces KAN computation from O(n²) to O(n) using linear-time B-splines, eliminating the de Boor-Cox-Mansfield bottleneck
- Product-of-sums matrix factorization trims 30% of parameters without accuracy loss
- Achieves comparable accuracy to existing KANs on MNIST, Fashion-MNIST, and CIFAR-10 with faster training
Why It Matters
Unlocks explainable KAN models for real-time applications by closing their 10x speed disadvantage relative to MLPs