On Solving Problems of Substantially Super-linear Complexity in $N^{o(1)}$ Rounds in the MPC Model
If local memory is small, parallel speedup hits a fundamental wall.
Andrzej Lingas has published a new theoretical result on arXiv that tackles a core question in parallel computing: how fast can we solve hard problems on a large cluster? The paper, submitted on May 5, 2026, studies the Massively Parallel Computing (MPC) model, the standard abstraction for MapReduce-style frameworks. Lingas asks whether problems with substantially super-linear polynomial sequential time complexity (like many graph and optimization tasks) can be solved in a very small number of rounds: N^(o(1)), i.e., sub-polynomial in the input size N.
His answer is nuanced but fundamental: if each machine's local memory is not relatively large and the total number of machines does not exceed N, then the exponent of the average local computation time per machine per round must be strictly larger than the exponent of the problem's sequential time complexity. In plain terms, you cannot magically shrink total work by adding more machines; each node must perform disproportionately heavy computation in every round. This places a provable lower bound on the trade-off between rounds, memory, and local work, with direct implications for the design of distributed algorithms in cloud computing, big data analytics, and AI training pipelines.
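The paper's memory-sensitive bound is stronger than simple counting, but a back-of-envelope work-counting heuristic already hints at why few rounds force heavy local work. The sketch below is purely illustrative (the function name and parameters are invented for this example, and it is not the paper's argument): it assumes parallelism saves no total work, so M machines times R rounds times per-round local time must cover the sequential cost N^c.

```python
# Back-of-envelope work-counting illustration (NOT the paper's proof):
# if a problem needs N^c sequential time and parallelism saves no total
# work, then M machines running R rounds with N^t local time each must
# satisfy  M * R * N^t >= N^c.  Writing M = N^machine_exp and
# R = N^round_exp and comparing exponents gives t >= c - machine_exp - round_exp.

def min_local_exponent(c: float, machine_exp: float, round_exp: float) -> float:
    """Smallest exponent t with N^machine_exp * N^round_exp * N^t >= N^c."""
    return max(0.0, c - machine_exp - round_exp)

# Example: a problem with sequential cost N^3, a full N machines
# (machine_exp = 1), and N^0.1 rounds: each machine must still perform
# about N^1.9 local work per round under this counting argument.
t = min_local_exponent(c=3.0, machine_exp=1.0, round_exp=0.1)
print(t)  # 1.9
```

Even this weaker counting bound shows that with at most N machines and a sub-polynomial number of rounds, per-machine local work per round cannot drop much below N^(c-1); the paper's result tightens the picture once local memory is also constrained.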
- Author Andrzej Lingas proves a trade-off in the MPC model for problems with super-linear sequential complexity: if local memory is limited, the exponent of local computation per machine per round must exceed the exponent of the problem's sequential complexity.
- The result applies when the number of machines is at most N (the input size) and local memory is not relatively large, ruling out ultra-fast (N^(o(1))-round) solutions in many realistic settings.
- The paper spans 8 pages and sits at the intersection of distributed computing, computational complexity, and data structures (ACM class F.2.2).
Why It Matters
The result limits how fast parallel algorithms can be, guiding engineers toward realistic round-versus-local-work trade-offs for big data clusters.