Research & Papers

GoldbachGPU: An Open Source GPU-Accelerated Framework for Verification of Goldbach's Conjecture

Open-source framework verifies Goldbach's conjecture up to 10^12 on a single RTX 3070, breaking previous memory limits.

Deep Dive

Researcher Isaac Llorente-Saguer has published GoldbachGPU v1.1.0, an open-source framework that dramatically advances the computational verification of Goldbach's conjecture—the famous, unproven hypothesis that every even integer greater than 2 can be expressed as the sum of two primes. The key breakthrough is architectural: prior GPU-based attempts hit a hard memory wall near 10^11 due to monolithic prime-table allocation. GoldbachGPU's novel segmented double-sieve design and dense bit-packed prime representation achieve a 16x memory footprint reduction, fundamentally removing the VRAM ceiling that limited previous work.
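The paper's exact kernel layout isn't public in this summary, but the combination of segmented sieving and bit-packing can be sketched in plain Python. The 16x figure is consistent with storing one bit per odd candidate instead of one byte per integer (2x from skipping evens, 8x from packing); the segment bounds and helper names below are illustrative, not the framework's actual API.

```python
import math

def base_primes(limit):
    """Plain sieve up to sqrt(N); these small primes seed every segment."""
    flags = bytearray([1]) * (limit + 1)
    flags[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(limit) + 1):
        if flags[p]:
            flags[p * p :: p] = b"\x00" * len(flags[p * p :: p])
    return [i for i, f in enumerate(flags) if f]

def sieve_segment(lo, hi, primes):
    """Bit-packed segment sieve: bit k means lo + 2*k is prime.
    Odd candidates only, 1 bit each -- ~16x smaller than a byte-per-number
    table, which is the kind of reduction the paper reports."""
    assert lo % 2 == 1 and lo > 2
    n = (hi - lo) // 2 + 1                    # odd candidates in [lo, hi]
    bits = bytearray(b"\xff") * ((n + 7) // 8)
    for p in primes[1:]:                      # skip 2; evens aren't stored
        # first odd multiple of p that is >= lo and >= p*p
        start = max(p * p, ((lo + p - 1) // p) * p)
        if start % 2 == 0:
            start += p
        for m in range(start, hi + 1, 2 * p):
            k = (m - lo) // 2
            bits[k >> 3] &= ~(1 << (k & 7)) & 0xFF
    return bits

def is_prime_in_segment(x, lo, bits):
    """Look up an odd x inside the segment's bit array."""
    k = (x - lo) // 2
    return bool((bits[k >> 3] >> (k & 7)) & 1)
```

Because each segment is a fixed-size bit array that is discarded after use, memory stays constant no matter how far the verification range extends, which is what removes the VRAM ceiling.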

The framework's inverted verification loop combines a GPU fast path with a multi-phase primality oracle, enabling exhaustive verification up to 10^12 on a single consumer-grade NVIDIA RTX 3070 with just 8GB of VRAM, using only 14MB per segment: O(N) wall-clock time with O(1) memory. A rigorous CPU fallback guarantees mathematical completeness, though it was never invoked in practice. The architecture also demonstrates clean multi-GPU scaling, tested on data-center hardware with 8 x H100 GPUs. For theoretical exploration, an arbitrary-precision checker built on GMP and OpenMP extends single-number verification to the astronomical scale of 10^10000 via a synchronized batch-search strategy. All code is publicly available and reproducible, offering a new benchmark for high-performance computational number theory.
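The fast-path-plus-oracle structure can be illustrated with a small CPU sketch: for each even n, try a short table of small primes p first (the cheap path), and only if that fails, fall back to a slower exhaustive scan, which stands in for the paper's rigorous CPU fallback. The function names and witness set are assumptions for illustration; the deterministic Miller-Rabin bases used here are valid for all n below roughly 3.3 x 10^24.

```python
def is_prime(n):
    """Deterministic Miller-Rabin for n < ~3.3e24 using a fixed witness set;
    a stand-in for the paper's multi-phase primality oracle."""
    if n < 2:
        return False
    witnesses = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in witnesses:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in witnesses:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def goldbach_witness(n, small_primes):
    """Return a prime p with n - p also prime. Fast path: a tiny prime
    table (the GPU-resident case); fallback: scan upward (the CPU case)."""
    assert n > 2 and n % 2 == 0
    for p in small_primes:               # fast path
        if is_prime(n - p):
            return p
    p = small_primes[-1]                 # rigorous fallback, rarely reached
    while True:
        p += 2
        if is_prime(p) and is_prime(n - p):
            return p
```

In practice almost every even n is decomposed by a very small p, which is why the paper's CPU fallback was never invoked; the fallback exists to make the verification exhaustive rather than heuristic.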

Key Points
  • Breaks the 10^11 memory barrier with a segmented double-sieve design and dense bit-packing for a 16x memory reduction.
  • Verifies Goldbach's conjecture up to 10^12 on a single RTX 3070 (8GB VRAM) using only 14MB per segment, with no counterexamples found.
  • Scales cleanly to data-center hardware (8 x H100 GPUs) and includes an arbitrary-precision checker for numbers up to 10^10000.
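At the 10^10000 scale, exhaustive sieving is impossible, so the checker's job changes: for one huge even n, search batches of small candidates p and probabilistically test n - p. Python's built-in big integers can sketch this; in the actual framework GMP's arbitrary-precision arithmetic fills this role and each batch is spread across OpenMP threads, so the serial inner loop and the function names below are assumptions for illustration.

```python
import random

def probably_prime(n, rounds=25):
    """Miller-Rabin with random bases -- the role GMP's probabilistic
    primality test plays in the paper's arbitrary-precision checker."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def find_goldbach_pair(n, batch=64):
    """Batch-search for a huge even n: test candidates p = 3, 5, ... in
    fixed-size batches; the paper distributes each batch across threads,
    synchronizing once any thread finds a witness (serial here)."""
    p = 3
    while True:
        for q in range(p, p + 2 * batch, 2):
            if probably_prime(q) and probably_prime(n - q):
                return q, n - q
        p += 2 * batch
```

Since prime gaps near n grow only logarithmically, the first witness is expected within a few hundred candidates even for numbers with thousands of digits, which keeps the batch search tractable.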

Why It Matters

Demonstrates how algorithmic innovation can overcome hardware limitations, providing a new tool for mathematicians and a template for GPU-accelerated scientific computing.