Reduced-Mass Orbital AI Inference via Integrated Solar, Compute, and Radiator Panels
A 150-ton satellite could run over 7,900 simultaneous LLM inference sessions from orbit using integrated solar, compute, and radiator panels.
A team of researchers has published a groundbreaking concept for a space-based AI supercomputer. The paper, titled 'Reduced-Mass Orbital AI Inference via Integrated Solar, Compute, and Radiator Panels,' proposes a radical new architecture where solar cells, compute hardware, and cooling radiators are integrated into modular panels. This co-location dramatically reduces mass, achieving a specific power of about 500 W/kg—over five times better than conventional designs. The system uses large vapor chamber radiators to keep chip junction temperatures near 40°C, boosting efficiency and reliability. The core innovation is scalability: a 150-ton computational satellite, composed of a 20 m × 2200 m grid of 16,000 panels, could be launched in a single SpaceX Starship.
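The stated geometry can be sanity-checked against the 16 MW figure with simple arithmetic. The sketch below uses the article's panel grid dimensions and panel count; the solar constant is a standard value, and the end-to-end conversion efficiency is an assumption for illustration, not a number from the paper.

```python
# Back-of-envelope check of the satellite's power from its stated geometry.
# The grid dimensions (20 m x 2200 m) and panel count (16,000) come from the
# article; the ~27% end-to-end conversion efficiency is an assumed value.
SOLAR_CONSTANT_W_M2 = 1361   # solar irradiance at ~1 AU
ASSUMED_EFFICIENCY = 0.27    # hypothetical; the paper's value may differ

width_m, length_m = 20, 2200
area_m2 = width_m * length_m              # 44,000 m^2 of integrated panel area
electrical_w = area_m2 * SOLAR_CONSTANT_W_M2 * ASSUMED_EFFICIENCY

panels = 16_000
area_per_panel_m2 = area_m2 / panels      # ~2.75 m^2 per modular panel

print(f"Panel area: {area_m2:,} m^2 ({area_per_panel_m2:.2f} m^2/panel)")
print(f"Estimated electrical output: {electrical_w / 1e6:.1f} MW")
```

With these assumptions the estimate lands close to the 16 MW the article cites, which suggests that figure refers to total generated power over the full panel grid.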
This orbital compute cluster is designed for massive-scale AI inference. The authors model a satellite built from 512-panel subarrays, each capable of running a large language model (LLM) with a 500,000-token context window and 128 attention blocks. Each subarray could process inference at 553 tokens per second per session across 256 simultaneous sessions. A full 16 MW satellite could support 31 such subarrays, enabling over 7,900 concurrent LLM inferences. The concept leverages the vacuum and cold of space for superior cooling and unlimited solar power, presenting a path to computational scale unconstrained by terrestrial limits of energy, real estate, or thermal management.
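The concurrency and throughput claims follow directly from the per-subarray figures. This short sketch multiplies out the numbers given in the article (553 tokens/s per session, 256 sessions per subarray, 31 subarrays per satellite) to recover the headline session count and the implied aggregate token rate:

```python
# Aggregate inference capacity implied by the article's per-subarray figures.
tokens_per_sec_per_session = 553   # per-session decode rate (from the article)
sessions_per_subarray = 256        # simultaneous sessions per 512-panel subarray
subarrays = 31                     # subarrays on a full 16 MW satellite

concurrent_sessions = sessions_per_subarray * subarrays
subarray_throughput = tokens_per_sec_per_session * sessions_per_subarray
satellite_throughput = subarray_throughput * subarrays

print(f"Concurrent sessions: {concurrent_sessions:,}")                  # 7,936
print(f"Per-subarray throughput: {subarray_throughput:,} tok/s")        # 141,568
print(f"Satellite throughput: {satellite_throughput / 1e6:.2f} M tok/s")  # 4.39
```

The 256 × 31 = 7,936 sessions is the source of the "over 7,900 concurrent LLM inferences" claim.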
- Architecture integrates solar, compute, and cooling into panels for a 5x mass efficiency gain (~500 W/kg vs. <100 W/kg).
- A single 150-ton, 16 MW satellite could fit in a Starship and run an LLM with a 500k-token context window.
- Designed for massive parallelism, supporting over 7,900 simultaneous inference sessions from orbit.
Why It Matters
The paper outlines a viable path to petascale AI compute in space, bypassing Earth's energy and thermal constraints for the next generation of models.