Media & Culture

A distributed compute network just started a new workload

A network of bare metal servers is using revenue from one computational task to subsidize AI training.

Deep Dive

The distributed compute network Qubic has launched a novel operational model, running two distinct computational workloads in parallel on the same physical infrastructure. One set of specialized hardware handles a revenue-generating computational task, while the network's CPUs and GPUs simultaneously train neural networks. The core idea is to use the income from the first workload to subsidize the costs of the AI training compute, creating a self-funding mechanism for the infrastructure. This live test challenges the traditional economics of AI compute by exploring whether one task can effectively bankroll another.
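The self-funding mechanism described above can be sketched as a simple back-of-envelope calculation. This is a minimal illustration only: the dollar figures and cost categories below are hypothetical assumptions for demonstration, not Qubic's actual economics.

```python
# Hypothetical sketch of the self-funding mechanism: surplus from the
# revenue-generating workload offsets the cost of AI training on the
# same hardware. All figures are illustrative assumptions.

def training_subsidy(task_revenue_usd: float,
                     power_cost_usd: float,
                     other_opex_usd: float) -> float:
    """Surplus left over (per day) after the revenue-generating task
    covers the shared infrastructure costs; this surplus is what can
    subsidize the concurrent AI training workload."""
    return task_revenue_usd - power_cost_usd - other_opex_usd

# Assumed example: $10,000/day revenue, $6,000/day power, $2,500/day
# other operating costs -> $1,500/day available to fund training.
surplus = training_subsidy(10_000.0, 6_000.0, 2_500.0)
print(surplus)  # 1500.0
```

If the surplus stays positive, the training compute is effectively free at the margin; if it goes negative, the model reverts to the traditional economics the experiment is trying to escape.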

According to a recent audit by security firm CertiK, the Qubic mainnet is processing a verified 15.52 million operations per second, a throughput that surpasses the transaction speed of the Visa network. The network operates without virtualization, running software directly on bare metal servers for efficiency. All operations and financial flows are publicly verifiable, providing transparency for this real-world experiment. The initiative raises a critical question for the AI industry: can a hybrid, subsidized model create scalable and sustainable compute infrastructure outside of centralized cloud providers?
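To put the audited figure in rough perspective, the arithmetic behind the Visa comparison looks like this. Note the caveat: "operations" on a compute network and card transactions are not like-for-like units, and the Visa capacity figure below is an assumed, commonly cited number, not one taken from the audit.

```python
# Rough, illustrative comparison of the audited Qubic throughput with
# Visa's commonly cited stated capacity (~65,000 messages/second is an
# assumption here; Visa's sustained real-world load is far lower).

QUBIC_OPS_PER_SEC = 15_520_000  # audited figure cited in the article
VISA_TPS_ASSUMED = 65_000       # assumption: widely cited Visa capacity

ratio = QUBIC_OPS_PER_SEC / VISA_TPS_ASSUMED
print(f"~{ratio:.0f}x the assumed Visa capacity")  # ~239x
```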

Key Points
  • Qubic's network runs a revenue-generating computational task to fund concurrent AI neural network training on the same hardware.
  • Audited by CertiK, the network's mainnet processes 15.52 million operations per second, faster than the Visa payment network.
  • The model uses bare metal servers (no virtualization) and makes all operations publicly verifiable, testing a new economic model for AI compute.

Why It Matters

The experiment tests a novel economic model in which AI compute could become self-sustaining, potentially reducing reliance on venture capital or cloud credits.