Image & Video

Nucleus-Image Released

The first fully open-source MoE diffusion model matches top-tier quality while activating only 2B parameters per pass.

Deep Dive

NucleusAI has launched Nucleus-Image, a text-to-image generation model built on a sparse mixture-of-experts (MoE) diffusion transformer architecture. The model scales to 17 billion total parameters but activates only about 2 billion during any single forward pass. This selective activation establishes what the developers call a "new Pareto frontier": top-tier image quality at significantly lower computational cost than dense models of comparable capability.
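The efficiency gain comes from top-k routing: a small router scores every expert per token, but only the selected experts actually run, so active parameters stay a fixed fraction of the total. Below is a minimal NumPy sketch of this general idea; the expert count, top-k value, and layer sizes are toy values chosen for illustration, not Nucleus-Image's actual configuration or code.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # hypothetical expert count (toy scale)
TOP_K = 1         # experts activated per token
D_MODEL = 16      # hidden size
D_FF = 64         # expert FFN width

# Each expert is a tiny two-layer MLP. Total parameters grow with
# NUM_EXPERTS, but per-token compute depends only on TOP_K.
experts = [
    (rng.standard_normal((D_MODEL, D_FF)) * 0.02,
     rng.standard_normal((D_FF, D_MODEL)) * 0.02)
    for _ in range(NUM_EXPERTS)
]
router_w = rng.standard_normal((D_MODEL, NUM_EXPERTS)) * 0.02

def moe_layer(x):
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ router_w                          # (tokens, experts)
    topk = np.argsort(logits, axis=-1)[:, -TOP_K:]  # indices of chosen experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        # Softmax over the selected experts' logits only.
        sel = logits[t, topk[t]]
        w = np.exp(sel - sel.max())
        w /= w.sum()
        for weight, e in zip(w, topk[t]):
            w1, w2 = experts[e]
            out[t] += weight * (np.maximum(x[t] @ w1, 0.0) @ w2)
    return out

tokens = rng.standard_normal((4, D_MODEL))
y = moe_layer(tokens)

total_params = NUM_EXPERTS * 2 * D_MODEL * D_FF
active_params = TOP_K * 2 * D_MODEL * D_FF
print(f"active fraction: {active_params / total_params:.3f}")  # prints 0.125
```

With 1 of 8 experts active, only an eighth of the expert parameters participate in each pass, which is the same lever Nucleus-Image pulls at much larger scale (~2B active out of 17B total).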

In benchmark tests on GenEval, DPG-Bench, and OneIG-Bench, Nucleus-Image matches or exceeds the performance of leading closed and open models, including Qwen-Image, GPT Image 1, Seedream 3.0, and Imagen4. Impressively, these results are from the base model with no post-training optimization like DPO, reinforcement learning, or human preference tuning. Most significantly, NucleusAI is releasing the complete package: full model weights, training code, and the dataset, making Nucleus-Image the first fully open-source MoE diffusion model at this elite quality tier. This move provides researchers and developers unprecedented access to study and build upon a state-of-the-art, efficient architecture.

Key Points
  • Uses a 17B parameter sparse MoE architecture but activates only ~2B per pass for extreme efficiency.
  • Benchmark performance matches or beats top models like Qwen-Image and GPT Image 1 without any post-training tuning.
  • Fully open-source release includes model weights, training code, and dataset—a first for a high-quality MoE diffusion model.

Why It Matters

It provides an open, efficient blueprint for high-quality image generation, lowering the barrier for research and commercial deployment.