Developer Tools

NVIDIA Nemotron 3 Nano 30B MoE model is now available in Amazon SageMaker JumpStart

A new open MoE model is crushing coding and reasoning benchmarks...

Deep Dive

NVIDIA's Nemotron 3 Nano 30B MoE model is now generally available in Amazon SageMaker JumpStart. This open-weights model pairs a hybrid Transformer-Mamba architecture with a mixture-of-experts design, activating only 3B parameters per token, and supports a 1M-token context window. Among models in its size class, it posts leading scores on benchmarks such as SWE-Bench Verified, GPQA Diamond, and AIME 2025, excelling at coding and scientific reasoning. Developers can deploy it on managed SageMaker endpoints without provisioning or managing infrastructure, then call those endpoints from their generative AI applications.
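As a rough sketch, deploying a JumpStart model and invoking the resulting endpoint with the SageMaker Python SDK looks something like the following. The `model_id` string below is a placeholder assumption, not the official identifier for Nemotron 3 Nano; look up the exact ID in the SageMaker JumpStart model catalog before running this, and note that deploying creates billable AWS resources.

```python
def build_payload(prompt: str, max_new_tokens: int = 512) -> dict:
    """Build a typical text-generation request body for a JumpStart LLM endpoint."""
    return {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": 0.2,
        },
    }


def deploy_and_query(prompt: str):
    """Deploy the model to a managed SageMaker endpoint and run one inference.

    Requires AWS credentials, appropriate IAM permissions, and the
    `sagemaker` package; creates (and then tears down) a billable endpoint.
    """
    from sagemaker.jumpstart.model import JumpStartModel

    # Placeholder model ID -- replace with the actual JumpStart ID for
    # Nemotron 3 Nano from the SageMaker JumpStart catalog.
    model = JumpStartModel(model_id="huggingface-llm-nemotron-3-nano-30b")
    predictor = model.deploy(initial_instance_count=1)
    try:
        return predictor.predict(build_payload(prompt))
    finally:
        # Delete the endpoint so it does not keep accruing charges.
        predictor.delete_endpoint()
```

The request/response payload shape (an `inputs` string plus a `parameters` dict) follows the common convention for JumpStart-hosted text-generation models; check the model card for the exact schema this model expects.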

Why It Matters

Developers now have a highly efficient, open model for agentic tasks that outperforms rivals under 30B parameters on key technical benchmarks.