MotionBricks: Scalable Real-Time Motions with Modular Latent Generative Model and Smart Primitives
A single model handles 350,000 motion clips in real time, with no animation expertise needed.
MotionBricks, developed by researchers from NVIDIA, the University of Toronto, and other institutions, tackles two core challenges in generative motion synthesis: real-time scalability and integration. Its modular latent generative backbone covers over 350,000 motion clips in a single model while sustaining 15,000 FPS throughput at 2 ms latency, well beyond traditional real-time methods, which degrade under such throughput and latency constraints. Smart primitives provide a unified interface for authoring navigation and object interactions, letting users assemble animations like building blocks without animation expertise.
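The announcement itself includes no code, but the "building blocks" idea is easy to picture. Below is a minimal, hypothetical Python sketch of what primitive-based authoring could look like; the `Primitive` and `sequence` names and the task labels are illustrative assumptions, not the framework's actual interface.

```python
# Hypothetical sketch of "smart primitive" authoring. All names here
# (Primitive, sequence, the task labels) are illustrative assumptions,
# not MotionBricks' published API.
from dataclasses import dataclass


@dataclass
class Primitive:
    """One reusable motion building block: a task label, a 3D target, a style."""
    task: str
    target: tuple[float, float, float]
    style: str = "neutral"


def sequence(*steps: Primitive) -> list[Primitive]:
    """Chain primitives into an ordered plan, like snapping bricks together."""
    return list(steps)


# A non-expert author composes a scene declaratively:
plan = sequence(
    Primitive("navigate_to", target=(4.0, 0.0, 2.0), style="casual_walk"),
    Primitive("pick_up", target=(4.2, 0.9, 2.0)),
    Primitive("navigate_to", target=(0.0, 0.0, 0.0), style="hurried"),
)

for step in plan:
    print(f"{step.task} -> {step.target} [{step.style}]")
```

The appeal of this style of interface is that the generative backbone, not the author, is responsible for producing plausible motion between and during the steps.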
Quantitative results show state-of-the-art motion quality on both open-source and proprietary datasets. The framework's flexibility is demonstrated in a production-level animation demo covering diverse styles and interactions, and its generalization is shown by deployment on the Unitree G1 humanoid robot for real-time robotic control. Accepted at SIGGRAPH 2026, MotionBricks promises to democratize high-quality animation and enable responsive, interactive AI in gaming, simulation, and robotics.
- Single model handles 350,000+ motion clips at 15,000 FPS with 2 ms latency (see the back-of-envelope sketch after this list).
- Smart primitives enable plug-and-play animation authoring without expert knowledge.
- Deployed on Unitree G1 humanoid robot for real-time robotic control.
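As a back-of-envelope reading of those numbers (my arithmetic, not a detail from the announcement): 2 ms per step caps a single character stream at 500 FPS, so an aggregate of 15,000 FPS suggests batched inference over roughly 30 characters at once.

```python
# Back-of-envelope check on the reported numbers; the batching interpretation
# is an assumption, not something the announcement states.
latency_s = 0.002                  # 2 ms per inference step
per_stream_fps = 1 / latency_s     # 500 FPS for a single character
aggregate_fps = 15_000             # reported throughput
implied_batch = aggregate_fps / per_stream_fps
print(f"per-stream: {per_stream_fps:.0f} FPS, implied batch ~ {implied_batch:.0f} characters")
```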
Why It Matters
MotionBricks makes high-quality, real-time animation accessible for gaming, simulation, and robotics without specialized skills.