Viral Wire

Nvidia CEO Jensen Huang: Agentic AI Requires 10x More Compute Power Than Generative AI

Agentic AI demands ten times the compute of generative models, says Nvidia CEO.

Deep Dive

On May 5, 2026, Nvidia CEO Jensen Huang delivered a striking message at ServiceNow's Knowledge 2026 conference: agentic AI systems require a tenfold increase in computational power compared to generative AI models. Huang described this as a "1,000% increase" over just the past two years, underscoring the immense hardware demands of AI agents that can plan, use tools, and operate autonomously. He contrasted these with traditional generative AI, which focuses on producing text, images, or code, and noted that the shift to agentic workloads will reshape data center architectures. The statement came as Nvidia continues to dominate the AI chip market, with its Hopper and upcoming Blackwell architectures designed to handle both training and inference at extreme scales.

Huang's remarks carry significant weight for enterprise technology leaders. Agentic AI—often built on large language models (LLMs) with added reasoning and tool-use layers—introduces additional inference steps per task, driving up compute consumption. For example, an agent that must plan a multi-step workflow might consume 10x more tokens than a simple Q&A model. This trend could accelerate demand for Nvidia's GPUs, especially its Grace Hopper superchips designed for inference-heavy workloads. It also suggests that companies investing in autonomous AI agents must rethink their cloud budgets and on-premise infrastructure. As Huang put it, "The era of conversational AI was the appetizer; agentic AI is the main course."
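The token math behind that claim can be sketched with a toy model. The step counts and token sizes below are hypothetical illustrations, not figures from Nvidia or ServiceNow; the key mechanism is that an agent re-reads its accumulated context on every planning or tool-use step, so total processed tokens grow faster than linearly with the number of steps:

```python
def processed_tokens(steps: int, prompt: int = 500, output_per_step: int = 300) -> int:
    """Toy estimate of input+output tokens across an agent's inference calls.

    Each step the model re-reads the full accumulated context and appends
    its output to that context for the next step.
    """
    total = 0
    context = prompt
    for _ in range(steps):
        total += context + output_per_step  # tokens read + tokens generated this step
        context += output_per_step          # step output joins the context window
    return total

single_shot = processed_tokens(steps=1)  # one Q&A call: 800 tokens
agent = processed_tokens(steps=5)        # plan, tool calls, reflection: 7,000 tokens
print(round(agent / single_shot, 2))     # prints 8.75
```

With these illustrative numbers, even a five-step agent processes close to nine times the tokens of a single-shot answer, roughly the order-of-magnitude jump Huang describes; longer workflows push the multiple higher still.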

Key Points
  • Jensen Huang said agentic AI requires a 1,000% increase in compute over generative AI.
  • The statement was made at ServiceNow's Knowledge 2026 conference on May 5, 2026.
  • Agentic AI refers to AI agents that can plan, use tools, and work autonomously, needing more compute per task.

Why It Matters

Enterprises planning agentic AI deployments must prepare for massive compute infrastructure investments.