Enterprise & Industry

Nvidia's 'ChatGPT moment' for self-driving cars, and other key AI announcements at GTC 2026

Nvidia CEO Jensen Huang declares the 'ChatGPT moment for self-driving cars' has arrived, backed by a major Uber robotaxi deal.

Deep Dive

Nvidia's GTC 2026 conference was dominated by announcements around 'physical AI': AI systems embedded in robots and vehicles. CEO Jensen Huang declared that the 'ChatGPT moment for self-driving cars has arrived,' backed by a major expansion of Nvidia's partnership with Uber. The companies plan to launch a fleet of autonomous vehicles powered by Nvidia's Drive AV software, starting in Los Angeles and San Francisco in 2027 and scaling to 28 cities across four continents by 2028. The initiative also drew automakers including BYD, Hyundai, and Nissan, all of which will use Nvidia's Drive Hyperion platform and Alpamayo models to train Level 4 autonomous systems.

To power this physical AI future, Nvidia unveiled three new foundation models. For autonomous vehicles, the Alpamayo 1.5 model processes driving video and natural language prompts to generate safer driving trajectories. For robotics, Isaac GR00T N1.7 is an open, commercially viable vision-language-action model designed to scale humanoid robot deployment. Finally, Cosmos 3 generates synthetic worlds so these AI systems can be trained in complex, simulated environments before they reach the real world. The company demonstrated the potential with a robotic version of Disney's Olaf, hinting at future interactive characters in theme parks.

Key Points
  • Uber partnership targets robotaxi fleet in 28 cities by 2028, starting in LA and SF in 2027.
  • New Alpamayo 1.5 model uses natural language prompts to improve self-driving car navigation and safety.
  • Isaac GR00T N1.7 model aims to make humanoid robots commercially viable for real-world deployment.

Why It Matters

This accelerates the timeline for commercial robotaxis and advanced robotics, moving AI from software into tangible, autonomous machines.