Developer Tools

What's Missing in the 'Agentic' Story

Your AI agent might not be working for you—it's working for its creator.

Deep Dive

Mark Nottingham challenges the dominant 'agentic' AI narrative by highlighting a critical blind spot: trust. For most of computing history, he argues, users could assume a machine did only what it was asked to do: a local laptop ran the programs you installed, and tools like screwdrivers or watches had no agency at all. Modern Internet-connected devices break this assumption. Every smartphone, smart TV, or AI agent embeds the interests of its creators (chip makers, app developers, cloud providers), and those interests may not align with the user's. Smart TVs spy on their viewers, for example, and Meta decrypted private traffic from the phones of its 'research' program users.

Nottingham fears that the 'agentic' story, in which AI acts autonomously on behalf of users, could widen these trust gaps. Without explicit safeguards, agentic systems might prioritize corporate goals (e.g., data collection, ad targeting) over user intent. He argues that trust in networked devices is already fragile and often misplaced, and urges the tech community to confront this before agentic AI becomes ubiquitous. The article is a sobering reality check for professionals building or deploying autonomous AI systems.

Key Points
  • Local computing historically assumed machines acted only as instructed, but Internet-connected devices embed creator interests.
  • Modern examples include smart TVs spying on users and Meta decrypting private traffic flowing from users' phones to competing services.
  • Agentic AI risks amplifying hidden misalignments between user intent and corporate goals without transparency safeguards.

Why It Matters

Agentic AI could automate hidden conflicts of interest, eroding user trust in autonomous systems.