The Race Is on to Keep AI Agents From Running Wild With Your Credit Cards
New standards aim to prevent AI agents from going rogue with your credit cards.
The FIDO Alliance, with initial contributions from Google and Mastercard, announced on Tuesday the formation of two working groups to develop industry standards for securing payments and transactions carried out by AI agents. As agentic AI becomes more mainstream, the risk of agents being hijacked or acting on rogue instructions increases. The goal is to create a protective baseline that includes cryptographic tools to verify that an agent is legitimately executing an authenticated user's intent, as well as privacy-preserving frameworks for transparency and accountability.
Google contributed its Agent Payments Protocol (AP2), which provides cryptographic proof that a user authorized a specific agent-initiated transaction, while maintaining privacy through selective disclosure. Mastercard's Verifiable Intent framework, co-developed with Google, offers a secure mechanism for users to authorize and control agent actions. Both companies emphasized the need for rapid development, given the fast pace of AI adoption. The standards aim to prevent agent hijacking and provide recourse in disputes, building trust in agentic AI across industries.
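The core idea behind these protocols can be illustrated with a minimal sketch: the user's device signs the exact transaction terms (a "mandate"), and anyone downstream can check that what the agent actually submits matches that signed intent. The field names and flow below are illustrative assumptions, not taken from the AP2 or Verifiable Intent specifications, and HMAC with a shared secret stands in for the asymmetric signatures a real payment protocol would use, so the example runs on the Python standard library alone.

```python
import hashlib
import hmac
import json

# Illustrative only: real protocols use asymmetric keys held on the
# user's device; a shared secret keeps this sketch self-contained.
USER_KEY = b"user-device-secret"

def sign_mandate(mandate: dict, key: bytes) -> str:
    """User's device signs the exact transaction the agent may execute."""
    payload = json.dumps(mandate, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_mandate(mandate: dict, signature: str, key: bytes) -> bool:
    """Verifier checks the agent's submitted transaction matches the
    mandate the user actually signed."""
    expected = sign_mandate(mandate, key)
    return hmac.compare_digest(expected, signature)

# User authorizes a bounded purchase; the agent carries the signature along.
mandate = {"merchant": "example-store", "max_amount": 50.00, "currency": "USD"}
sig = sign_mandate(mandate, USER_KEY)

print(verify_mandate(mandate, sig, USER_KEY))   # matches user intent: True

# A hijacked agent that inflates the amount fails verification.
tampered = {**mandate, "max_amount": 5000.00}
print(verify_mandate(tampered, sig, USER_KEY))  # rogue transaction: False
```

Because the signature covers the full transaction terms, any change an attacker makes after authorization, a different merchant, a larger amount, invalidates the proof, which is the property these standards aim to guarantee.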
- FIDO Alliance launches working groups with Google and Mastercard to create standards for AI agent payment security.
- Google's AP2 protocol provides cryptographic verification of user intent for agent transactions, with privacy-preserving selective disclosure.
- Mastercard's Verifiable Intent framework enables users to securely authorize and control agent actions, preventing rogue behavior.
Why It Matters
By tying every agent-initiated transaction back to verifiable, authenticated user intent, these standards aim to prevent AI agent fraud and build the trust needed for autonomous transactions to go mainstream.