Developer Tools

Can your governance keep pace with your AI ambitions? AI risk intelligence in the agentic era

AWS's new AI Risk Intelligence platform tackles non-deterministic agentic workflows where traditional security fails.

Deep Dive

The AWS Generative AI Innovation Center has launched AI Risk Intelligence (AIRI), a platform designed to automate governance for the unpredictable world of agentic AI. Unlike traditional DevOps pipelines with predictable inputs and outputs, agentic systems operate non-deterministically, selecting different tools and approaches on each run, which creates security and compliance gaps. AIRI addresses this by integrating security directly into agent operations, providing a single viewpoint that spans the entire agentic lifecycle, from design through post-production.

AIRI is built on AWS's Responsible AI Best Practices Framework and is designed to be framework-agnostic. It automates the assessment of security, operations, and governance controls, transforming static guidelines such as the NIST AI Risk Management Framework and the OWASP Top 10 for LLMs into continuous, embedded evaluations. The platform specifically tackles systemic risks such as 'Tool Misuse and Exploitation,' where an authorized agent is tricked into performing malicious actions within its granted permissions, a scenario that traditional data loss prevention tools often miss.
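To make the tool-misuse gap concrete, here is a minimal, hypothetical sketch (none of these names are AIRI's actual API): a policy check that evaluates not only *whether* an agent is permitted to call a tool, but *what* the call does with that permission. The agent IDs, tool names, and the internal-domain rule are all invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToolCall:
    agent_id: str
    tool: str
    args: dict

@dataclass
class PolicyDecision:
    allowed: bool
    reason: str

# Coarse per-agent tool allow-list: the layer traditional access
# control already covers.
PERMISSIONS = {
    "billing-agent": {"query_invoices", "send_email"},
}

def violates_content_policy(call: ToolCall) -> Optional[str]:
    """Content-level rules: the gap coarse permissions miss. Even an
    allowed tool call can be misuse, e.g. emailing data externally."""
    if call.tool == "send_email":
        recipient = call.args.get("to", "")
        if not recipient.endswith("@example.com"):  # assumed internal domain
            return f"external recipient {recipient!r}"
    return None

def evaluate(call: ToolCall) -> PolicyDecision:
    # Step 1: is the tool granted at all?
    if call.tool not in PERMISSIONS.get(call.agent_id, set()):
        return PolicyDecision(False, f"tool {call.tool!r} not granted")
    # Step 2: is this *use* of a granted tool within policy?
    reason = violates_content_policy(call)
    if reason:
        return PolicyDecision(False, f"tool misuse: {reason}")
    return PolicyDecision(True, "ok")

# The agent is authorized to send email, yet this call is still blocked,
# because the check inspects the call's content, not just its permission.
decision = evaluate(ToolCall("billing-agent", "send_email",
                             {"to": "attacker@evil.test", "body": "invoices"}))
print(decision.allowed, decision.reason)
```

The point of the second step is the one the article makes: an agent operating entirely inside its granted permissions can still be steered into harmful actions, so the evaluation has to run continuously against each action rather than once at provisioning time.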

The solution aims to close visibility gaps for business stakeholders by making observability metrics interpretable without deep technical expertise. By treating security, operations, and governance as interdependent dimensions, AIRI helps organizations manage the cascading risks inherent in multi-agent coordination, where a vulnerability in one agent can trigger a chain reaction across the system. This shift is critical for deploying trusted AI systems at scale in the new agentic era.

Key Points
  • AIRI automates governance for non-deterministic agentic AI, where asking the same question twice yields different answers and workflows.
  • It operationalizes frameworks like NIST AI RMF and OWASP, turning static documents into continuous, embedded risk assessments.
  • The platform tackles systemic risks like 'Tool Misuse and Exploitation,' where agents abuse legitimate permissions, a blind spot for traditional security tools.

Why It Matters

Enables enterprises to safely scale autonomous AI agents by automating governance for risks that static IT frameworks cannot address.