Enterprise & Industry

Inside RSA 2026: Security Leaders Grapple With AI’s Growing Role and Risks

With more than 30,000 attendees, the conference highlights AI's dual role as a security tool and a major governance challenge.

Deep Dive

The RSA Conference 2026, drawing over 30,000 cybersecurity professionals, has placed the double-edged nature of artificial intelligence at the center of industry discourse. While AI-powered tools and automation are being aggressively marketed as solutions to chronic problems like SOC alert fatigue and resource constraints, a significant portion of the conversation has pivoted to the inherent risks and governance challenges. The event serves as a clear indicator that AI adoption in security is accelerating faster than the frameworks needed to manage it, creating a pressing need for control mechanisms.

A dominant theme is the operational use of AI agents within Security Operations Centers. Experts such as TechRepublic's Ken Underhill note that vendors are promoting these agents to filter overwhelming alert volumes and improve analyst efficiency. However, the critical barrier remains trust. In response, emerging technical approaches involve creating systems of checks and balances, where AI tools are designed to validate each other's actions, a concept described as 'agents checking agents.' This technical safeguard aims to build reliability, but it also underscores the complexity being introduced.
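
To make the idea concrete, the pattern might look something like the minimal Python sketch below, in which a second agent must independently confirm the first agent's proposed response before anything executes. All class and function names here (Alert, triage_agent, reviewer_agent) are illustrative assumptions, not any vendor's actual API.

    # Minimal sketch of the 'agents checking agents' pattern described above.
    # All names are hypothetical illustrations: one agent proposes a response
    # action, a second agent independently validates it, and only mutually
    # agreed actions are executed.

    from dataclasses import dataclass

    @dataclass
    class Alert:
        source_ip: str
        severity: str       # e.g. "low", "medium", "high"
        description: str

    @dataclass
    class ProposedAction:
        alert: Alert
        action: str         # e.g. "suppress", "escalate", "isolate_host"
        rationale: str

    def triage_agent(alert: Alert) -> ProposedAction:
        """First agent: proposes how to handle an incoming alert."""
        if alert.severity == "low":
            return ProposedAction(alert, "suppress", "Low severity, known-noisy source")
        return ProposedAction(alert, "escalate", "Severity warrants analyst review")

    def reviewer_agent(proposal: ProposedAction) -> bool:
        """Second agent: independently checks the first agent's proposal.
        Vetoes suppression of anything above low severity."""
        if proposal.action == "suppress" and proposal.alert.severity != "low":
            return False
        return True

    def handle(alert: Alert) -> str:
        proposal = triage_agent(alert)
        if reviewer_agent(proposal):
            return f"EXECUTE {proposal.action}: {proposal.rationale}"
        # Disagreement between the agents falls back to a human analyst.
        return f"HOLD for human review: agents disagreed on {proposal.action}"

    print(handle(Alert("10.0.0.5", "high", "Possible lateral movement")))

Falling back to a human when the agents disagree is one plausible design choice; it directly targets the trust barrier the vendors are trying to overcome.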

Parallel to operational discussions, AI governance has emerged as a top-tier cybersecurity priority. The central question is no longer just about capability, but about control: how much autonomy should AI have, and how is human oversight ('human-in-the-loop') maintained to prevent errors or misuse? This reflects a broader industry shift toward responsible adoption, where the speed and scale offered by AI must be tempered with accountability and explainability. The consensus from the conference is that the future of cybersecurity will be defined not just by AI's power, but by the effectiveness of the governance structures built around it.
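
One way the 'human-in-the-loop' requirement is often operationalized is a risk-based autonomy gate: the system acts on its own below a configured risk threshold and queues higher-risk actions for a human decision. The Python sketch below is a hedged illustration of that idea; the risk scores, threshold, and function names are assumptions for demonstration, not a standard or a product API.

    # Hedged sketch of a risk-based human-in-the-loop gate. The scores,
    # threshold, and names are illustrative assumptions only.

    from typing import Callable

    RISK_SCORES = {"suppress": 0.2, "escalate": 0.3, "isolate_host": 0.8}
    AUTONOMY_THRESHOLD = 0.5   # actions scoring above this require a human

    def execute_with_oversight(
        action: str,
        context: str,
        approver: Callable[[str, str], bool],
    ) -> str:
        """Run low-risk actions autonomously; route high-risk ones to a person."""
        score = RISK_SCORES.get(action, 1.0)   # unknown actions get maximum risk
        if score <= AUTONOMY_THRESHOLD:
            return f"auto-executed {action} (risk {score})"
        if approver(action, context):          # e.g. a ticket or chat approval
            return f"human-approved {action} (risk {score})"
        return f"blocked {action}: approval denied (risk {score})"

    # Demo: host isolation is high-risk, so the approver callback is consulted.
    print(execute_with_oversight("isolate_host", "workstation WS-042",
                                 approver=lambda a, c: True))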

Key Points
  • AI agents are a focal point for reducing SOC alert fatigue, but trust in automation remains a key adoption barrier.
  • Emerging technical solutions involve 'agents checking agents' to validate actions and build system reliability.
  • AI governance and maintaining human oversight are now top priorities, as adoption outpaces management frameworks.

Why It Matters

Organizations must now govern their own AI security tools with the same rigor they apply to defending against AI-driven threats, balancing automation's speed with human control.