Agent Skills for Large Language Models: Architecture, Acquisition, Security, and the Path Forward
A sweeping new survey exposes critical security flaws in the AI agent ecosystem.
A comprehensive new survey reveals that 26.1% of community-contributed AI agent skills contain vulnerabilities, exposing a major security risk as LLMs shift from monolithic models to modular, skill-equipped agents. The paper details the emerging 'agent skills' architecture, in which composable packages of code and instructions enable dynamic capability extension. It proposes a four-tier Skill Trust and Lifecycle Governance Framework to manage these risks and outlines seven key challenges for the future of agentic systems.
Why It Matters
The rapid move to modular, skill-equipped AI agents is introducing widespread security flaws that could compromise the automated systems built on them.