Chainguard is racing to fix trust in AI-built software - here's how
The security firm's new AI-driven pipeline continuously rebuilds software to eliminate known vulnerabilities, and now monitors more than twice as many packages.
At its Assemble 2026 event, security firm Chainguard announced a major expansion of its platform, targeting the inherent risks of AI-generated code. CEO Dan Lorenc framed the industry shift as a move from manual 'hand woodworking' to dangerous power tools, arguing that the majority of code will soon be written by AI. To secure this new paradigm, Chainguard is moving beyond open-source security to protect open-core software, AI agent skills, and GitHub Actions workflows.
The centerpiece is Chainguard Factory 2.0, an AI-powered, reconciling pipeline that pushes software toward a desired secure state—whether that's zero known CVEs or meeting performance constraints. The system uses multiple AI models (OpenAI, Claude, Gemini) within its proprietary Driftless agentic framework. This framework connects AI agents directly to the build factory, creating a self-healing loop where the system continuously solves problems until it meets security criteria. Early agents succeeded only 50-60% of the time, but their failures became training data to improve the models.
This agentic approach represents a shift from fragile, event-driven CI pipelines to a Kubernetes-style reconciler pattern, where AI agents constantly nudge reality toward a target description. The result is dramatic operational gains: Chainguard now monitors more than twice as many packages and has removed over 1.5 million vulnerabilities from customer production environments, a massive increase from 270,000 a year ago. The company is also offering new services built on its secure-by-design Chainguard OS, a Linux distribution bootstrapped from source, allowing customers to build custom distributions with zero known CVEs.
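The reconciler pattern described above can be sketched in a few lines: rather than a one-shot CI job that fails and stops, a loop repeatedly compares observed state to desired state and acts until they match. This is a minimal illustration of the general pattern, not Chainguard's actual implementation; the `observe`, `desired`, and `act` names are hypothetical stand-ins.

```python
# Sketch of a Kubernetes-style reconcile loop applied to build security:
# keep acting until the observed state matches the desired state.
def reconcile(observe, desired, act, max_attempts=20):
    """Repeatedly nudge reality toward `desired`; return True on convergence."""
    for _ in range(max_attempts):
        state = observe()
        if state == desired:
            return True          # converged: nothing left to fix
        act(state, desired)      # e.g. an agent patches one CVE and rebuilds
    return observe() == desired

# Toy usage: "state" is the set of open CVEs; the desired state is the empty set.
open_cves = {"CVE-2026-0001", "CVE-2026-0002"}
done = reconcile(
    observe=lambda: open_cves,
    desired=set(),
    act=lambda state, _: open_cves.discard(next(iter(state))),  # stand-in patch
)
print(done)
```

The key design difference from an event-driven pipeline is that a failed attempt does not terminate the process; the loop simply observes the still-divergent state on the next pass and tries again, which is what makes the system self-healing.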
- Factory 2.0's AI reconciler pipeline has removed over 1.5 million vulnerabilities from customer environments, up from 270,000 a year ago.
- Uses a multi-model AI approach (OpenAI, Claude, Gemini) within the Driftless agentic framework for continuous, self-healing security patching.
- Enables monitoring of twice as many software packages by automating the tracking of upstream releases and security fixes.
Why It Matters
As AI writes more code, automated, continuous security hardening becomes critical to prevent vulnerabilities at scale.