LDP: An Identity-Aware Protocol for Multi-Agent LLM Systems
New AI-native protocol cuts token use by 37% and improves attack detection to 96%.
Researcher Sunil Prakash has published a paper introducing the LLM Delegate Protocol (LDP), a communication standard designed specifically for complex multi-agent AI systems. The protocol addresses fundamental limitations in current standards such as A2A and MCP, which do not treat critical model properties (identity, reasoning profile, and cost) as first-class primitives. LDP introduces five core mechanisms: rich delegate identity cards, progressive payload modes with negotiation, governed sessions with persistent context, structured provenance tracking, and protocol-level trust domains for security.
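The paper's actual wire format is not reproduced here, but a delegate identity card can be pictured as a small structured record that makes identity, reasoning profile, and cost machine-readable. The field names and types below are illustrative assumptions, not LDP's real schema:

```python
from dataclasses import dataclass

# Hypothetical sketch of an LDP delegate identity card.
# All field names are illustrative; the actual LDP schema may differ.
@dataclass(frozen=True)
class DelegateIdentityCard:
    delegate_id: str            # stable identity of the model/agent
    model_family: str           # underlying base model
    reasoning_profile: str      # e.g. "fast-shallow" vs. "slow-deep"
    cost_per_1k_tokens: float   # advertised cost, a first-class primitive
    trust_domain: str           # protocol-level trust boundary
    capabilities: tuple = ()    # task types this delegate specializes in

card = DelegateIdentityCard(
    delegate_id="summarizer-01",
    model_family="llama3",
    reasoning_profile="fast-shallow",
    cost_per_1k_tokens=0.0004,
    trust_domain="internal",
    capabilities=("summarization", "classification"),
)
print(card.delegate_id, card.trust_domain)
```

Making these properties first-class, rather than burying them in free-text prompts, is what lets the runtime route, budget, and govern delegates programmatically.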
Implemented as a plugin for the JamJet agent runtime and evaluated against baselines using local Ollama models, LDP shows significant performance gains. Identity-aware routing cut latency by 12x on easy tasks by exploiting delegate specialization. Semantic frame payloads reduced token counts by 37% with no loss in quality, and governed sessions eliminated 39% of token overhead in extended 10-round conversations. Crucially, in simulated analyses the protocol demonstrated major architectural advantages, improving attack detection from 6% to 96% and failure-recovery completion from 35% to 100%.
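Identity-aware routing of the kind the evaluation describes can be sketched as a selection over identity cards: send easy tasks to the cheapest delegate that advertises the needed capability instead of always invoking a large generalist. The routing rule and card fields below are assumptions for illustration, not the paper's algorithm:

```python
# Hypothetical identity-aware router: pick the cheapest delegate whose
# advertised capabilities cover the task. Card fields are illustrative.
def route(task_type, cards):
    capable = [c for c in cards if task_type in c["capabilities"]]
    if not capable:
        raise ValueError(f"no delegate advertises capability {task_type!r}")
    return min(capable, key=lambda c: c["cost_per_1k_tokens"])

cards = [
    {"delegate_id": "big-generalist",
     "capabilities": {"summarization", "code", "analysis"},
     "cost_per_1k_tokens": 0.03},
    {"delegate_id": "small-summarizer",
     "capabilities": {"summarization"},
     "cost_per_1k_tokens": 0.0004},
]

# An easy summarization task goes to the cheap specialist, not the big model.
print(route("summarization", cards)["delegate_id"])  # small-summarizer
```

Because the cost and capability data live in the protocol rather than in prompts, this decision needs no model call at all, which is where latency reductions of this kind come from.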
The findings provide initial evidence that AI-native protocol design can enable more efficient and governable delegation between agents. The paper also surfaces a cautionary insight: noisy provenance data without verification can degrade synthesis quality, suggesting that confidence metadata must be implemented carefully. Beyond the protocol design and reference implementation, the work contributes a framework for evaluating how communication layers constrain the emergent capabilities of multi-agent systems.
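The provenance finding suggests a simple guard: only fold provenance-tagged claims into synthesis when their confidence metadata has been verified. The record shape and the 0.8 threshold below are illustrative assumptions, not part of LDP:

```python
# Hypothetical confidence-gated filter over provenance records.
# Field names and the min_confidence default are illustrative assumptions.
def usable_claims(provenance, min_confidence=0.8):
    return [rec["claim"] for rec in provenance
            if rec.get("verified") and rec.get("confidence", 0.0) >= min_confidence]

provenance = [
    {"claim": "A", "confidence": 0.95, "verified": True},
    {"claim": "B", "confidence": 0.90, "verified": False},  # unverified: dropped
    {"claim": "C", "confidence": 0.40, "verified": True},   # low confidence: dropped
]
print(usable_claims(provenance))  # ['A']
```

A gate like this is what separates useful confidence metadata from the noisy, unverified provenance the paper found to actively harm synthesis.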
- LDP introduces delegate identity cards and semantic frames, reducing token use by 37% with no quality loss.
- Identity-aware routing achieves up to 12x lower latency by matching tasks to specialized AI models.
- Protocol-level security and governance boost attack detection to 96% and failure recovery to 100% completion.
Why It Matters
Enables more efficient, secure, and scalable AI agent systems, reducing costs and improving reliability for enterprise deployments.