An Agentic Operationalization of DISARM for FIMI Investigation on Social Media
Researchers' AI system uncovered 30+ hidden Russian bot accounts targeting Moldova's 2025 election.
A team of researchers including Kevin Tseng and Phil Tinn has published a paper detailing a novel AI system that automates the detection of foreign influence campaigns. The work presents an "agentic operationalization" of the DISARM framework, a standardized analytical schema used by NATO and allied partners to characterize Foreign Information Manipulation and Interference (FIMI). The core innovation is a pipeline of coordinated AI agents that scan social media data, identify candidate manipulative behaviors, and transparently map these activities to the DISARM taxonomy through auditable reasoning steps.
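To make the pipeline idea concrete, here is a minimal sketch of a two-stage agent workflow: one stage flags candidate manipulative behaviors in post data, and a second stage maps each flag to a taxonomy entry while recording an auditable reasoning trail. All names, heuristics, and technique labels below are illustrative assumptions, not the paper's actual implementation or the real DISARM technique IDs; a production system would replace the heuristic with model-driven agents and use the official DISARM taxonomy.

```python
from dataclasses import dataclass, field

# Illustrative placeholder labels only -- NOT real DISARM technique IDs.
TAXONOMY = {
    "coordinated_posting": "Flood the Information Space (placeholder ID)",
    "fake_persona": "Create Inauthentic Personas (placeholder ID)",
}

@dataclass
class Finding:
    account: str
    behavior: str
    technique: str
    reasoning: list = field(default_factory=list)  # auditable reasoning steps

def scan_agent(posts):
    """Stage 1: flag candidate manipulative behaviors.

    A real system would use an AI agent here; this stand-in heuristic
    flags accounts that post many near-duplicate messages.
    """
    flagged = []
    for p in posts:
        if p["duplicates"] >= 3:
            evidence = f"{p['duplicates']} near-duplicate posts"
            flagged.append((p["account"], "coordinated_posting", evidence))
    return flagged

def mapping_agent(flagged):
    """Stage 2: map each flagged behavior to a taxonomy entry,
    recording why the mapping was made so analysts can audit it."""
    findings = []
    for account, behavior, evidence in flagged:
        technique = TAXONOMY.get(behavior)
        if technique:
            findings.append(Finding(
                account=account,
                behavior=behavior,
                technique=technique,
                reasoning=[f"evidence: {evidence}",
                           f"mapped '{behavior}' -> '{technique}'"]))
    return findings

# Toy input: hypothetical accounts and duplicate-post counts.
posts = [
    {"account": "@bot_417", "duplicates": 5},
    {"account": "@citizen_a", "duplicates": 0},
]
findings = mapping_agent(scan_agent(posts))
for f in findings:
    print(f.account, "->", f.technique)
```

The key design point carried over from the paper's description is the explicit reasoning list on each finding: every mapping is accompanied by the evidence and the rule (or, in the real system, the agent's reasoning) that produced it, which is what makes the output auditable rather than a black-box score.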
This approach directly addresses a critical challenge: while frameworks like DISARM exist for analysis, applying them at scale for automated detection has been difficult. The researchers' system is framework-agnostic and designed to integrate general agentic AI components. In an evaluation on two practitioner-annotated, real-world datasets, the AI-augmented workflow showed that it can effectively scale analytic processes that are traditionally manual, time-intensive, and reliant on expert interpretation.
The system's impact was demonstrated by a striking finding. During a pilot investigation, it surfaced more than 30 previously undetected Russian bot accounts deployed to target the 2025 election in Moldova, accounts that had been missed in a prior, non-AI-assisted investigation. By improving analytic throughput, interoperability between partners, and the explainability of findings, the research provides a tangible tool for defense policy and planning. It aims to improve situational awareness and enable rapid assessment of threats in the global information environment, a domain where AI is already lowering the barrier for adversaries to conduct large-scale, automated manipulation.
Key Takeaways
- The system uses coordinated AI agents to automate the application of the DISARM framework, a NATO standard for analyzing information threats.
- In a real-world test, it uncovered more than 30 previously hidden Russian bot accounts targeting Moldova's 2025 election, accounts that a prior, non-AI-assisted investigation had missed.
- It transforms manual, interpretation-heavy analytic workflows into scalable, transparent processes, directly addressing defense needs for rapid threat assessment.
Why It Matters
Provides defenders with an AI-powered tool to automatically detect and analyze foreign influence campaigns at scale, countering AI-augmented disinformation.