BotVerse: Real-Time Event-Driven Simulation of Social Agents
New framework isolates LLM agents in a controlled environment fed by real-time Bluesky data to study disinformation.
A team of researchers from the Institute of Informatics and Telematics (CNR) in Italy has introduced BotVerse, a novel framework designed for the high-fidelity, real-time simulation of social agents powered by large language models (LLMs). The system addresses a critical ethical gap in computational social science by creating a fully isolated sandbox environment. Instead of deploying autonomous agents on live social networks, BotVerse grounds their interactions in real-time content streams—specifically from the Bluesky ecosystem—while keeping all simulation activity contained. This makes it possible to study complex social dynamics, such as the spread of disinformation, without polluting actual online spaces or manipulating real users.
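The paper does not publish its internals, but the isolation idea can be illustrated with a minimal sketch: real posts flow into the simulation read-only, while everything the agents produce stays in an internal store. The class, function, and stream names below are hypothetical stand-ins, not BotVerse code.

```python
import asyncio
from dataclasses import dataclass, field


@dataclass
class SandboxStore:
    """In-memory store: nothing an agent writes ever leaves the simulation."""
    posts: list = field(default_factory=list)

    def publish(self, author: str, text: str) -> None:
        # Writes land in the sandbox only; there is no path back to Bluesky.
        self.posts.append({"author": author, "text": text})


async def ingest_real_posts(queue: asyncio.Queue) -> None:
    """Stand-in for a read-only subscription to a live Bluesky-like stream."""
    for i in range(5):
        await queue.put({"author": f"real_user_{i}", "text": f"live post {i}"})
        await asyncio.sleep(0.1)


async def run_sandbox() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    sandbox = SandboxStore()
    await ingest_real_posts(queue)
    while not queue.empty():
        post = await queue.get()
        # Agents read real content but reply only inside the sandbox.
        sandbox.publish("agent_1", f"simulated reply to: {post['text']}")
    print(f"{len(sandbox.posts)} posts written to the sandbox")


asyncio.run(run_sandbox())
```

The key design point is the one-way boundary: live data comes in, but agent output has no route back to the real network.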
At its core, BotVerse features an asynchronous orchestration API and a dedicated simulation engine that goes beyond simple chat. It emulates human-like temporal patterns: agents don't just reply instantly, but act on simulated schedules and maintain a cognitive memory of what they have seen, making their behavior more realistic. Researchers can use the accompanying Synthetic Social Observatory to deploy armies of customizable AI personas and observe their multimodal interactions. The team demonstrated the framework's utility with a coordinated disinformation scenario, showcasing its potential as a safe, experimental platform for red-teaming exercises and for social scientists to test hypotheses about online behavior at a scale and speed impossible with human subjects.
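A rough sketch of what "human-like temporal patterns" and "cognitive memory" could look like in practice: each agent waits a sampled delay before acting and keeps a bounded memory of recent posts. The delay distribution, memory size, and reply stub are illustrative assumptions, not the engine's actual implementation.

```python
import asyncio
import random
from collections import deque


class SocialAgent:
    """Illustrative agent: acts on a sampled schedule, remembers recent posts."""

    def __init__(self, name: str, memory_size: int = 20):
        self.name = name
        self.memory = deque(maxlen=memory_size)  # bounded "cognitive" memory

    def observe(self, post: str) -> None:
        self.memory.append(post)

    async def act(self) -> str:
        # Human-like latency: wait a sampled delay instead of replying instantly.
        delay = random.lognormvariate(mu=0.0, sigma=0.5)  # assumed distribution
        await asyncio.sleep(delay)
        context = " | ".join(self.memory)
        # Stand-in for an LLM call conditioned on the persona and its memory.
        return f"[{self.name}] reply conditioned on: {context[-60:]}"


async def demo() -> None:
    agent = SocialAgent("persona_A")
    for post in ["breaking news!", "is this true?", "source please"]:
        agent.observe(post)
    print(await agent.act())


asyncio.run(demo())
```

Because each agent is just an async task, an orchestrator can run many such agents concurrently on independent schedules, which is the kind of behavior the asynchronous orchestration API is described as coordinating.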
- Ethical Sandbox: Isolates LLM agent interactions in a controlled environment, using real-time Bluesky data streams to avoid risks of live network deployment.
- Human-like Simulation: The engine gives agents realistic temporal patterns and cognitive memory, moving beyond simple instant-reply chatbots.
- Research Tool: The Synthetic Social Observatory lets researchers deploy customizable personas to safely study phenomena like disinformation at scale; a rough sketch of such a persona follows this list.
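As a sketch of what a "customizable persona" might look like as configuration, the field names below are assumptions for illustration, not the Synthetic Social Observatory's actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class PersonaConfig:
    """Hypothetical persona definition for a simulated social agent."""
    handle: str
    bio: str
    interests: list[str] = field(default_factory=list)
    posting_rate_per_hour: float = 1.0   # drives the simulated schedule
    stance: str = "neutral"              # role in a scenario, e.g. amplifier vs. debunker


amplifier = PersonaConfig(
    handle="agent_42",
    bio="sports fan, shares hot takes",
    interests=["football", "politics"],
    posting_rate_per_hour=6.0,
    stance="disinformation_amplifier",
)
fact_checker = PersonaConfig(handle="agent_7", bio="checks sources", stance="debunker")
print(amplifier.handle, fact_checker.stance)
```

Mixing personas with opposing stances and different posting rates is how a coordinated-disinformation scenario like the one the team demonstrated could be parameterized.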
Why It Matters
Provides a crucial, ethical testing ground for understanding AI-driven social dynamics and threats like disinformation before they impact real platforms.