EmpiRE-Compass: A Neuro-Symbolic Dashboard for Sustainable and Dynamic Knowledge Exploration, Synthesis, and Reuse
Open-source tool uses GPT-4o mini and knowledge graphs to make literature reviews sustainable and reusable.
A team of researchers led by Oliver Karras has introduced EmpiRE-Compass, a dashboard designed to counter the growing crisis of low-quality, AI-generated literature reviews (LRs) in software and requirements engineering. The tool addresses a critical problem: generative AI can produce LRs rapidly, but the resulting reviews often lack rigor, transparency, and reusable data, creating a flood of unreliable secondary studies. EmpiRE-Compass proposes a sustainable alternative: semantically structuring LR data within research knowledge graphs (RKGs) and leveraging large language models (LLMs) for dynamic access and synthesis. Its overarching goal is to transform LRs into collaborative, continuously updated, and transparent resources.
The dashboard is built on a modular system design and offers three core capabilities. First, it provides exploratory visual analytics for predefined, curated competency questions. Second, its neuro-symbolic synthesis engine lets users ask custom questions, blending the structured reasoning of symbolic AI (via RKGs) with the flexible language understanding of neural models (such as GPT-4o mini, the default). Third, it makes all queries, analyses, and results openly available, fostering replication and reuse. The entire project is released as open source to encourage community adoption and extension. To manage operational costs for the hosted version, access is limited to 25 LLM requests per IP address daily. The tool represents a significant step toward making academic knowledge synthesis more rigorous and sustainable in the age of generative AI.
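To make the neuro-symbolic division of labor concrete, the following is a minimal sketch, not the tool's actual code: a symbolic step retrieves structured facts (standing in for a SPARQL query over an RKG), and a neural step synthesizes an answer grounded in those facts (stubbed here; a real deployment would call an LLM such as GPT-4o mini). All names and data below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Fact:
    """One subject-property-value triple, as an RKG would store it."""
    paper: str
    prop: str
    value: str

# Symbolic side: a tiny in-memory store standing in for a research
# knowledge graph; a real system would issue a SPARQL query instead.
RKG = [
    Fact("Paper A", "empirical_method", "case study"),
    Fact("Paper B", "empirical_method", "survey"),
    Fact("Paper C", "empirical_method", "case study"),
]

def retrieve(prop: str) -> list[Fact]:
    """Exact, auditable retrieval: only facts present in the graph."""
    return [f for f in RKG if f.prop == prop]

def synthesize(question: str, facts: list[Fact]) -> str:
    """Neural side (stubbed): build the grounded prompt an LLM would
    receive, so every claim in the answer traces back to a fact."""
    evidence = "; ".join(f"{f.paper}: {f.value}" for f in facts)
    return f"Q: {question}\nEvidence: {evidence}"

answer = synthesize("Which empirical methods are used?",
                    retrieve("empirical_method"))
print(answer)
```

The key design point this illustrates: the LLM never answers from its own memory alone; the symbolic retrieval step constrains it to facts recorded in the graph, which is what makes the synthesis replicable.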
- Combines research knowledge graphs (RKGs) with LLMs like GPT-4o mini in a neuro-symbolic architecture for structured, reliable synthesis.
- Provides three core functions: visual analytics for curated competency questions, neuro-symbolic synthesis for custom queries, and full openness of all data and artifacts.
- Released as a fully open-source project with a free online dashboard (25 requests/IP/day limit) to foster adoption and combat low-quality AI reviews.
Why It Matters
Provides a rigorous, transparent framework to counter the flood of low-quality, AI-generated literature reviews, making academic synthesis sustainable.