Research & Papers

DynaRAG: Bridging Static and Dynamic Knowledge in Retrieval-Augmented Generation

New RAG system uses Gorilla v2 to call APIs when its knowledge is outdated, cutting hallucinations.

Deep Dive

A team of researchers has introduced DynaRAG, a novel retrieval-augmented generation (RAG) framework designed to overcome a key limitation of current systems: static knowledge. Traditional RAG relies on a fixed document corpus, which can become outdated. DynaRAG bridges this gap by dynamically integrating live, time-sensitive information. When a user's query requires current data, the system can selectively invoke external APIs to fetch the latest facts, moving beyond its static knowledge base.
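The routing behavior described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the keyword-based sufficiency check and the function names (`is_sufficient`, `answer`) are hypothetical stand-ins for DynaRAG's learned components.

```python
def is_sufficient(query: str, docs: list[str]) -> bool:
    """Toy stand-in for a sufficiency check: treat the static corpus as
    adequate unless the query signals time-sensitive information."""
    time_sensitive = ("today", "current", "latest", "stock price")
    return not any(marker in query.lower() for marker in time_sensitive)

def answer(query: str, docs: list[str]) -> str:
    # Static path: ground the answer in the retrieved documents.
    if is_sufficient(query, docs):
        return f"[static] answered from {len(docs)} retrieved docs"
    # Dynamic path: fall back to a live API call (Gorilla v2 in the paper).
    return "[dynamic] answered via live API call"

print(answer("Who wrote the 2019 FAISS paper?", ["doc1", "doc2"]))
print(answer("What is the latest AAPL stock price?", []))
```

In the real system this decision is made by a trained classifier over the query and the retrieved evidence, but the control flow is the same: answer from static documents when they suffice, otherwise fetch live data.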

The architecture chains several specialized components. An LLM-based reranker first scores the relevance of the retrieved documents. A separate sufficiency classifier then decides whether the static information is adequate or a fallback to an API call is needed. For the dynamic calls, the system leverages Gorilla v2, a state-of-the-art model specialized for accurate API tool invocation. To make that invocation robust, candidate API schemas are filtered with FAISS-based similarity search, narrowing the choice to the correct API before the call is issued.
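Schema filtering of this kind amounts to nearest-neighbor search over embedded API descriptions. The sketch below uses toy hand-made vectors and plain cosine similarity in place of a real text encoder and a FAISS index; the schema names in `API_SCHEMAS` are illustrative and not from the paper.

```python
import math

# Toy "embeddings" for a few API schemas. A real system would embed each
# schema's description with a text encoder and store the vectors in a
# FAISS index for fast similarity search.
API_SCHEMAS = {
    "get_stock_quote(ticker)":   [0.9, 0.1, 0.0],
    "get_weather(city, date)":   [0.1, 0.9, 0.1],
    "search_news(topic, since)": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def select_api(query_vec):
    """Return the API schema whose embedding is closest to the query."""
    return max(API_SCHEMAS, key=lambda s: cosine(API_SCHEMAS[s], query_vec))

# A query embedding pointing in the "stock" direction selects the quote API.
print(select_api([0.8, 0.2, 0.1]))  # get_stock_quote(ticker)
```

At scale, the dictionary lookup would be replaced by a FAISS index (e.g., an inner-product index over normalized vectors), but the selection logic, pick the schema most similar to the query, is the same.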

Evaluations on the CRAG benchmark show promising results. DynaRAG significantly boosts accuracy on questions requiring dynamic knowledge, such as recent news or stock prices, compared to static-only RAG systems. Furthermore, by grounding its answers in verified documents or live API data, the framework demonstrably reduces hallucinations. This research highlights the need for dynamic-aware routing and selective tool use in building reliable, real-world question-answering AI agents.

Key Points
  • Dynamically combines static documents with live API calls using the Gorilla v2 model for tool invocation.
  • Uses an LLM-based sufficiency classifier to decide when static knowledge is insufficient, triggering a dynamic fetch.
  • Shows significant accuracy gains on the CRAG benchmark for time-sensitive queries and reduces hallucinations.

Why It Matters

Enables AI assistants to provide accurate, up-to-date answers on current events, finance, and news, moving beyond canned knowledge.