From Chat to Interview: Agentic Requirements Elicitation with an Experience Ontology
A new agent beats baselines by 33% and automates structured requirements interviews with ontology-guided reasoning.
Requirements elicitation interviews are critical yet time-consuming, often missing implicit requirements because they rely heavily on an analyst's experience. Large language models (LLMs) can automate the conversation, but pure chat-based approaches lack structure and can ask redundant questions. Researchers at Peking University and other Chinese institutions propose OntoAgent, an AI agent that mimics the structured cognitive framework experienced analysts use.
OntoAgent first constructs an experience ontology from domain-specific descriptions, organizing requirement concerns into a hierarchical structure. During interviews, it iteratively performs four operations: ParseUser to understand the current input, ScoreOnto to rank concerns, ReRankOnto to refine priority, and GatePrune to filter irrelevant topics. The selected concern is then combined with dialogue context to generate a precise elicitation question. This guided approach ensures systematic coverage without unnecessary repetition.
Quantitative experiments on website application datasets show OntoAgent significantly outperforms existing methods. It achieves a 33% improvement in Interview Requirements Effectiveness (IRE) and a 21% improvement in Task-oriented Knowledge Query Rate (TKQR). Ablation studies confirm each component contributes meaningfully. A qualitative user study further validated its practical benefits in real-world scenarios, suggesting the approach could extend to other domains like finance or healthcare.
- OntoAgent uses an experience ontology to systematically uncover implicit requirements, avoiding redundant questions common in free-form LLM chats.
- Achieves 33% higher IRE and 21% higher TKQR over baseline methods in website domain tests.
- The agent runs four ontology-guided operations (ParseUser, ScoreOnto, ReRankOnto, GatePrune) to select relevant concerns before generating each question.
Why It Matters
Automating expert-level requirements interviews reduces missed features and redundant questions, shortening the elicitation phase and cutting the downstream rework that late-discovered requirements cause.