Automating the Detection of Requirement Dependencies Using Large Language Models
New AI tool detects hidden software dependencies, with F1 scores up to 105% higher than existing methods.
A research team led by Ikram Darif has introduced LEREDD, a novel AI system that automates the detection of dependencies between software requirements. The work addresses a persistent challenge in software engineering: manual dependency analysis is critical for preventing integration failures and development delays, yet it is often skipped because of the sheer volume and ambiguity of natural-language requirements. LEREDD leverages the natural language processing capabilities of large language models through an architecture that combines retrieval-augmented generation (RAG) with in-context learning (ICL); a simplified sketch of such a pipeline appears below.
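To make the RAG-plus-ICL idea concrete, here is a minimal Python sketch of how such a requirement-pair classifier could be wired together. This is not LEREDD's implementation: the labeled store, function names, and example requirements are hypothetical, retrieval is approximated with TF-IDF similarity, and the LLM call is left as a stub.

```python
# Hypothetical sketch of a RAG + ICL requirement-dependency classifier.
# Not the authors' code; all names and example data are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical annotated store: (requirement A, requirement B, label).
LABELED_PAIRS = [
    ("The system shall encrypt stored data.",
     "The system shall manage encryption keys.", "Requires"),
    ("Users shall reset passwords via email.",
     "Reports shall be exported as PDF.", "Independent"),
]

def retrieve_similar(pair_text: str, k: int = 2) -> list:
    """RAG step: retrieve the k labeled pairs most similar to the query pair."""
    corpus = [f"{a} {b}" for a, b, _ in LABELED_PAIRS]
    vec = TfidfVectorizer().fit(corpus + [pair_text])
    sims = cosine_similarity(vec.transform([pair_text]),
                             vec.transform(corpus))[0]
    ranked = sorted(zip(sims, LABELED_PAIRS), key=lambda x: -x[0])
    return [pair for _, pair in ranked[:k]]

def build_prompt(req_a: str, req_b: str) -> str:
    """ICL step: assemble a few-shot prompt from the retrieved examples."""
    shots = retrieve_similar(f"{req_a} {req_b}")
    examples = "\n".join(
        f"A: {a}\nB: {b}\nDependency: {label}\n" for a, b, label in shots
    )
    return ("Classify the dependency between two software requirements.\n\n"
            f"{examples}\nA: {req_a}\nB: {req_b}\nDependency:")

def classify(req_a: str, req_b: str) -> str:
    prompt = build_prompt(req_a, req_b)
    # Placeholder: a real system would send `prompt` to an LLM backend.
    # Here we simply echo the label of the most similar retrieved example.
    return retrieve_similar(f"{req_a} {req_b}", k=1)[0][2]

print(classify("Data at rest shall be encrypted.",
               "Key rotation shall occur every 90 days."))
```

The retrieval step grounds the prompt in annotated examples so the model can classify new pairs without task-specific fine-tuning, which is the core appeal of combining RAG with ICL.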
The technical evaluation demonstrates LEREDD's strong performance: 0.93 accuracy and a 0.84 F1 score in classifying requirement pairs as dependent or non-dependent. Most notably, it achieves average relative F1-score gains of 94.87% and 105.41% over state-of-the-art baselines in detecting 'Requires' dependencies. The researchers have also released an annotated dataset of 813 requirement pairs across three systems to support further research. This represents a significant step toward fully automated requirements engineering pipelines, potentially reducing manual analysis time from hours to minutes while improving accuracy on complex software projects.
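For context on the reported numbers: accuracy is the fraction of pairs classified correctly, and F1 is the harmonic mean of precision and recall on the positive (dependent) class. A small illustrative computation, using made-up labels rather than LEREDD's outputs:

```python
# Illustrative metric computation; the labels below are invented, not LEREDD output.
from sklearn.metrics import accuracy_score, f1_score

y_true = ["Dependent", "Independent", "Dependent", "Independent", "Dependent"]
y_pred = ["Dependent", "Independent", "Independent", "Independent", "Dependent"]

# Accuracy: fraction of pairs classified correctly (here 4/5 = 0.8).
print("Accuracy:", accuracy_score(y_true, y_pred))
# F1 on the 'Dependent' class: harmonic mean of precision and recall.
print("F1:", f1_score(y_true, y_pred, pos_label="Dependent"))
```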
- LEREDD classifies requirement pairs with 0.93 accuracy and exceeds existing methods' F1 scores by up to 105% in relative terms
- Combines RAG and ICL techniques to process ambiguous natural language requirements without extensive training
- Releases an annotated dataset of 813 requirement pairs across three systems to support reproducibility and future research
Why It Matters
Automates a tedious manual analysis that, when skipped, leads to costly software integration errors, potentially saving development teams hundreds of hours.