Call-Chain-Aware LLM-Based Test Generation for Java Projects
New approach uses call-chain context to generate better unit tests for complex Java projects.
Researchers from the University of Ottawa and Huawei have introduced CAT, a novel LLM-based test generation approach that explicitly incorporates call-chain and dependency contexts into prompts via dedicated static analysis. Unlike existing methods that rely on execution-path information, CAT systematically models caller-callee relationships, object constructors, and third-party dependencies to construct executable, semantically valid test contexts. The approach also supports iterative test fixing when generation failures occur, making it robust for complex Java projects with deep call chains and intricate object initialization.
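To make the idea concrete, here is a minimal, hypothetical sketch of the kind of call-chain context a prompt builder could collect with static analysis. The call graph, method names, and the `calleeContext` helper are all illustrative assumptions, not CAT's actual implementation.

```java
import java.util.*;

// Hypothetical sketch: gather the transitive callees of a target method
// from a pre-computed call graph, so they can be included in the prompt.
// The toy call graph and method names are invented for illustration.
public class CallChainContext {
    // Toy call graph: method -> methods it calls directly.
    static final Map<String, List<String>> CALL_GRAPH = Map.of(
        "OrderService.place", List.of("Inventory.reserve", "PaymentClient.charge"),
        "Inventory.reserve", List.of("StockRepo.decrement"),
        "PaymentClient.charge", List.of(),
        "StockRepo.decrement", List.of()
    );

    // Breadth-first walk up to maxDepth levels below the target,
    // returning callees in discovery order without duplicates.
    static List<String> calleeContext(String target, int maxDepth) {
        List<String> context = new ArrayList<>();
        Set<String> seen = new HashSet<>();
        Deque<String> frontier = new ArrayDeque<>(List.of(target));
        for (int depth = 0; depth < maxDepth && !frontier.isEmpty(); depth++) {
            Deque<String> next = new ArrayDeque<>();
            for (String method : frontier) {
                for (String callee : CALL_GRAPH.getOrDefault(method, List.of())) {
                    if (seen.add(callee)) {
                        context.add(callee);
                        next.add(callee);
                    }
                }
            }
            frontier = next;
        }
        return context;
    }

    public static void main(String[] args) {
        // Two levels deep from the entry point.
        System.out.println(calleeContext("OrderService.place", 2));
    }
}
```

In a real pipeline the source of each collected method, plus constructor signatures and relevant third-party types, would be serialized into the LLM prompt alongside the method under test.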
Evaluated on the Defects4J benchmark and four real-world GitHub projects released after the LLM's training cut-off date, CAT achieved significant improvements over the state-of-the-art PANTA approach: line coverage improved by 18.04% and branch coverage by 21.74% across Defects4J projects. On the post-cutoff projects, where data leakage from pretraining can be ruled out, CAT consistently outperformed PANTA as well. An ablation study confirmed that the call-chain and dependency contexts are critical to these gains. The work highlights the potential of combining static analysis with LLMs for more effective automated test generation.
- CAT improves line coverage by 18.04% and branch coverage by 21.74% over PANTA on Defects4J
- CAT models caller-callee relationships, object constructors, and third-party dependencies via static analysis
- CAT applies iterative test fixing to recover from generation failures
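The iterative test-fixing step can be pictured as a simple retry loop: run the generated test, and if it fails to compile or pass, feed the error message back to the generator for another attempt. The sketch below is a generic stand-in under assumed interfaces (`runner`, `regenerate`), not CAT's actual components.

```java
import java.util.function.Function;

// Hypothetical sketch of an iterative test-fixing loop. The runner and
// regeneration function are stand-ins; in practice the runner would
// compile and execute the test, and regeneration would query the LLM
// with the error message appended to the prompt.
public class IterativeFixer {
    record RunResult(boolean passed, String error) {}

    static String fixLoop(String initialTest,
                          Function<String, RunResult> runner,
                          Function<String, String> regenerate,
                          int maxAttempts) {
        String test = initialTest;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            RunResult result = runner.apply(test);
            if (result.passed()) {
                return test;                      // test compiles and passes
            }
            test = regenerate.apply(result.error()); // feed the error back
        }
        return null; // retry budget exhausted; discard this test
    }

    public static void main(String[] args) {
        // Toy setup: only the string "fixed" passes, and the generator
        // repairs the test on its first retry.
        String out = fixLoop(
            "broken",
            t -> t.equals("fixed") ? new RunResult(true, "")
                                   : new RunResult(false, "compile error"),
            err -> "fixed",
            3);
        System.out.println(out);
    }
}
```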
Why It Matters
Better automated test generation reduces manual effort and increases software reliability for complex Java projects.