Exploring LLMs for User Story Extraction from Mockups
Adding a Language Extended Lexicon glossary to prompts significantly boosts AI-generated requirement accuracy.
Researchers Diego Firmenich and four colleagues published a 14-page study exploring how Large Language Models (LLMs) can automatically generate user stories from high-fidelity mockups. Their case study found that incorporating a Language Extended Lexicon (LEL) glossary into prompts significantly enhances the accuracy and suitability of the generated functional requirements. This represents a step toward automating parts of requirements engineering, potentially improving communication between users and developers.
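To make the prompting technique concrete, the sketch below shows one plausible way to prepend an LEL glossary to a prompt before asking an LLM to derive user stories from a mockup description. The glossary entries, mockup text, and helper function are hypothetical illustrations, not taken from the paper.

```python
# Illustrative sketch (not the authors' implementation): ground the LLM
# in domain vocabulary by prepending a Language Extended Lexicon (LEL)
# glossary to the prompt. All entries below are hypothetical examples.

LEL_GLOSSARY = {
    "Cart": "A temporary collection of products the shopper intends to buy.",
    "Checkout": "The process of confirming the cart and paying for its items.",
}

def build_prompt(mockup_description: str, glossary: dict) -> str:
    """Assemble a prompt that anchors generation in the LEL glossary."""
    glossary_lines = "\n".join(
        f"- {term}: {definition}" for term, definition in glossary.items()
    )
    return (
        "You are a requirements engineer.\n"
        "Use ONLY the domain vocabulary defined in this LEL glossary:\n"
        f"{glossary_lines}\n\n"
        "Mockup description:\n"
        f"{mockup_description}\n\n"
        "Write user stories in the form: "
        "As a <role>, I want <goal>, so that <benefit>."
    )

prompt = build_prompt(
    "A screen showing the cart contents with a 'Pay now' button.",
    LEL_GLOSSARY,
)
print(prompt)
```

The resulting string would then be sent to an LLM; the study's finding is that including the glossary section, rather than the mockup description alone, improves the accuracy and suitability of the generated stories.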
Why It Matters
Automates tedious requirement documentation, speeding up agile development cycles and reducing miscommunication.