EvoDR: Evolving Dispatching Rules via Large Language Model for Dynamic Flexible Assembly Flow Shop Scheduling
A new AI framework uses two LLMs working together to autonomously write and refine factory scheduling code.
A research team has introduced EvoDR, a novel framework that leverages large language models (LLMs) to autonomously evolve dispatching rules for complex factory scheduling. The system tackles Dynamic Flexible Assembly Flow Shop Scheduling, a critical manufacturing problem involving multiple products, variable machine availability, and supply chain coordination. Traditional methods based on genetic programming are limited by fixed parameters and poor interpretability. EvoDR overcomes these limitations by modeling the scheduling challenge as a priority-sorting task on a heterogeneous graph, providing a rich structure for the AI to reason about.
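To make the priority-sorting framing concrete, here is a minimal sketch of a dispatching rule scoring operations that compete for machines. The class names, fields, and the specific rule are illustrative assumptions, not EvoDR's actual representation:

```python
from dataclasses import dataclass

# Hypothetical nodes of a heterogeneous scheduling graph: operations
# (jobs' steps) and the machines eligible to process them.

@dataclass
class Machine:
    name: str
    available_at: float = 0.0  # time the machine frees up

@dataclass
class Operation:
    job: str
    due_date: float
    processing_time: float
    eligible: list  # Machine objects that can run this operation

def priority(op: Operation, now: float) -> float:
    """One plausible dispatching rule: combine due-date slack with
    machine availability. Lower score means dispatch first."""
    slack = op.due_date - now - op.processing_time
    earliest = min(m.available_at for m in op.eligible)
    return slack + max(earliest - now, 0.0)

m1 = Machine("M1", available_at=2.0)
m2 = Machine("M2", available_at=0.0)
ops = [
    Operation("J1", due_date=10.0, processing_time=4.0, eligible=[m1]),
    Operation("J2", due_date=6.0, processing_time=3.0, eligible=[m1, m2]),
]
queue = sorted(ops, key=lambda op: priority(op, now=0.0))
print([op.job for op in queue])  # J2 has less slack, so it goes first
```

Because a rule like this is plain code over graph features, an LLM can rewrite it directly, which is what makes the priority-sorting formulation a natural fit for LLM-driven evolution.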
EvoDR's core innovation is a dual-expert co-evolution mechanism. Two LLMs work in tandem: LLM-A acts as a 'code generator,' writing new scheduling rules as executable code, while LLM-S serves as a 'scheduling analyst,' evaluating performance and providing reflective feedback. This collaborative loop, guided by a hybrid evaluation metric, allows the system to continuously refine rules suited to dynamic production conditions. The AI isn't just selecting from a pre-defined set; it's generating novel, interpretable logic.
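The generate–evaluate–reflect loop can be sketched as follows. The two LLM calls are stubbed with placeholder functions standing in for LLM-A and LLM-S, and the toy simulator, function names, and scoring are assumptions for illustration, not the paper's implementation:

```python
import random

def llm_a_generate(feedback: str) -> str:
    """Stand-in for LLM-A: emits source code for a candidate rule.
    A real system would prompt an LLM with the analyst's feedback."""
    w = random.uniform(0.0, 2.0)
    return f"def rule(slack, load):\n    return slack + {w:.3f} * load\n"

def llm_s_analyze(code: str, tardiness: float) -> str:
    """Stand-in for LLM-S: reflective feedback on the evaluated rule."""
    return f"Average tardiness was {tardiness:.2f}; reconsider the load weight."

def evaluate(code: str) -> float:
    """Toy simulator: dispatch three (slack, load) cases by the rule's
    ordering and charge tardiness when urgent cases are delayed."""
    ns = {}
    exec(code, ns)
    rule = ns["rule"]
    cases = [(5.0, 1.0), (-2.0, 3.0), (0.5, 0.0)]
    order = sorted(cases, key=lambda c: rule(*c))
    return sum(i * max(-slack, 0.0) for i, (slack, _) in enumerate(order))

random.seed(0)
best_code, best_tard, feedback = None, float("inf"), ""
for generation in range(10):
    code = llm_a_generate(feedback)       # LLM-A proposes a new rule
    tard = evaluate(code)                 # simulate and score it
    feedback = llm_s_analyze(code, tard)  # LLM-S reflects on the result
    if tard < best_tard:                  # keep the elite rule
        best_code, best_tard = code, tard
print(f"best tardiness: {best_tard:.2f}")
```

The key structural point this sketch captures is the division of labor: one model only writes code, the other only interprets results, and the evaluation metric closes the loop between them.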
The results are compelling. In a rigorous evaluation spanning 24 different resource and disturbance scenarios—totaling 480 test instances—the rules evolved by EvoDR achieved lower average tardiness than state-of-the-art, expert-designed scheduling approaches. The framework also demonstrated superior robustness: its AI-generated rules performed consistently well across varying levels of complexity and disruption. This marks a significant step toward self-improving AI systems for industrial optimization, moving beyond human-coded heuristics.
- Uses a dual-LLM 'co-evolution' mechanism where one model generates code and another analyzes scheduling performance.
- Tested on 480 complex manufacturing instances across 24 scenarios, consistently beating expert-designed scheduling rules.
- Models multi-stage assembly decisions as a graph priority problem, allowing the LLM to reason about relationships between tasks and resources.
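Evaluating a rule across many scenarios, as in the points above, requires collapsing per-scenario results into a single fitness value. One common way to reward both low average tardiness and robustness is to penalize variability; the weighting below is an illustrative assumption, not the paper's exact hybrid metric:

```python
import statistics

def hybrid_score(tardiness_per_scenario, robustness_weight=0.5):
    """Hypothetical hybrid fitness: mean tardiness plus a penalty for
    spread across scenarios. Lower is better."""
    mean = statistics.fmean(tardiness_per_scenario)
    spread = statistics.pstdev(tardiness_per_scenario)
    return mean + robustness_weight * spread

# Two rules with the same mean tardiness (4.0) across four scenarios:
stable = [4.0, 4.2, 3.8, 4.0]
erratic = [1.0, 9.0, 0.5, 5.5]
print(hybrid_score(stable) < hybrid_score(erratic))  # the stable rule wins
```

A metric of this shape is what lets an evolutionary loop prefer rules that degrade gracefully under disturbances rather than ones that excel only in easy scenarios.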
Why It Matters
Enables autonomous, adaptive optimization of complex supply chains and production lines, reducing delays without manual rule engineering.