Research & Papers

Structured Progressive Knowledge Activation for LLM-Driven Neural Architecture Search

Researchers tackle the 'functional entanglement' problem in LLM-powered NAS

Deep Dive

LLMs are increasingly used to automate neural architecture search (NAS) by leveraging their coding and architectural priors to suggest edits. However, a major obstacle is functional entanglement: a single local code edit can cascade into unpredictable, global performance shifts because multiple interacting factors are inadvertently coupled. This makes LLM-generated modifications unreliable and sample-inefficient.
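
To see why a local edit entangles several factors, consider a toy PyTorch module (ours, not the paper's; `TinyEncoder` and its fields are illustrative names) in which one hyperparameter silently feeds three interacting components:

```python
# A toy sketch (not from the paper) of functional entanglement.
import torch.nn as nn

class TinyEncoder(nn.Module):
    def __init__(self, hidden_dim: int = 128, num_heads: int = 4):
        super().__init__()
        # All three modules below are silently coupled to hidden_dim:
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads)  # per-head dim = hidden_dim // num_heads
        self.ffn = nn.Linear(hidden_dim, 4 * hidden_dim)          # FFN width scales with hidden_dim
        self.norm = nn.LayerNorm(hidden_dim)                      # normalized shape tied to hidden_dim

baseline = TinyEncoder(hidden_dim=128)  # original architecture
edited = TinyEncoder(hidden_dim=192)    # a "one-line" edit: attention geometry,
                                        # FFN capacity, and normalization all
                                        # change at once, so the resulting
                                        # performance shift cannot be cleanly
                                        # attributed to any single factor
```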

To address this, researchers Zhen Liu, Yuhan Liu, and Jingwen Fu propose SPARK (Structured Progressive Knowledge Activation). SPARK explicitly selects which functional factor to modify and conditions the LLM's edit on that factor alone, drastically reducing entangled side effects. On the CLRS-DFS benchmark, SPARK achieved a 28.1x speedup in sample efficiency during architecture evolution and a 22.9% relative improvement in out-of-distribution accuracy. The framework paves the way for more reliable and precise LLM-driven design of neural networks.
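
The core loop can be pictured as follows. This is a minimal sketch of factor-conditioned editing as we read it from the description above, not the authors' code: `FACTORS`, `llm_complete`, and `evolve` are hypothetical names, and random factor selection merely stands in for whatever selection policy SPARK actually uses.

```python
# A minimal sketch of factor-conditioned editing; all names are
# hypothetical illustrations, not the authors' API.
import random

FACTORS = ["aggregation_function", "message_passing_depth", "gating_mechanism"]

def llm_complete(prompt: str) -> str:
    # Stand-in for a real LLM call so the sketch runs end to end;
    # it simply echoes the source code back unchanged.
    return prompt.split("\n\n", 1)[-1]

def propose_edit(source: str, factor: str) -> str:
    # The prompt names exactly one functional factor and forbids touching
    # the rest: the decoupling step that factor-conditioned editing makes explicit.
    prompt = (
        f"Modify ONLY the `{factor}` aspect of this architecture and "
        f"leave every other functional factor unchanged.\n\n{source}"
    )
    return llm_complete(prompt)

def evolve(source: str, evaluate, steps: int = 20) -> str:
    best, best_score = source, evaluate(source)
    for _ in range(steps):
        # Random choice here; the paper's "progressive" selection is
        # presumably more structured than this placeholder.
        factor = random.choice(FACTORS)
        candidate = propose_edit(best, factor)  # factor-conditioned edit
        score = evaluate(candidate)
        if score > best_score:
            # Each accepted change is attributable to exactly one factor.
            best, best_score = candidate, score
    return best
```

With a real completion API behind `llm_complete` and a benchmark score as `evaluate`, the loop accepts only edits that improve the metric, and every accepted change is attributable to a single functional factor rather than an entangled bundle of them.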

Key Points
  • SPARK introduces factor-conditioned editing to decouple functional entanglement in LLM-driven NAS
  • Achieves 28.1x sample-efficiency speedup on the CLRS-DFS benchmark
  • 22.9% relative improvement in out-of-distribution accuracy over baseline methods

Why It Matters

SPARK makes LLM-guided architecture search practical and efficient, accelerating AI model design without costly trial-and-error.