Developer Tools

CircuChain: Disentangling Competence and Compliance in LLM Circuit Analysis

New research shows top AI models get circuit analysis right but ignore user-defined rules 100% of the time.

Deep Dive

Researcher Mayank Ravishankara introduced CircuChain, a diagnostic benchmark for testing LLMs on electrical circuit analysis. It uses 100 Control/Trap problem pairs across five circuit topologies to separate physical-reasoning competence from instruction compliance: a Control problem is posed under standard conventions, while its Trap counterpart adds an explicit instruction that overrides the model's training priors. The key finding is a Compliance-Competence Divergence: the strongest models tested showed near-perfect physics yet high rates of convention violations when instructions contradicted their priors, revealing that increased capability doesn't guarantee constraint alignment.
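The paired design described above can be sketched as a simple scoring scheme. This is a hypothetical illustration, not the paper's actual code: the field names and data shapes are assumptions, but the idea is that competence is measured on Control problems, compliance on Trap problems, and the gap between them is the divergence.

```python
def score_pairs(results):
    """Score a list of Control/Trap pair outcomes (hypothetical schema).

    Each entry has:
      'control_correct'    - bool, physics answer correct on the Control problem
      'trap_followed_rule' - bool, model obeyed the overridden convention on the Trap
    """
    n = len(results)
    competence = sum(r["control_correct"] for r in results) / n
    compliance = sum(r["trap_followed_rule"] for r in results) / n
    return {
        "competence": competence,
        "compliance": compliance,
        # A large positive gap is the Compliance-Competence Divergence:
        # the model solves the physics but ignores the stated rule.
        "divergence": competence - compliance,
    }

# Toy example: the model gets every Control problem right
# but violates the custom convention on every Trap problem.
demo = [{"control_correct": True, "trap_followed_rule": False}] * 4
print(score_pairs(demo))
```

Under this sketch, a model with perfect physics and total non-compliance scores a divergence of 1.0, which is the failure mode the headline describes.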

Why It Matters

When deploying AI in safety-critical engineering, models must reliably follow explicit, user-defined rules, not just produce correct-looking answers.