PRCD-MAP: Learning How Much to Trust Imperfect Priors in Causal Discovery
Per-edge trust calibration boosts AUROC by up to 0.123 on real-world data.
PRCD-MAP addresses a brittle trade-off in causal discovery with external priors of unknown reliability. Existing methods either trust priors blindly or ignore them entirely, yet real priors are heterogeneously reliable: physical laws are trustworthy while LLM-suggested edges are speculative. PRCD-MAP introduces a soft prior-consumption layer that assigns a per-edge trust weight to the imperfect prior and uses it to modulate a prior-aware ℓ1 penalty and a prior-weighted ℓ2 regularizer in a MAP objective. Trust is calibrated by empirical Bayes on a Laplace-approximated marginal likelihood and propagated along the prior graph by an MLP, so data-confirmed neighborhoods boost trust while contradictions suppress it.
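To make the trust-modulated objective concrete, here is a minimal sketch of how a per-edge trust matrix could weight the two penalty terms. The function name, the exact functional forms, and all hyperparameter names are illustrative assumptions, not the paper's implementation; the sketch only shows the qualitative behavior described above (trusted prior edges are penalized less by ℓ1 and pulled toward the prior by ℓ2).

```python
import numpy as np

def prior_aware_penalty(W, prior, trust, lam1=0.1, lam2=0.05):
    """Hypothetical sketch of a trust-modulated prior penalty.

    W     : (d, d) candidate weighted adjacency matrix
    prior : (d, d) imperfect prior adjacency (0/1 or soft scores)
    trust : (d, d) per-edge trust in [0, 1]

    High trust in a prior edge shrinks its l1 weight (so the edge is
    cheap to keep), while the l2 term pulls W toward the prior only
    where trust is high. With trust == 0 this reduces to a plain
    no-prior l1 penalty, mirroring the fallback behavior.
    """
    l1_weights = lam1 * (1.0 - trust * prior)      # trusted prior edges cost less
    l1 = np.sum(l1_weights * np.abs(W))
    l2 = lam2 * np.sum(trust * (W - prior) ** 2)   # anchor to prior where trusted
    return l1 + l2
```

With zero trust the ℓ2 term vanishes and the ℓ1 weights are uniform, so an uninformative prior leaves the objective at the standard sparse baseline; with full trust, a W that matches the prior incurs no penalty on the prior's edges.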
Empirically, on real-world CausalTime data, PRCD-MAP exploits informative priors when present (+0.123 AUROC on AQI, +0.043 on Medical over PCMCI+), automatically attenuates unreliable priors (Traffic stress test), and retains its lead at 300 variables. Against BayesDAG, the closest soft-Bayesian baseline, PRCD-MAP wins on every dataset under matched conditions. A four-way ablation shows that EB calibration and MLP trust propagation jointly account for the largest share of the gain. The method also offers a population-level safety guarantee: it is ε-safe in expectation over prior distributions with ε = O(d²/T), recovering a no-prior baseline when the prior is uninformative.
- Per-edge trust calibration via empirical Bayes and MLP propagation, enabling selective use of priors from LLMs or domain knowledge
- +0.123 AUROC improvement on AQI and +0.043 on Medical datasets over PCMCI+ baseline
- Safety guarantee of ε = O(d²/T) and automatic fallback to no-prior baseline when priors are uninformative
Why It Matters
Enables reliable causal discovery by weighing uncertain priors intelligently, a capability critical for scientific analysis and AI-driven decision-making.