Bounds and Identification of Joint Probabilities of Potential Outcomes and Observed Variables under Monotonicity Assumptions
Researchers propose novel monotonicity assumptions to solve a fundamental challenge in causal machine learning.
A team of researchers has introduced a new methodological framework to address a persistent challenge in causal machine learning. The paper, 'Bounds and Identification of Joint Probabilities of Potential Outcomes and Observed Variables under Monotonicity Assumptions' by Naoya Hashimoto, Yuta Kawakami, and Jin Tian, focuses on settings with a discrete treatment and a discrete ordinal outcome, which are common in fields like medicine and policy analysis. The core contribution is a set of new, structured families of monotonicity assumptions. These assumptions, which posit that treatment will not make the outcome worse for any individual, turn the task of bounding unobservable joint probabilities into a tractable linear programming problem. This formulation lets researchers compute the tightest possible bounds on those joint probabilities, and on the causal effects derived from them, given the observed data and the assumed constraints.
Furthermore, the authors introduce a specific, targeted monotonicity assumption designed not just for bounding but for full point identification, under which the exact causal quantity can be pinpointed rather than merely bracketed. The methodology is demonstrated through numerical experiments and applied to real-world datasets, validating its practical utility. The work is significant because it offers a more systematic, and potentially tighter, way to reason about causality when ideal experimental data is unavailable. It strengthens the toolkit for building robust causal AI systems, from drug efficacy models to economic policy simulations, moving beyond mere correlation toward defensible claims about cause and effect.
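To make the bounding-versus-identification distinction concrete, here is a minimal sketch for the simplest case: a binary treatment and binary outcome, with marginals known from a randomized experiment. It is not the paper's implementation; the function name and the brute-force grid search (a stand-in for solving the actual linear program) are illustrative. The quantity bounded is the "probability of benefit" P(Y(1)=1, Y(0)=0); imposing the classic monotonicity assumption Y(1) >= Y(0) collapses the interval to a single point.

```python
# Sketch (not the paper's code): bound P(Y(1)=1, Y(0)=0) for binary
# treatment/outcome by searching over all joint distributions of
# (Y(0), Y(1)) consistent with the observed marginals. The grid search
# stands in for the linear program described in the paper.

def benefit_bounds(p1, p0, monotone=False, steps=10_000):
    """p1 = P(Y=1 | do(X=1)), p0 = P(Y=1 | do(X=0)).
    Returns (lower, upper) bounds on P(Y(1)=1, Y(0)=0).
    monotone=True imposes Y(1) >= Y(0) for every unit, i.e. the
    "harm" probability P(Y(0)=1, Y(1)=0) must be zero."""
    feasible = []
    for i in range(steps + 1):
        t = i / steps              # candidate q01 = P(Y(0)=0, Y(1)=1), the objective
        q11 = p1 - t               # P(Y(0)=1, Y(1)=1), forced by the p1 marginal
        q10 = p0 - q11             # P(Y(0)=1, Y(1)=0), forced by the p0 marginal
        q00 = 1.0 - t - q11 - q10  # remaining mass
        if min(t, q11, q10, q00) < -1e-9:
            continue               # violates the probability simplex
        if monotone and q10 > 1e-9:
            continue               # monotonicity rules out harm
        feasible.append(t)
    return min(feasible), max(feasible)

# Without assumptions, only an interval is recoverable:
lo, hi = benefit_bounds(0.7, 0.4)
print(lo, hi)        # interval [max(0, p1-p0), min(p1, 1-p0)] = [0.3, 0.6]

# Under monotonicity the interval collapses: point identification.
lo_m, hi_m = benefit_bounds(0.7, 0.4, monotone=True)
print(lo_m, hi_m)    # both equal p1 - p0 = 0.3
```

The same logic scales to the paper's setting of general discrete ordinal outcomes, where the joint distribution has more cells and the assumptions zero out (or order) larger sets of them, which is why a linear-programming solver replaces enumeration there.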
- Proposes new families of monotonicity assumptions to constrain causal inference problems with discrete variables.
- Formulates the bounding of joint probabilities as a linear programming problem, enabling computation of tight theoretical limits.
- Introduces a specific assumption to achieve full point identification, moving from a range of possible effects to a single estimate.
Why It Matters
Provides a more rigorous mathematical foundation for causal AI, crucial for trustworthy applications in healthcare, economics, and policy.