Research & Papers

Deep-testing: the case of dependence detection

A neural-network-based test detects hidden relationships in data better than 19 established statistical methods...

Deep Dive

In a new paper on arXiv, statisticians Gery Geenens, Pierre Lafaye de Micheaux, and Ivan Muyun Zou introduce deep-testing, a method that applies deep learning to hypothesis testing. The core idea: simulate data under both the null and the alternative hypothesis, train a neural network to classify which regime generated each sample, and use the learned classification map in place of a hand-crafted test statistic. The network's discriminative power then translates directly into the power of the resulting test.
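
To make the recipe concrete, here is a minimal toy sketch of the idea in Python. It is not the authors' architecture: the dataset featurisation (a 2D histogram of rank-transformed pairs), the simulated alternative (noisy quadratic dependence), and the small scikit-learn MLP are all illustrative assumptions. The point is only that the classifier's output score plays the role of the test statistic, with its rejection threshold calibrated on fresh null simulations.

```python
# Toy sketch of the deep-testing idea for independence testing.
# Featurisation, alternative, and network size are illustrative, not the paper's.
import numpy as np
from scipy.stats import rankdata
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
N_PAIRS = 200      # sample size of each simulated dataset
GRID = 8           # resolution of the rank histogram used as input features

def featurise(x, y):
    """Map a bivariate sample to a fixed-length vector: a normalised
    2D histogram of the rank-transformed (copula-like) pairs."""
    u = rankdata(x) / (len(x) + 1)
    v = rankdata(y) / (len(y) + 1)
    h, _, _ = np.histogram2d(u, v, bins=GRID, range=[[0, 1], [0, 1]])
    return (h / h.sum()).ravel()

def simulate(dependent):
    """Draw one dataset under H0 (independence) or a simple alternative."""
    x = rng.standard_normal(N_PAIRS)
    if dependent:
        # one illustrative alternative: noisy quadratic dependence
        y = x**2 + 0.5 * rng.standard_normal(N_PAIRS)
    else:
        y = rng.standard_normal(N_PAIRS)
    return featurise(x, y)

# Training set of simulated datasets labelled H0 (0) vs H1 (1).
X_train = np.array([simulate(dependent=i % 2 == 1) for i in range(2000)])
y_train = np.array([i % 2 for i in range(2000)])

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# The learned classifier score is the test statistic; calibrate the 5%-level
# rejection threshold on fresh null simulations.
null_scores = clf.predict_proba(
    np.array([simulate(dependent=False) for _ in range(1000)]))[:, 1]
threshold = np.quantile(null_scores, 0.95)

# Apply the test to one "observed" dependent dataset.
obs_score = clf.predict_proba(simulate(dependent=True).reshape(1, -1))[0, 1]
print(f"score={obs_score:.3f}, reject H0: {obs_score > threshold}")
```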

As a proof of concept, the authors apply deep-testing to independence testing, one of the most fundamental problems in statistics. In a large-scale simulation study, their test outperforms 19 existing methods across a wide range of complex dependence structures, including non-linear and high-dimensional relationships. The results suggest that deep learning can make classical inference markedly more sensitive and flexible at detecting patterns in data.

Key Points
  • Deep-testing uses a neural network to learn a classification map for hypothesis testing, replacing traditional test statistics.
  • The method is demonstrated on independence testing, achieving the highest overall power against 19 competing methods.
  • It excels at detecting complex dependence structures, including non-linear and high-dimensional relationships.

Why It Matters

Deep-testing could transform statistical inference by making hypothesis tests more powerful and adaptable to complex real-world data.