Learning DAGs from Data with Few Root Causes
About
We present a novel perspective and algorithm for learning directed acyclic graphs (DAGs) from data generated by a linear structural equation model (SEM). First, we show that a linear SEM can be viewed as a linear transform that, in prior work, computes the data from a dense input vector of random-valued root causes (as we will call them) associated with the nodes. Instead, we consider the case of (approximately) few root causes and also introduce noise in the measurement of the data. Intuitively, this means that the DAG data is produced by few data-generating events whose effect percolates through the DAG. We prove identifiability in this new setting and show that the true DAG is the global minimizer of the $L^0$-norm of the vector of root causes. For data with few root causes, with and without noise, we show superior performance compared to prior DAG learning methods.
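The generative model described above can be illustrated with a small sketch. This is not the paper's implementation; it is a minimal NumPy simulation under illustrative assumptions (an upper-triangular weighted adjacency `A` to guarantee acyclicity, a sparse root-cause matrix `C`, and Gaussian measurement noise), showing how the linear SEM acts as a linear transform `X = C(I - A)^{-1}` and how few active root causes percolate through the DAG:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 100          # number of nodes, number of samples
p_active = 0.1         # probability a root cause is active ("few root causes")

# Random strictly upper-triangular weighted adjacency => guaranteed acyclic.
A = np.triu(rng.uniform(0.5, 1.0, (d, d)), k=1) * (rng.random((d, d)) < 0.5)

# Sparse root causes: most entries are zero; the few nonzeros are the
# data-generating events associated with the nodes.
C = rng.uniform(0.5, 1.0, (n, d)) * (rng.random((n, d)) < p_active)

# The linear SEM as a linear transform: root-cause effects percolate
# through the DAG. I - A is unit upper-triangular, hence invertible.
X = C @ np.linalg.inv(np.eye(d) - A)

# Optional measurement noise, as in the noisy setting discussed above.
X_noisy = X + 0.01 * rng.standard_normal((n, d))

# Given the true A, the transform is inverted exactly in the noiseless case:
C_hat = X @ (np.eye(d) - A)
assert np.allclose(C_hat, C)
```

The last two lines show why sparsity of the root causes is a usable signal: for the true DAG, `X(I - A)` recovers the sparse `C`, which motivates scoring candidate DAGs by the sparsity of the implied root causes.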
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Causal Discovery | Synthetic DAGs | TPR | 1 | 125 |
| DAG learning | Synthetic (test) | SID | 51 | 101 |
| DAG learning | Synthetic DAGs (100 nodes, 400 edges) v1 | SHD | 60 | 51 |
| Causal Discovery | Synthetic DAG data | -- | -- | 40 |
| Causal Discovery | Synthetic DAG data (test) | -- | -- | 40 |
| DAG learning | Synthetic DAG data | Runtime (s) | 101.8 | 26 |
| Causal Discovery | Synthetic Data | -- | -- | 21 |
| Causal Discovery | Synthetic DAG Datasets | Runtime (s) | 101.8 | 14 |
| DAG learning | Synthetic Large DAGs | Runtime (s) | 21 | 12 |
| Learning DAGs | Synthetic Larger DAGs v1 (test) | SHD | 22 | 12 |