Discovering a system’s causal relationships and structure is a crucial yet challenging problem in scientific disciplines ranging from medicine and biology to economics. Researchers typically adopt the graphical formalism of causal Bayesian networks (CBNs) and search for the graph structure that best describes these relationships, but such unsupervised score-based approaches can quickly incur prohibitive computational burdens.
A research team from DeepMind, Mila – University of Montreal and Google Brain challenges this conventional approach to causal induction in their new paper Learning to Induce Causal Structure, proposing a neural network architecture that learns to map observational and/or interventional data to the underlying graph structure via supervised training on synthetic graphs. The team’s proposed Causal Structure Induction via Attention (CSIvA) method effectively treats causal induction as a black-box prediction problem and generalizes favourably to new synthetic and naturalistic graphs.
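To illustrate the supervised, black-box framing described above, here is a minimal sketch of the training setup: synthetic graphs are sampled, data are generated from them, and a model is trained to predict the ground-truth structure directly from the data. This is a toy illustration only. It uses linear-Gaussian data and a simple logistic edge classifier over correlation features rather than the paper’s attention-based CSIvA architecture, and all function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_dag(d, p=0.3):
    """Random upper-triangular adjacency matrix (a DAG in topological order)."""
    A = (rng.random((d, d)) < p).astype(float)
    return np.triu(A, k=1)

def sample_data(A, n=200):
    """Linear-Gaussian samples from the DAG: X_j = sum_i W_ij X_i + noise."""
    d = A.shape[0]
    W = A * rng.uniform(0.5, 1.5, size=A.shape)
    X = np.zeros((n, d))
    for j in range(d):  # columns are filled in topological order
        X[:, j] = X @ W[:, j] + rng.normal(size=n)
    return X

def edge_features(X):
    """One feature per ordered node pair (here just the correlation)."""
    C = np.corrcoef(X, rowvar=False)
    idx = [(i, j) for i in range(C.shape[0]) for j in range(C.shape[0]) if i != j]
    return np.array([[C[i, j]] for i, j in idx]), idx

# Supervised training: synthetic graphs provide ground-truth edge labels for free.
d, w, b, lr = 5, np.zeros(1), 0.0, 0.1
for step in range(2000):
    A = sample_dag(d)
    feats, idx = edge_features(sample_data(A))
    labels = np.array([A[i, j] for i, j in idx])
    probs = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    grad = probs - labels                      # gradient of the cross-entropy loss
    w -= lr * feats.T @ grad / len(grad)
    b -= lr * grad.mean()

# At test time the trained model maps a raw dataset directly to predicted edges.
A_test = sample_dag(d)
feats, idx = edge_features(sample_data(A_test))
pred = (1.0 / (1.0 + np.exp(-(feats @ w + b))) > 0.5).astype(float)
```

Note that a symmetric correlation feature cannot recover edge direction; the point of the sketch is only the supervised pipeline, in which the actual method would replace the hand-crafted features and logistic classifier with an attention-based network that reads the dataset itself.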
The team summarizes its main contributions as follows: