Causal Confusion in Imitation Learning
About
Behavioral cloning reduces policy learning to supervised learning by training a discriminative model to predict expert actions given observations. Such discriminative models are non-causal: the training procedure is unaware of the causal structure of the interaction between the expert and the environment. We point out that ignoring causality is particularly damaging because of the distributional shift in imitation learning. In particular, it leads to a counter-intuitive "causal misidentification" phenomenon: access to more information can yield worse performance. We investigate how this problem arises and propose a solution to combat it through targeted interventions, either environment interaction or expert queries, to determine the correct causal model. We show that causal misidentification occurs in several benchmark control domains as well as realistic driving settings, and validate our solution against DAgger and other baselines and ablations.
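To make the failure mode concrete, here is a minimal, self-contained sketch (not the paper's released code) of behavioral cloning on synthetic data. The logged observation is augmented with the expert's previous action, a nuisance variable standing in for the paper's brake-light example; the toy environment and all names (`train`, `plain`, `confused`) are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
OBS_DIM, N_ACTIONS, T = 4, 3, 8000

# A slowly drifting state, so consecutive expert actions are highly correlated
# and the previous action becomes a strong (but non-causal) predictor.
obs = np.cumsum(rng.normal(scale=0.1, size=(T, OBS_DIM)), axis=0).astype(np.float32)
w = rng.normal(size=(OBS_DIM, N_ACTIONS))
expert = (obs @ w).argmax(axis=1)                       # true cause: obs only
prev = np.roll(expert, 1).astype(np.float32)[:, None]   # nuisance: previous action

def train(X, y, epochs=300):
    """Plain behavioral cloning: supervised learning on (observation, action)."""
    net = nn.Sequential(nn.Linear(X.shape[1], 32), nn.ReLU(),
                        nn.Linear(32, N_ACTIONS))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    X, y = torch.from_numpy(X), torch.from_numpy(y)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(net(X), y).backward()
        opt.step()
    return net

plain = train(obs, expert)                          # sees the true cause only
confused = train(np.hstack([obs, prev]), expert)    # also sees the nuisance

# Closed-loop evaluation: the confused policy must now consume its *own*
# previous action, so any early mistake shifts its input distribution.
a_prev, hits = 0.0, 0
with torch.no_grad():
    for t in range(T):
        x = torch.from_numpy(np.append(obs[t], a_prev).astype(np.float32))
        a = confused(x).argmax().item()
        hits += int(a == expert[t])
        a_prev = float(a)
    plain_acc = (plain(torch.from_numpy(obs)).argmax(1).numpy() == expert).mean()
print(f"confused policy, closed loop: {hits / T:.3f}")
print(f"plain policy (no nuisance):   {plain_acc:.3f}")
```

Because the nuisance is highly predictive at training time, the cloned policy can latch onto it even though the extra feature carries no causal information; once the policy consumes its own past action at test time, the train/test mismatch behind causal misidentification appears. The paper's remedy, targeted interventions via environment rollouts or expert queries, amounts to scoring competing causal hypotheses (here, `plain` vs. `confused`) by their behavior under the shifted distribution rather than by training accuracy.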
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Offline Reinforcement Learning | Confounded Atari DQN Replay (test) | Alien Score | 954.1 | 8 |
| Offline Imitation Learning | Atari 27 original environments (test) | Alien Return | 1.05e+3 | 8 |
| Straight | CARLA 150 expert demonstrations, daytime (test) | Success Rate (%) | 75 | 4 |
| Navigation | CARLA 150 expert demonstrations, daytime (test) | Success Rate (%) | 16.9 | 4 |
| Navigation w/ dynamic obstacles | CARLA 150 expert demonstrations, daytime (test) | Success Rate (%) | 18 | 4 |
| One turn | CARLA 150 expert demonstrations, daytime (test) | Success Rate (%) | 43 | 4 |