
Learning explanations that are hard to vary

About

In this paper, we investigate the principle that "good explanations are hard to vary" in the context of deep learning. We show that averaging gradients across examples -- akin to a logical OR of patterns -- can favor memorization and "patchwork" solutions that sew together different strategies, instead of identifying invariances. To inspect this, we first formalize a notion of consistency for minima of the loss surface, which measures to what extent a minimum appears only when examples are pooled. We then propose and experimentally validate a simple alternative algorithm based on a logical AND, that focuses on invariances and prevents memorization in a set of real-world tasks. Finally, using a synthetic dataset with a clear distinction between invariant and spurious mechanisms, we dissect learning signals and compare this approach to well-established regularizers.
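The logical-AND idea in the abstract can be sketched as a gradient mask: instead of averaging per-example (or per-environment) gradients outright, keep only the components whose signs agree across all of them. This is a minimal illustration of that masking step, not the authors' implementation; the function name `and_mask`, the agreement threshold, and the array shapes are assumptions.

```python
import numpy as np

def and_mask(env_grads, threshold=1.0):
    """Mask gradient components whose signs disagree across environments.

    env_grads: array of shape (n_envs, n_params), one gradient per
    environment (or example). threshold=1.0 keeps only components
    where every environment agrees on the sign (a strict logical AND);
    lower thresholds relax the agreement requirement.
    """
    signs = np.sign(env_grads)
    # Mean sign has magnitude 1.0 only under full agreement.
    agreement = np.abs(signs.mean(axis=0))
    mask = (agreement >= threshold).astype(env_grads.dtype)
    # Average gradient, with disagreeing components zeroed out.
    return mask * env_grads.mean(axis=0)

# Two environments: they agree on the sign of the first component
# but disagree on the second, so the second is zeroed out.
grads = np.array([[1.0, 2.0],
                  [0.5, -3.0]])
print(and_mask(grads))  # -> [0.75 0.  ]
```

Under plain averaging the disagreeing component would still receive a nonzero update (here -0.5), which is the "logical OR" failure mode the abstract contrasts against.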

Giambattista Parascandolo, Alexander Neitz, Antonio Orvieto, Luigi Gresele, Bernhard Schölkopf • 2020

Related benchmarks

Task                              Dataset               Result                     Rank
Cross-user Activity Recognition   DSADS (cross-user)    Accuracy (ABC->D): 82.36   7
Cross-user Activity Recognition   PAMAP2                Acc (AB->C): 58.75         7
