
Disentangled Sequence to Sequence Learning for Compositional Generalization

About

There is mounting evidence that existing neural network models, in particular the very popular sequence-to-sequence architecture, struggle to systematically generalize to unseen compositions of seen components. We demonstrate that one of the reasons hindering compositional generalization relates to representations being entangled. We propose an extension to sequence-to-sequence models which encourages disentanglement by adaptively re-encoding (at each time step) the source input. Specifically, we condition the source representations on the newly decoded target context which makes it easier for the encoder to exploit specialized information for each prediction rather than capturing it all in a single forward pass. Experimental results on semantic parsing and machine translation empirically show that our proposal delivers more disentangled representations and better generalization.
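The core idea above, re-encoding the source at every decoding step conditioned on the target prefix, can be illustrated with a toy sketch. This is not the authors' implementation: the gating scheme, the `reencode` function, and the tanh target-state update are simplified stand-ins chosen only to show that the source representations change from step to step once they depend on the decoded context.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 8                       # source length, hidden size
src = rng.normal(size=(n, d))     # source token representations (one forward pass)
W = rng.normal(size=(d, d)) * 0.1 # toy bilinear scoring weights

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def reencode(src, tgt_ctx):
    """Condition source representations on the current target context.

    Each source token is scored against the target context and receives
    a gated injection of that context, yielding a step-specific encoding.
    """
    scores = src @ (W @ tgt_ctx)        # (n,) relevance of each token
    gate = softmax(scores)
    return src + np.outer(gate, tgt_ctx)

tgt_ctx = np.zeros(d)                   # empty target context before decoding
states = []
for t in range(3):                      # three decoding steps
    enc = reencode(src, tgt_ctx)        # fresh source encoding at each step
    pooled = softmax(enc @ tgt_ctx) @ enc  # attention pooling over the encoding
    tgt_ctx = np.tanh(pooled + tgt_ctx)    # toy target-side state update
    states.append(enc)

# With a zero context the first encoding equals the plain source encoding;
# later encodings differ because they are conditioned on decoded context.
print(np.allclose(states[0], src))        # True
print(np.allclose(states[0], states[1]))  # False
```

Contrast this with a standard sequence-to-sequence model, where `src` is computed once and reused unchanged at every decoding step, forcing a single encoding to carry all the information needed for every prediction.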

Hao Zheng, Mirella Lapata · 2021

Related benchmarks

Task                | Dataset                                        | Metric                    | Result | Rank
Semantic Parsing    | COGS (generalization)                          | Accuracy (Generalization) | 87.6   | 25
Semantic Parsing    | COGS (test)                                    | Exact Match Accuracy      | 87.6   | 16
Machine Translation | CoGnition compositional generalization (test)  | Inst. Error Rate          | 19.7   | 15
Machine Translation | CoGnition ind (test)                           | BLEU Score                | 70.8   | 5
Machine Translation | ReaCT IWSLT 2014 (test)                        | BLEU                      | 36.1   | 4
Semantic Parsing    | CFQ Maximum Compound Divergence (MCD)          | MCD1                      | 78.3   | 4
Machine Translation | ReaCT cg (test)                                | BLEU                      | 11.8   | 4

Other info

Code
