Pretraining with Artificial Language: Studying Transferable Knowledge in Language Models

About

We investigate what kind of structural knowledge learned by neural network encoders is transferable to processing natural language. We design artificial languages with structural properties that mimic natural language, pretrain encoders on the data, and measure how well the encoders perform on downstream natural language tasks. Our experimental results show that pretraining with an artificial language that has a nesting dependency structure provides some knowledge transferable to natural language. A follow-up probing analysis indicates that success in the transfer is related to the amount of contextual information the encoder captures, and that what is transferred is knowledge of the position-aware context dependence of language. Our results provide insights into how neural network encoders process human languages and into the source of the cross-lingual transferability of recent multilingual language models.

Ryokan Ri, Yoshimasa Tsuruoka • 2022
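To make the setup concrete, below is a minimal sketch of how pretraining data with a nesting dependency structure might be generated. It is not the authors' grammar: the vocabulary size, the token naming scheme (word `i` paired with word `i + VOCAB_SIZE`), and the sampling probabilities are all illustrative assumptions. Each sentence pairs "head" and "dependent" tokens so that dependencies nest like balanced brackets, which is the structural property the paper finds transferable.

```python
import random

# A minimal sketch (not the authors' exact grammar): sample sentences in
# which head and dependent tokens pair up in a strictly nested,
# bracket-like dependency structure, suitable for encoder pretraining.
# Vocabulary size and sentence lengths are illustrative assumptions.

VOCAB_SIZE = 1000  # assumed; word i is matched by word i + VOCAB_SIZE


def sample_nested_sentence(max_pairs=10):
    """Return a token list whose dependencies nest like balanced brackets."""
    n_pairs = random.randint(1, max_pairs)
    tokens = []
    stack = []      # open dependencies waiting to be closed
    remaining = n_pairs
    while remaining > 0 or stack:
        # Either open a new dependency, or close the innermost open one.
        if remaining > 0 and (not stack or random.random() < 0.5):
            head = random.randrange(VOCAB_SIZE)
            tokens.append(f"w{head}")
            stack.append(head)
            remaining -= 1
        else:
            head = stack.pop()
            tokens.append(f"w{head + VOCAB_SIZE}")  # matching dependent token
    return tokens


if __name__ == "__main__":
    random.seed(0)
    for _ in range(3):
        print(" ".join(sample_nested_sentence()))
```

Sentences produced this way could then be used to pretrain an encoder (e.g., with a masked-language-modeling objective) before fine-tuning on natural language tasks.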

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Machine Reading Comprehension | SQuAD 1.1 (test) | EM | 50.3 | 46 |
| Semantic Parsing | mTOP (test) | -- | -- | 17 |
| Code Translation | Code Trans. (test) | Exact Match (EM) | 58.8 | 8 |
| Pre-training Evaluation | Aggregated Downstream Tasks (test) | Average EM | 50.2 | 8 |
| Retrosynthesis | USPTO Retrosynthesis 50K (test) | EM | 40.4 | 8 |
| Semantic Parsing | WEBQSP (test) | EM | 58.5 | 8 |
| Summarization | CNNDM 10K (test) | ROUGE-1 | 27.1 | 8 |
