
Hierarchical Transformers for Long Document Classification

About

BERT, which stands for Bidirectional Encoder Representations from Transformers, is a recently introduced language representation model based on the transfer learning paradigm. We extend its fine-tuning procedure to address one of its major limitations: its inapplicability to inputs longer than a few hundred words, such as transcripts of human call conversations. Our method is conceptually simple. We segment the input into smaller chunks and feed each of them into the base model. Then, we propagate each output through a single recurrent layer, or another transformer, followed by a softmax activation. We obtain the final classification decision after the last segment has been consumed. We show that both BERT extensions are quick to fine-tune and converge after as little as 1 epoch of training on a small, domain-specific data set. We successfully apply them in three different tasks involving customer call satisfaction prediction and topic classification, and obtain a significant improvement over the baseline models in two of them.
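The chunk-then-aggregate procedure described in the abstract can be sketched as a small PyTorch module. This is a minimal illustration, not the authors' implementation: a mean-pooled embedding layer stands in for the pretrained BERT encoder so the example stays self-contained, and the class name, dimensions, and chunk size are illustrative assumptions.

```python
# Hypothetical sketch of the hierarchical approach described above.
# A tiny stand-in encoder replaces BERT so the example is self-contained;
# in practice each chunk would be encoded with a pretrained BERT model.
import torch
import torch.nn as nn

class HierarchicalClassifier(nn.Module):
    def __init__(self, vocab_size=1000, hidden=64, num_classes=2):
        super().__init__()
        # Stand-in for BERT: token embeddings, mean-pooled per chunk.
        self.embed = nn.Embedding(vocab_size, hidden)
        # Recurrent layer over the per-chunk representations; the paper's
        # other variant uses a small transformer here instead.
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, token_ids, chunk_size=128):
        batch, seq_len = token_ids.shape
        # Segment the long input into fixed-size chunks (pad the tail).
        n_chunks = (seq_len + chunk_size - 1) // chunk_size
        pad = n_chunks * chunk_size - seq_len
        if pad:
            token_ids = nn.functional.pad(token_ids, (0, pad))
        chunks = token_ids.view(batch, n_chunks, chunk_size)
        # Encode each chunk independently: (batch, n_chunks, hidden).
        chunk_repr = self.embed(chunks).mean(dim=2)
        # Propagate chunk representations through the recurrent layer.
        out, _ = self.rnn(chunk_repr)
        # Decide from the state after the last segment; softmax over classes.
        return torch.softmax(self.classifier(out[:, -1]), dim=-1)

model = HierarchicalClassifier()
probs = model(torch.randint(0, 1000, (2, 300)))  # two documents of 300 tokens
```

Because each chunk is encoded independently before aggregation, the per-chunk cost stays within the base model's input limit regardless of document length.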

Raghavendra Pappagari, Piotr Żelasko, Jesús Villalba, Yishay Carmiel, Najim Dehak • 2019

Related benchmarks

Task | Dataset | Metric | Result | Rank
Document Classification | HP (test) | Accuracy | 89.54 | 10
Document Classification | EURLEX57K (test) | Micro F1 | 67.57 | 8
Document Classification | LUN (test) | Accuracy | 36.97 | 7
Document Classification | EURLEX57K Inverted (test) | Micro F1 | 67.31 | 7
Long Document Classification | LDC benchmark | Overall Performance (HYP) | 86.2 | 7
