
DocBERT: BERT for Document Classification

About

We present, to our knowledge, the first application of BERT to document classification. A few characteristics of the task might lead one to think that BERT is not the most appropriate model: syntactic structures matter less for content categories, documents can often be longer than typical BERT input, and documents often have multiple labels. Nevertheless, we show that a straightforward classification model using BERT is able to achieve the state of the art across four popular datasets. To address the computational expense associated with BERT inference, we distill knowledge from BERT-large to small bidirectional LSTMs, reaching BERT-base parity on multiple datasets using 30x fewer parameters. The primary contribution of our paper is improved baselines that can provide the foundation for future work.
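The distillation step described above transfers the teacher's predictive distribution to a much smaller student. As a minimal sketch of one common distillation objective (a temperature-softened KL term blended with hard-label cross-entropy, following Hinton-style distillation; the paper's exact objective may differ, e.g. an MSE on logits), with all function names and hyperparameters here chosen for illustration:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T gives softer distributions.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft-target term: KL(teacher || student) at temperature T,
    # scaled by T^2 so gradients stay comparable across temperatures.
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    kl = np.sum(p_teacher * (np.log(p_teacher + 1e-12)
                             - np.log(p_student + 1e-12)), axis=-1)
    # Hard-label term: ordinary cross-entropy against ground truth.
    p = softmax(student_logits)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12)
    return np.mean(alpha * (T ** 2) * kl + (1 - alpha) * ce)
```

In training, `teacher_logits` would come from a frozen fine-tuned BERT-large and `student_logits` from the small BiLSTM; only the student is updated.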

Ashutosh Adhikari, Achyudh Ram, Raphael Tang, Jimmy Lin • 2019

Related benchmarks

| Task                          | Dataset            | Metric                | Result | Rank |
|-------------------------------|--------------------|-----------------------|--------|------|
| Text Classification           | IMDB (test)        | CA                    | 93.6   | 79   |
| Document Classification       | Insurance          | AUC                   | 0.9195 | 5    |
| Text Classification           | Twitter Uni (test) | Macro F1              | 52.1   | 5    |
| Text Classification           | ArXiv 10 (test)    | Macro F1              | 75.2   | 5    |
| Document Image Classification | Insurance          | Training Time (hours) | 6.3    | 3    |
