
Blockwise Self-Attention for Long Document Understanding

About

We present BlockBERT, a lightweight and efficient BERT model for better modeling long-distance dependencies. Our model extends BERT by introducing sparse block structures into the attention matrix to reduce both memory consumption and training/inference time, which also enables attention heads to capture either short- or long-range contextual information. We conduct experiments on language model pre-training and several benchmark question answering datasets with various paragraph lengths. BlockBERT uses 18.7-36.1% less memory and 12.0-25.1% less time to learn the model. During testing, BlockBERT saves 27.8% inference time, while having comparable and sometimes better prediction accuracy, compared to an advanced BERT-based model, RoBERTa.

Jiezhong Qiu, Hao Ma, Omer Levy, Scott Wen-tau Yih, Sinong Wang, Jie Tang • 2019
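The abstract describes replacing the dense attention matrix with sparse block structures so that each query block attends to only one key/value block per head, which is where the memory and time savings come from. Below is a minimal sketch of that idea; the function name `blockwise_attention`, the per-head block permutation argument `pi`, and the tensor shapes are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of blockwise (block-sparse) self-attention.
# Assumption: the sequence splits into `num_blocks` equal blocks and each
# query block i attends only to key/value block pi[i].
import torch
import torch.nn.functional as F

def blockwise_attention(q, k, v, num_blocks, pi):
    """q, k, v: (batch, seq_len, dim); pi: permutation of range(num_blocks)."""
    batch, seq_len, dim = q.shape
    block_len = seq_len // num_blocks  # assumes seq_len divisible by num_blocks
    # Reshape into blocks: (batch, num_blocks, block_len, dim)
    qb = q.view(batch, num_blocks, block_len, dim)
    kb = k.view(batch, num_blocks, block_len, dim)[:, pi]  # select block pi[i] for query block i
    vb = v.view(batch, num_blocks, block_len, dim)[:, pi]
    # Attention scores are (batch, num_blocks, block_len, block_len) rather than
    # the dense (batch, seq_len, seq_len): memory scales down by num_blocks.
    scores = torch.matmul(qb, kb.transpose(-1, -2)) / dim ** 0.5
    probs = F.softmax(scores, dim=-1)
    return torch.matmul(probs, vb).reshape(batch, seq_len, dim)

# Identity permutation gives purely local (short-range) heads;
# e.g. pi = [1, 0] makes each block attend to the other block (long-range).
q = k = v = torch.randn(1, 8, 16)
out = blockwise_attention(q, k, v, num_blocks=2, pi=[0, 1])
print(out.shape)  # torch.Size([1, 8, 16])
```

Varying the permutation across heads is what lets different heads capture either short- or long-range contextual information, as the abstract notes.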

Related benchmarks

Task                           | Dataset      | Result                      | Rank
Regression                     | Stability    | Spearman Correlation 0.6509 | 12
Secondary Structure Prediction | CASP 12      | F1 Score 62.28              | 6
Secondary Structure Prediction | TS115        | F1 Score 64.72              | 6
Fluorescence prediction        | Fluorescence | Spearman's rho (ρ) 0.6998   | 6
Secondary Structure Prediction | CB513        | F1 Score 62.02              | 6
