The Cascade Transformer: an Application for Efficient Answer Sentence Selection

About

Large transformer-based language models have been shown to be very effective in many classification tasks. However, their computational complexity prevents their use in applications requiring the classification of a large set of candidates. While previous works have investigated approaches to reduce model size, relatively little attention has been paid to techniques to improve batch throughput during inference. In this paper, we introduce the Cascade Transformer, a simple yet effective technique to adapt transformer-based models into a cascade of rankers. Each ranker is used to prune a subset of candidates in a batch, thus dramatically increasing throughput at inference time. Partial encodings from the transformer model are shared among rerankers, providing further speed-up. When compared to a state-of-the-art transformer model, our approach reduces computation by 37% with almost no impact on accuracy, as measured on two English Question Answering datasets.
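To make the pruning mechanism concrete, below is a minimal PyTorch sketch of the cascade idea described in the abstract: small ranker heads attached at intermediate encoder layers score a batch of candidates and drop the lowest-scoring fraction, so the partial encodings already computed are reused and only survivors reach the deeper, more expensive layers. The layer count, checkpoint positions, head design, and pruning rate `alpha` here are illustrative assumptions, not the paper's actual configuration.

```python
# Sketch of cascade pruning with shared partial encodings.
# Hyperparameters (n_layers, checkpoints, alpha) are hypothetical.
import torch
import torch.nn as nn

class CascadeRanker(nn.Module):
    def __init__(self, d_model=128, n_layers=6, checkpoints=(2, 4), alpha=0.3):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            for _ in range(n_layers)
        )
        self.checkpoints = set(checkpoints)  # layers after which we prune
        self.heads = nn.ModuleDict(          # one small ranker head per stage
            {str(i): nn.Linear(d_model, 1) for i in (*checkpoints, n_layers)}
        )
        self.alpha = alpha                   # fraction of candidates dropped

    def forward(self, x):
        # x: (num_candidates, seq_len, d_model); track original indices
        keep = torch.arange(x.size(0))
        for i, layer in enumerate(self.layers, start=1):
            x = layer(x)                     # partial encodings are reused:
            if i in self.checkpoints:        # no re-encoding from scratch
                scores = self.heads[str(i)](x[:, 0]).squeeze(-1)  # [CLS]-style
                n_keep = max(1, int(x.size(0) * (1 - self.alpha)))
                top = scores.topk(n_keep).indices
                x, keep = x[top], keep[top]  # only survivors continue
        final = self.heads[str(len(self.layers))](x[:, 0]).squeeze(-1)
        return keep, final                   # surviving indices + final scores

# Usage: score 32 candidate sentences for one question.
model = CascadeRanker()
candidates = torch.randn(32, 16, 128)
idx, scores = model(candidates)
print(idx.shape, scores.shape)  # 15 survivors after two pruning stages
```

With alpha = 0.3 and checkpoints after layers 2 and 4, the batch shrinks from 32 to 22 to 15 candidates, so the last two layers process less than half the original batch; this is the source of the throughput gain the abstract reports.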

Luca Soldaini, Alessandro Moschitti • 2020

Related benchmarks

Task                        Dataset    Result    Rank
Answer Sentence Selection   WikiQA     --        36
Answer Sentence Selection   TREC-QA    --        24
