
SumRank: Aligning Summarization Models for Long-Document Listwise Reranking

About

Large Language Models (LLMs) have demonstrated superior performance in the listwise passage reranking task. However, directly applying them to rank long-form documents introduces both effectiveness and efficiency issues due to the substantially increased context length. To address this challenge, we propose SumRank, a pointwise summarization model aligned with downstream listwise reranking, which compresses long-form documents into concise rank-aligned summaries before the final listwise reranking stage. To train SumRank, we introduce a three-stage pipeline comprising cold-start Supervised Fine-Tuning (SFT), specialized RL data construction, and rank-driven alignment via Reinforcement Learning. This paradigm aligns SumRank with downstream ranking objectives so that relevance signals are preserved. We conduct extensive experiments on five benchmark datasets from the TREC Deep Learning tracks (TREC DL 19-23). Results show that our lightweight SumRank model achieves state-of-the-art (SOTA) ranking performance while significantly improving efficiency by reducing both summarization overhead and reranking complexity.
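The two-stage inference pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and the summarization and reranking models are replaced with simple term-overlap heuristics so the control flow (pointwise compression of each document, then a single listwise pass over the short summaries) stands on its own.

```python
# Hypothetical sketch of the summarize-then-rerank pipeline.
# pointwise_summarize() stands in for the SumRank summarization model;
# listwise_rerank() stands in for the downstream listwise LLM reranker.
# Both are approximated here with query-term overlap for illustration.

def pointwise_summarize(query: str, document: str, max_sentences: int = 2) -> str:
    """Keep the sentences sharing the most terms with the query
    (stand-in for the learned pointwise summarizer)."""
    q_terms = set(query.lower().split())
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    scored = sorted(sentences,
                    key=lambda s: len(q_terms & set(s.lower().split())),
                    reverse=True)
    return ". ".join(scored[:max_sentences])

def listwise_rerank(query: str, summaries: list[str]) -> list[int]:
    """Return document indices ordered by summary relevance
    (stand-in for the listwise LLM reranker)."""
    q_terms = set(query.lower().split())
    return sorted(range(len(summaries)),
                  key=lambda i: len(q_terms & set(summaries[i].lower().split())),
                  reverse=True)

def rank_long_documents(query: str, documents: list[str]) -> list[int]:
    # Stage 1: compress each long document independently (pointwise).
    summaries = [pointwise_summarize(query, d) for d in documents]
    # Stage 2: a single listwise pass over the much shorter summaries,
    # which is where the context-length savings come from.
    return listwise_rerank(query, summaries)
```

The key efficiency point is that the listwise model never sees the full documents; its context grows with the summary length rather than the document length.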

Jincheng Feng, Wenhan Liu, Zhicheng Dou• 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Document Ranking | TREC DL Track 2019 (test) | nDCG@10 | 67.3 | 133
Document Ranking | TREC DL Track 2020 (test) | nDCG@10 | 0.6299 | 63
Document Reranking | TREC-DL 2021 | nDCG@10 | 69.3 | 11
Document Reranking | TREC-DL 2022 | nDCG@10 | 42.74 | 11
Document Reranking | TREC-DL 2023 | nDCG@10 | 44.31 | 11
