
Are LLMs Reliable Rankers? Rank Manipulation via Two-Stage Token Optimization

About

Large language models (LLMs) are increasingly used as rerankers in information retrieval, yet their ranking behavior can be steered by small, natural-sounding prompts. To expose this vulnerability, we present Rank Anything First (RAF), a two-stage token optimization method that crafts concise textual perturbations to consistently promote a target item in LLM-generated rankings while remaining hard to detect. Stage 1 uses Greedy Coordinate Gradient to shortlist candidate tokens at the current position by combining the gradient of the rank-target with a readability score; Stage 2 evaluates those candidates under exact ranking and readability losses using an entropy-based dynamic weighting scheme, and selects a token via temperature-controlled sampling. RAF generates ranking-promoting prompts token-by-token, guided by dual objectives: maximizing ranking effectiveness and preserving linguistic naturalness. Experiments across multiple LLMs show that RAF significantly boosts the rank of target items using naturalistic language, with greater robustness than existing methods in both promoting target items and maintaining naturalness. These findings underscore a critical security implication: LLM-based reranking is inherently susceptible to adversarial manipulation, raising new challenges for the trustworthiness and robustness of modern retrieval systems. Our code is available at: https://github.com/glad-lab/RAF.
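The two-stage loop described above can be sketched in miniature. The snippet below is an illustrative toy, not the paper's implementation: the gradient and readability scores are hard-coded stand-ins for model-derived quantities, the losses are simple proxies, and the "entropy-based dynamic weighting" is one plausible reading (a lower-entropy, more discriminative loss receives a larger weight). All names (`select_next_token`, `grad_score`, etc.) are hypothetical.

```python
import math
import random

def softmax(scores, temp=1.0):
    """Temperature-controlled softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp((s - m) / temp) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def entropy(probs):
    """Shannon entropy of a probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_next_token(vocab, grad_score, readability, k=3, temp=0.7, rng=None):
    """Toy two-stage selection of the next adversarial token.

    Stage 1 shortlists top-k candidates by combining a rank-target
    gradient proxy with a readability score; Stage 2 scores them under
    proxy ranking/readability losses with entropy-based weighting and
    samples one via temperature-controlled softmax. (Assumed proxies,
    not the paper's exact losses.)
    """
    rng = rng or random.Random(0)

    # ---- Stage 1: gradient + readability shortlist.
    combined = {t: grad_score[t] + readability[t] for t in vocab}
    shortlist = sorted(vocab, key=lambda t: -combined[t])[:k]

    # ---- Stage 2: proxy "exact" losses for each candidate.
    rank_loss = [1.0 - grad_score[t] for t in shortlist]
    read_loss = [1.0 - readability[t] for t in shortlist]

    # Entropy-based dynamic weighting (assumption): the loss whose
    # distribution over candidates is more peaked (lower entropy, i.e.
    # more discriminative) gets the larger weight.
    h_max = math.log(len(shortlist))
    h_rank = entropy(softmax([-l for l in rank_loss]))
    h_read = entropy(softmax([-l for l in read_loss]))
    w_rank = (h_max - h_rank) + 1e-6
    w_read = (h_max - h_read) + 1e-6
    w_rank, w_read = w_rank / (w_rank + w_read), w_read / (w_rank + w_read)

    total = [w_rank * r + w_read * d for r, d in zip(rank_loss, read_loss)]

    # Temperature-controlled sampling over the negative combined loss.
    probs = softmax([-l for l in total], temp=temp)
    return rng.choices(shortlist, weights=probs, k=1)[0], shortlist

# Hypothetical scores: "zxqv" has a high gradient but is unreadable,
# so the readability term keeps it out of the shortlist.
vocab = ["great", "best", "quality", "zxqv", "value"]
grad = {"great": 0.9, "best": 0.8, "quality": 0.7, "zxqv": 0.95, "value": 0.6}
read = {"great": 0.9, "best": 0.9, "quality": 0.8, "zxqv": 0.1, "value": 0.8}
token, shortlist = select_next_token(vocab, grad, read)
```

In this toy run the gibberish token "zxqv" is filtered out at Stage 1 despite its strong gradient score, mirroring how the dual objective trades raw ranking effectiveness against linguistic naturalness.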

Tiancheng Xing, Jerry Li, Yixuan Du, Xiyang Hu • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Output Ranking | ProductBench Home & Kitchen | Top-5 Accuracy | 78 | 28 |
| Promotion Success | Electronics | Top-5 Accuracy | 0.845 | 28 |
| Promotion Success Rate | Tools & Home Improvement | Top-5 Accuracy | 81 | 28 |
