Learning Permutation Distributions via Reflected Diffusion on Ranks

About

The finite symmetric group S_n provides a natural domain for permutations, yet learning probability distributions on S_n is challenging due to its factorially growing size and discrete, non-Euclidean structure. Recent permutation diffusion methods define forward noising via shuffle-based random walks (e.g., riffle shuffles) and learn reverse transitions with Plackett-Luce (PL) variants, but the resulting trajectories can be abrupt and increasingly hard to denoise as n grows. We propose Soft-Rank Diffusion, a discrete diffusion framework that replaces shuffle-based corruption with a structured soft-rank forward process: we lift permutations to a continuous latent representation of order by relaxing discrete ranks into soft ranks, yielding smoother and more tractable trajectories. For the reverse process, we introduce contextualized generalized Plackett-Luce (cGPL) denoisers that generalize prior PL-style parameterizations and improve expressivity for sequential decision structures. Experiments on sorting and combinatorial optimization benchmarks show that Soft-Rank Diffusion consistently outperforms prior diffusion baselines, with particularly strong gains in long-sequence and intrinsically sequential settings.
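The two components named in the abstract correspond to well-known building blocks: a differentiable relaxation of ranks and sequential Plackett-Luce sampling. The sketch below illustrates generic versions of both in NumPy; it assumes a common pairwise-sigmoid soft-rank formulation and a vanilla PL sampler, not the paper's specific soft-rank forward process or cGPL parameterization.

```python
import numpy as np

def soft_rank(scores, tau=1.0):
    """Differentiable relaxation of ranks via pairwise sigmoids (illustrative only).

    Each soft rank is r_i = 1 + sum_{j != i} sigmoid((s_i - s_j) / tau);
    as tau -> 0 this recovers the discrete ascending ranks (smallest score -> rank 1).
    """
    s = np.asarray(scores, dtype=float)
    diff = (s[:, None] - s[None, :]) / tau       # pairwise score differences
    pairwise = 1.0 / (1.0 + np.exp(-diff))       # soft "i exceeds j" indicators
    np.fill_diagonal(pairwise, 0.0)              # drop self-comparisons
    return 1.0 + pairwise.sum(axis=1)

def plackett_luce_sample(logits, rng=None):
    """Sample a permutation from a Plackett-Luce model with per-item logits.

    Items are drawn sequentially without replacement, each chosen with
    probability proportional to exp(logit) among the remaining items.
    """
    rng = np.random.default_rng() if rng is None else rng
    remaining = list(range(len(logits)))
    order = []
    while remaining:
        w = np.exp(np.asarray([logits[i] for i in remaining], dtype=float))
        pick = rng.choice(len(remaining), p=w / w.sum())
        order.append(remaining.pop(pick))
    return order
```

As tau decreases, soft_rank approaches the hard ranking, so a temperature schedule gives a continuum between the relaxed latent representation and discrete permutations; plackett_luce_sample shows the kind of sequential factorization that PL-style reverse denoisers build on.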

Sizhuang He, Yangtian Zhang, Shiyang Zhang, David van Dijk • 2026

Related benchmarks

Task                           Dataset    Metric           Result   Rank
Traveling Salesman Problem     TSP N=20   Optimality Gap   0.34     33
Traveling Salesperson Problem  TSP-50     Tour Length      5.7157   6
