Ranked Voting based Self-Consistency of Large Language Models

About

Majority voting is considered an effective method to enhance chain-of-thought reasoning, as it selects the answer with the highest "self-consistency" among different reasoning paths (Wang et al., 2023). However, previous chain-of-thought reasoning methods typically generate only a single answer in each trial, thereby ignoring the possibility of other potential answers. As a result, these alternative answers are often overlooked in subsequent voting processes. In this work, we propose to generate ranked answers in each reasoning process and conduct ranked voting among multiple ranked answers from different responses, thereby making the overall self-consistency more reliable. Specifically, we use three ranked voting methods: Instant-runoff voting, Borda count voting, and mean reciprocal rank voting. We validate our methods on six datasets, including three multiple-choice and three open-ended question-answering tasks, using both advanced open-source and closed-source large language models. Extensive experimental results indicate that our proposed method outperforms the baselines, showcasing the potential of leveraging the information of ranked answers and using ranked voting to improve reasoning performance. The code is available at https://github.com/szu-tera/RankedVotingSC.
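The three ranked voting schemes named in the abstract are standard aggregation rules. Below is a minimal sketch of each, assuming every LLM response yields a ranked list of candidate answers (best first); this is an illustration of the voting rules themselves, not the authors' implementation, which is available at the linked repository.

```python
from collections import Counter

def borda_count(rankings):
    """Each ranked list of length n awards n-1 points to its top answer,
    n-2 to the next, and so on; the answer with the most points wins."""
    scores = Counter()
    for ranking in rankings:
        n = len(ranking)
        for i, ans in enumerate(ranking):
            scores[ans] += n - 1 - i
    return max(scores, key=scores.get)

def mean_reciprocal_rank(rankings):
    """Each answer scores 1/rank within every list that contains it;
    the answer with the highest total reciprocal rank wins."""
    scores = Counter()
    for ranking in rankings:
        for i, ans in enumerate(ranking):
            scores[ans] += 1.0 / (i + 1)
    return max(scores, key=scores.get)

def instant_runoff(rankings):
    """Repeatedly count first-choice votes; if no answer holds a
    majority, eliminate the weakest answer and recount."""
    rankings = [list(r) for r in rankings]
    while True:
        first = Counter(r[0] for r in rankings if r)
        total = sum(first.values())
        winner, votes = first.most_common(1)[0]
        if votes * 2 > total or len(first) == 1:
            return winner
        loser = min(first, key=first.get)
        rankings = [[a for a in r if a != loser] for r in rankings]

# Example: three reasoning paths, each ranking candidate answers A/B/C.
paths = [["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]
print(borda_count(paths))          # A
print(mean_reciprocal_rank(paths)) # A
print(instant_runoff(paths))       # A
```

In contrast, ordinary self-consistency would keep only the first element of each list and take a plain majority vote, discarding the information carried by the lower-ranked candidates.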

Weiqin Wang, Yile Wang, Hui Huang • 2025

Related benchmarks

Task                                  | Dataset              | Result        | Rank
Information Visual Question Answering | InfoVQA (test)       | ANLS 83.6     | 130
Multi-modal Video Understanding       | MVBench              | --            | 65
Multimodal Understanding              | MMBench EN v1.1      | Accuracy 86   | 63
Video Multimodal Understanding        | VideoMMMU            | Accuracy 65.1 | 47
Multimodal Reasoning                  | V* Bench Tool-needed | Accuracy 86.9 | 15
