
Diverge to Induce Prompting: Multi-Rationale Induction for Zero-Shot Reasoning

About

To address the instability of unguided reasoning paths in standard Chain-of-Thought prompting, recent methods guide large language models (LLMs) by first eliciting a single reasoning strategy. However, relying on just one strategy per question can still limit performance across diverse tasks. We propose Diverge-to-Induce Prompting (DIP), a framework that first prompts an LLM to generate multiple diverse high-level rationales for each question. Each rationale is then elaborated into a detailed, step-by-step draft plan, and these draft plans are finally induced into a single final plan. DIP improves zero-shot reasoning accuracy without relying on resource-intensive sampling. Experiments show that DIP outperforms single-strategy prompting, demonstrating the effectiveness of multi-plan induction for prompt-based reasoning.
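The diverge-elaborate-induce pipeline described above can be sketched as a short program. This is a minimal illustration only, assuming a generic text-in/text-out LLM interface: the `call_llm` stub and all prompt templates below are hypothetical placeholders, not the authors' actual prompts.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (hypothetical stub)."""
    return f"[LLM output for: {prompt[:40]}...]"

def dip(question: str, num_rationales: int = 3) -> str:
    """Sketch of Diverge-to-Induce Prompting (DIP), per the abstract."""
    # Step 1: diverge -- elicit multiple diverse high-level rationales.
    rationales = [
        call_llm(f"Give high-level rationale #{i + 1} for solving: {question}")
        for i in range(num_rationales)
    ]
    # Step 2: elaborate each rationale into a step-by-step draft plan.
    drafts = [
        call_llm(f"Expand this rationale into a detailed step-by-step plan: {r}")
        for r in rationales
    ]
    # Step 3: induce -- merge the draft plans into one final plan.
    return call_llm(
        "Induce a single final plan from these drafts:\n" + "\n---\n".join(drafts)
    )
```

Note that the pipeline is a fixed number of prompt calls per question (here `2 * num_rationales + 1`), in contrast to sampling-based approaches that draw many full reasoning traces.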

Po-Chun Chen, Hen-Hsen Huang, Hsin-Hsi Chen • 2026

Related benchmarks

Task        Dataset               Result             Rank
Reasoning   BBH                   Accuracy: 92.35    507
Reasoning   LiveBench Reasoning   Accuracy: 92       80
