
iFlip: Iterative Feedback-driven Counterfactual Example Refinement

About

Counterfactual examples are minimal edits to an input that alter a model's prediction. They are widely employed in explainable AI to probe model behavior and in natural language processing (NLP) to augment training data. However, generating valid counterfactuals with large language models (LLMs) remains challenging, as existing single-pass methods often fail to induce reliable label changes and neglect LLMs' self-correction capabilities. To explore this untapped potential, we propose iFlip, an iterative refinement approach that leverages three types of feedback: model confidence, feature attribution, and natural language feedback. Our results show that iFlip achieves, on average, 57.8% higher validity than five state-of-the-art baselines, as measured by the label flipping rate. A user study further corroborates that iFlip outperforms the baselines in completeness, overall satisfaction, and feasibility. In addition, ablation studies demonstrate that three components are essential for iFlip to generate valid counterfactuals: using an appropriate number of iterations, pointing to highly attributed words, and early stopping. Finally, counterfactuals generated by iFlip enable effective counterfactual data augmentation, substantially improving model performance and robustness.
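
The abstract describes the refinement loop only at a high level. Below is a minimal sketch of how the three feedback signals and early stopping might fit together; the classifier and llm objects and all of their methods (predict, top_attributed_words, generate_counterfactual, revise) are hypothetical placeholders under assumed interfaces, not the paper's implementation.

def refine_counterfactual(text, target_label, classifier, llm,
                          max_iters=5, confidence_threshold=0.9):
    """Iteratively edit `text` until the classifier predicts `target_label`."""
    # Initial single-pass edit, as a baseline method would produce.
    candidate = llm.generate_counterfactual(text, target_label)

    for _ in range(max_iters):
        # Model-confidence feedback: current prediction and its confidence.
        label, confidence = classifier.predict(candidate)

        # Early stopping: accept as soon as the label flips with high confidence.
        if label == target_label and confidence >= confidence_threshold:
            return candidate

        # Feature-attribution feedback: words the classifier relies on most.
        salient_words = classifier.top_attributed_words(candidate, k=5)

        # Natural-language feedback summarising why the current edit failed.
        critique = (f"The classifier still predicts '{label}' "
                    f"(confidence {confidence:.2f}). "
                    f"Focus your edits on these influential words: {salient_words}.")

        # Ask the LLM to revise the candidate given the combined feedback.
        candidate = llm.revise(candidate, target_label, critique)

    return candidate  # best effort after max_iters iterations
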

Yilong Wang, Qianli Wang, Nils Feldhus • 2026

Related benchmarks

Task                        Dataset           Result       Rank
Counterfactual Generation   IMDB              LFR: 100     37
Counterfactual Generation   AG-News           LFR: 0.915   37
Counterfactual Generation   SNLI Hypothesis   LFR: 83      37
Counterfactual Generation   SNLI Premise      LFR: 0.73    37
Counterfactual Generation   AG News (test)    --           29
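LFR in the results above is the label flipping rate, the validity metric from the abstract: the fraction of generated counterfactuals whose predicted label differs from the original prediction. A minimal sketch of how it could be computed, assuming the same hypothetical classifier.predict interface (returning a label and a confidence) as in the sketch above.

def label_flipping_rate(pairs, classifier):
    """Compute LFR over (original_text, counterfactual_text) pairs."""
    flips = 0
    for original, counterfactual in pairs:
        orig_label, _ = classifier.predict(original)
        cf_label, _ = classifier.predict(counterfactual)
        flips += int(orig_label != cf_label)
    return flips / len(pairs)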
