
Zero-shot LLM-guided Counterfactual Generation: A Case Study on NLP Model Evaluation

About

With the development and proliferation of large, complex, black-box models for many natural language processing (NLP) tasks, there is an increasing need for methods to stress-test these models and provide some degree of interpretability or explainability. While counterfactual examples are useful in this regard, automated counterfactual generation is a data- and resource-intensive process. Such methods typically depend on models such as pre-trained language models that are then fine-tuned on auxiliary, often task-specific datasets, which may be infeasible to build in practice, especially for new tasks and data domains. Therefore, in this work we explore the possibility of leveraging large language models (LLMs) for zero-shot counterfactual generation in order to stress-test NLP models. We propose a structured pipeline to facilitate this generation, and we hypothesize that the instruction-following and textual-understanding capabilities of recent LLMs can be effectively leveraged to generate high-quality counterfactuals in a zero-shot manner, without requiring any training or fine-tuning. Through comprehensive experiments on a variety of proprietary and open-source LLMs, along with various downstream NLP tasks, we explore the efficacy of LLMs as zero-shot counterfactual generators in evaluating and explaining black-box NLP models.
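The pipeline described above generates a minimally edited counterfactual for each input and then measures how often the edit flips the black-box model's prediction (the label-flip-rate metric reported in the benchmarks below). A minimal sketch of that loop, with all names hypothetical and a trivial rule-based stand-in replacing the actual zero-shot LLM call and the black-box classifier:

```python
# Hypothetical sketch of a zero-shot counterfactual-evaluation loop.
# The real pipeline prompts an LLM; a toy word-swap edit stands in here.

def toy_llm_edit(text: str) -> str:
    """Stand-in for a zero-shot LLM call: minimally edit the text so the
    target model's prediction is likely to flip (illustration only)."""
    swaps = {"good": "bad", "great": "terrible", "love": "hate"}
    for src, dst in swaps.items():
        if src in text:
            return text.replace(src, dst, 1)  # one minimal edit
    return text

def toy_classifier(text: str) -> str:
    """Stand-in for the black-box NLP model under evaluation."""
    negative_cues = ("bad", "terrible", "hate")
    return "negative" if any(w in text for w in negative_cues) else "positive"

def label_flip_rate(texts) -> float:
    """Fraction of generated counterfactuals that flip the model's label."""
    flips = 0
    for t in texts:
        original = toy_classifier(t)
        counterfactual = toy_llm_edit(t)
        flips += toy_classifier(counterfactual) != original
    return flips / len(texts)

texts = ["a good movie", "i love this film", "plain and forgettable"]
print(label_flip_rate(texts))  # 2 of 3 edits flip the label -> LFR ~0.667
```

In the actual method, `toy_llm_edit` would be an instruction prompt to an LLM asking for a minimal, label-flipping rewrite, and `toy_classifier` would be the black-box model being stress-tested; a higher flip rate indicates more effective counterfactuals.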

Amrita Bhattacharjee, Raha Moraffah, Joshua Garland, Huan Liu • 2024

Related benchmarks

| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Counterfactual Generation | SNLI Premise | LFR | 0.759 | 37 |
| Counterfactual Generation | SNLI Hypothesis | LFR | 82.1 | 37 |
| Counterfactual Generation | IMDB | LFR | 95.6 | 37 |
| Counterfactual Generation | AG-News | LFR | 0.416 | 37 |
| Counterfactual Generation | SST2 (test) | SLFR | 86.8 | 29 |
| Counterfactual Generation | AG News (test) | SLFR | 93.5 | 29 |
