
Unveiling and Manipulating Prompt Influence in Large Language Models

About

Prompts play a crucial role in guiding the responses of Large Language Models (LLMs). However, the role of individual prompt tokens in shaping those responses, known as input saliency, remains largely underexplored. Existing saliency methods either misalign with LLM generation objectives or rely heavily on linearity assumptions, leading to potential inaccuracies. To address this, we propose Token Distribution Dynamics (TDD), a simple yet effective approach to unveil and manipulate the role of prompts in generating LLM outputs. TDD leverages the robust interpreting capabilities of the language model head (LM head) to assess input saliency. It projects input tokens into the embedding space and then estimates their significance based on distribution dynamics over the vocabulary. We introduce three TDD variants: forward, backward, and bidirectional, each offering unique insights into token relevance. Extensive experiments reveal that TDD surpasses state-of-the-art baselines by a large margin in elucidating the causal relationships between prompts and LLM outputs. Beyond mere interpretation, we apply TDD to two prompt manipulation tasks for controlled text generation: zero-shot toxic language suppression and sentiment steering. Empirical results underscore TDD's proficiency in identifying both toxic and sentimental cues in prompts, subsequently mitigating toxicity or modulating sentiment in the generated content.
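As a rough illustration of the forward-style idea, the sketch below scores each prompt token by projecting every position's final hidden state through the LM head and measuring how much the resulting vocabulary distribution shifts from one position to the next. This is only a minimal sketch of the general mechanism: the KL-divergence shift measure, the uniform prior for the first position, and the choice of gpt2 are illustrative assumptions, not the paper's exact TDD estimator.

```python
# Minimal sketch of a forward-style token-saliency score via the LM head.
# Assumptions (not the authors' exact method): KL divergence between
# consecutive next-token distributions as the "dynamics" measure, and a
# uniform prior for the first position.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt = "The movie was surprisingly"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# Project every input position's final hidden state through the LM head,
# yielding one distribution over the vocabulary per prompt token
# (equivalent to out.logits, but the LM-head projection is made explicit).
hidden = out.hidden_states[-1][0]          # (seq_len, d_model)
logits = model.lm_head(hidden)             # (seq_len, vocab)
probs = F.softmax(logits, dim=-1)

# Saliency of token t: how much the vocabulary distribution shifts once
# position t is consumed, i.e. KL(p_t || p_{t-1}). Position 0 is scored
# against a uniform prior as a placeholder assumption.
uniform = torch.full_like(probs[0], 1.0 / probs.size(-1))
prev = torch.cat([uniform.unsqueeze(0), probs[:-1]], dim=0)
prev = prev.clamp_min(1e-9)                # avoid log(0) from underflow
saliency = F.kl_div(prev.log(), probs, reduction="none").sum(-1)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, s in zip(tokens, saliency):
    print(f"{tok:>12s}  {s.item():.4f}")
```

The backward and bidirectional variants described above would traverse the positions in the opposite or in both directions; the paper gives the exact formulation of each.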

Zijian Feng, Hanzhang Zhou, Zixiao Zhu, Junlang Qian, Kezhi Mao • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Faithfulness Evaluation | WikiBio | AUC π-Soft-NS | 0.59 | 67 |
| Faithfulness Evaluation | TellMeWhy | AUC π-Soft-NS | 0.00e+0 | 67 |
| Attribution Faithfulness | LongRA | Soft-NC Score | 1.1 | 40 |
| Attribution Alignment | Curated Attribution Dataset (NarrativeQA + SciQ) | DSA (Dependent Sentence Attribution) | -0.27 | 40 |
| Causal Attribution | Causal and Downstream Robustness Ablation Suite (averaged over LLaMA-3.1 70B, Phi-3 14B, GPT-J 6B, Qwen2.5 3B) | Causal Pass@5 | 57 | 14 |
| Decoding Stability | Causal and Downstream Robustness Ablation Suite (averaged over 4 models) | Decoding Δ% | 3 | 14 |
| Fact Checking | Causal and Downstream Robustness Ablation Suite (averaged over 4 models) | Fact EM Δ | 1.1 | 14 |
| Span Extraction | Causal and Downstream Robustness Ablation Suite | Span F1 | 54 | 14 |
| Tool Use | Causal and Downstream Robustness Ablation Suite (averaged over 4 models) | Tool Hit@1 Δ | 1.3 | 14 |
