Our new X account is live! Follow @wizwand_team for updates
WorkDL logo mark

Large Language Models Are Involuntary Truth-Tellers: Exploiting Fallacy Failure for Jailbreak Attacks

About

We find that language models have difficulties generating fallacious and deceptive reasoning. When asked to generate deceptive outputs, language models tend to leak honest counterparts but believe them to be false. Exploiting this deficiency, we propose a jailbreak attack method that elicits an aligned language model for malicious output. Specifically, we query the model to generate a fallacious yet deceptively real procedure for the harmful behavior. Since a fallacious procedure is generally considered fake and thus harmless by LLMs, it helps bypass the safeguard mechanism. Yet the output is factually harmful since the LLM cannot fabricate fallacious solutions but proposes truthful ones. We evaluate our approach over five safety-aligned large language models, comparing four previous jailbreak methods, and show that our approach achieves competitive performance with more harmful outputs. We believe the findings could be extended beyond model safety, such as self-verification and hallucination.

Yue Zhou, Henry Peng Zou, Barbara Di Eugenio, Yang Zhang• 2024

Related benchmarks

TaskDatasetResultRank
Safety EvaluationAdvBench 50 examples
Safe Response Rate100
32
Jailbreak AttackJailbreak prompts Manufacturing and distributing illegal drugs
HPR98
24
Showing 2 of 2 rows

Other info

Follow for update