Our new X account is live! Follow @wizwand_team for updates
WorkDL logo mark

GPT-4 Is Too Smart To Be Safe: Stealthy Chat with LLMs via Cipher

About

Safety lies at the core of the development of Large Language Models (LLMs). There is ample work on aligning LLMs with human ethics and preferences, including data filtering in pretraining, supervised fine-tuning, reinforcement learning from human feedback, and red teaming, etc. In this study, we discover that chat in cipher can bypass the safety alignment techniques of LLMs, which are mainly conducted in natural languages. We propose a novel framework CipherChat to systematically examine the generalizability of safety alignment to non-natural languages -- ciphers. CipherChat enables humans to chat with LLMs through cipher prompts topped with system role descriptions and few-shot enciphered demonstrations. We use CipherChat to assess state-of-the-art LLMs, including ChatGPT and GPT-4 for different representative human ciphers across 11 safety domains in both English and Chinese. Experimental results show that certain ciphers succeed almost 100% of the time to bypass the safety alignment of GPT-4 in several safety domains, demonstrating the necessity of developing safety alignment for non-natural languages. Notably, we identify that LLMs seem to have a ''secret cipher'', and propose a novel SelfCipher that uses only role play and several demonstrations in natural language to evoke this capability. SelfCipher surprisingly outperforms existing human ciphers in almost all cases. Our code and data will be released at https://github.com/RobustNLP/CipherChat.

Youliang Yuan, Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Pinjia He, Shuming Shi, Zhaopeng Tu• 2023

Related benchmarks

TaskDatasetResultRank
Jailbreak AttackHarmBench--
376
Jailbreak AttackSafeBench
ASR0.00e+0
112
Jailbreak DefenseJBB-Behaviors
ASR0.00e+0
101
Persona ManipulationANTHR (test)
Success Score87.08
72
Persona ManipulationBFI (test)
Success Score90.61
72
Persona ManipulationMPI (test)
Success Score70.42
72
JailbreakingAdvBench--
44
JailbreakAdvBench Ensemble configuration GPT-4o
Attack Success Rate (ASR)10
25
JailbreakingAdvBench (test)
ASR (GPT-3.5)41.5
12
Jailbreak AttackClaude 3.5
ASR6.5
10
Showing 10 of 11 rows

Other info

Follow for update