GenBFA: An Evolutionary Optimization Approach to Bit-Flip Attacks on LLMs
About
Large Language Models (LLMs) have revolutionized natural language processing (NLP), excelling in tasks like text generation and summarization. However, their increasing adoption in mission-critical applications raises concerns about hardware-based threats, particularly bit-flip attacks (BFAs). BFAs, enabled by fault injection methods such as Rowhammer, target model parameters in memory, compromising both integrity and performance. Identifying critical parameters for BFAs in the vast parameter space of LLMs poses significant challenges. While prior research suggests transformer-based architectures are inherently more robust to BFAs compared to traditional deep neural networks, we challenge this assumption. For the first time, we demonstrate that as few as three bit-flips can cause catastrophic performance degradation in an LLM with billions of parameters. Current BFA techniques are inadequate for exploiting this vulnerability due to the difficulty of efficiently identifying critical parameters within the immense parameter space. To address this, we propose AttentionBreaker, a novel framework tailored for LLMs that enables efficient traversal of the parameter space to identify critical parameters. Additionally, we introduce GenBFA, an evolutionary optimization strategy designed to refine the search further, isolating the most critical bits for an efficient and effective attack. Empirical results reveal the profound vulnerability of LLMs to AttentionBreaker. For example, merely three bit-flips (4.129 x 10^-9% of total parameters) in the LLaMA3-8B-Instruct 8-bit quantized (W8) model result in a complete performance collapse: accuracy on MMLU tasks drops from 67.3% to 0%, and Wikitext perplexity skyrockets from 12.6 to 4.72 x 10^5. These findings underscore the effectiveness of AttentionBreaker in uncovering and exploiting critical vulnerabilities within LLM architectures.
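The attack hinges on the fact that W8 weights are stored as signed 8-bit integers in two's complement, so flipping a single most-significant bit shifts a weight's value by 128. Below is a minimal, self-contained sketch of that idea: it flips individual bits of toy int8 weights and ranks candidate (weight, bit) pairs by the perturbation they induce. All names here are hypothetical; this brute-force ranking is only a stand-in for the paper's gradient-guided sensitivity analysis and GenBFA's evolutionary refinement.

```python
def flip_bit(w: int, bit: int) -> int:
    """Flip one bit of a signed 8-bit (int8) weight, two's-complement style."""
    u = (w & 0xFF) ^ (1 << bit)          # unsigned byte view, toggle the bit
    return u - 256 if u >= 128 else u    # map back into the signed int8 range

def perturbation(weights: list[int], idx: int, bit: int) -> int:
    """Magnitude of the value change a single bit-flip induces in one weight."""
    return abs(flip_bit(weights[idx], bit) - weights[idx])

# Toy int8 weight vector standing in for one quantized layer.
weights = [3, -20, 7, 115, -1, 0, 42, -87]

# Exhaustively score every (weight index, bit position) candidate.
candidates = [(i, b, perturbation(weights, i, b))
              for i in range(len(weights)) for b in range(8)]
worst = max(candidates, key=lambda c: c[2])
print(f"most critical flip: weight {worst[0]}, bit {worst[1]}, delta {worst[2]}")
```

As expected, the top-ranked flips all target bit 7 (the sign bit), since any such flip moves a weight by 128 regardless of its value. In a real model the score would instead be the loss or perplexity increase the flip causes, which is precisely what makes exhaustive search infeasible over billions of parameters.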
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Adversarial Attack | COCO (val) | M | 7.8 | 44 |
| Mathematical Reasoning | GSM8K (test) | Accuracy | 0.00e+0 | 29 |
| Reading Comprehension | DROP (test) | F1 Score | 0.00e+0 | 29 |
| Factual Question Answering | TriviaQA (test) | Accuracy | 0.00e+0 | 29 |
| Language Modeling | MMLU | MMLU Final Performance | 0.38 | 12 |
| Bit-Flip Search | MMLU 1/10th subset (test) | End-to-End Runtime (hours) | 10 | 6 |
| Fault Assessment Efficiency | MMLU and MMLU-Pro on LLM Workloads (GPT-2 Large, LLaMA 3.1 8B, DeepSeek-V2 7B) | Coverage | 84.6 | 5 |
| Visual Question Answering | VQA v2 | Accuracy | 0.6 | 4 |
| Bit-Flip Search | VQA v2 | Runtime (hours) | 48 | 2 |