
BitHydra: Towards Bit-flip Inference Cost Attack against Large Language Models

About

Large language models (LLMs) are widely deployed, but their substantial compute demands make them vulnerable to inference cost attacks that aim to deliberately maximize output length. In this work, we investigate a distinct attack surface: maximizing inference cost by tampering with the model parameters instead of the inputs. This approach leverages the established capability of Bit-Flip Attacks (BFAs) to persistently alter model behavior via minute weight perturbations, effectively decoupling the attack from specific input queries. To realize this, we propose BitHydra, a framework that addresses the unique optimization challenge of identifying the exact weight bits that maximize generation cost. We formulate the attack as a constrained Binary Integer Programming (BIP) problem designed to systematically suppress the end-of-sequence (i.e., <eos>) probability. To overcome the intractability of the discrete search space, we relax the problem into a continuous optimization task and solve it via the Alternating Direction Method of Multipliers (ADMM). We evaluate BitHydra across 10 LLMs (1.5B-16B). Our results demonstrate that the proposed optimization method efficiently achieves endless generation with as few as 1-4 bit flips on all tested models, verifying the effectiveness of the ADMM-based formulation against both standard models and potential defenses.
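The ADMM-based relaxation described above can be illustrated with a toy sketch. This is not the authors' implementation: the per-bit surrogate gradient `g`, the flip budget `k`, and all hyperparameters are hypothetical. It shows the general pattern of relaxing a binary flip mask to a continuous variable, then alternating a gradient step, a projection onto sparse binary vectors, and a dual update.

```python
import numpy as np

# Toy ADMM sketch for selecting which weight bits to flip so that a
# surrogate <eos> logit is suppressed. We relax the binary flip mask
# m in {0,1}^n to a continuous u in [0,1]^n, enforce u = z with z
# binary and k-sparse, and alternate the three ADMM updates.

rng = np.random.default_rng(0)
n = 64                       # number of candidate bit positions (hypothetical)
g = rng.normal(size=n)       # per-bit effect on the <eos> logit (hypothetical surrogate)
k = 4                        # flip budget (the paper reports 1-4 flips suffice)
rho, lr = 1.0, 0.1           # ADMM penalty weight and gradient step size

def f_grad(u):
    # Linear surrogate objective eos_logit(u) = g @ u, to be minimised;
    # its gradient with respect to u is simply g.
    return g

u = np.full(n, 0.5)          # relaxed flip mask
z = np.zeros(n)              # binary-feasible copy of u
lam = np.zeros(n)            # scaled dual variable
for _ in range(200):
    # u-update: gradient step on f(u) + (rho/2) * ||u - z + lam||^2
    u -= lr * (f_grad(u) + rho * (u - z + lam))
    u = np.clip(u, 0.0, 1.0)
    # z-update: project u + lam onto the set of k-sparse binary vectors
    v = u + lam
    z = np.zeros(n)
    z[np.argsort(v)[-k:]] = 1.0
    # dual update: accumulate the constraint residual u - z
    lam += u - z

flips = np.flatnonzero(z)    # indices of the bits selected for flipping
```

The projection in the z-update keeps every iterate feasible for the flip budget, while the dual variable gradually forces the relaxed mask `u` to agree with the binary `z`, so the selected bits end up being those whose flips most strongly suppress the surrogate <eos> logit.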

Xiaobei Yan, Yiming Li, Hao Wang, Han Qiu, Tianwei Zhang • 2025

Related benchmarks

Task                           | Dataset                   | Metric                | Result  | Rank
Bit-flip Inference Cost Attack | Alpaca (test)             | Avg Length (Original) | 1.12e+3 | 10
Inference Cost Attack          | Alpaca Samantha-7B (test) | Average Length        | 1.94e+3 | 6
Inference Cost Attack          | Alpaca Vicuna-7B (test)   | Average Length        | 1.87e+3 | 6
Inference Cost Attack          | Alpaca Llama2-7B (test)   | Average Length        | 2.01e+3 | 6
