
BOSCH: Black-Box Binary Optimization for Short-Context Attention-Head Selection in LLMs

About

Post-training hybridization of large language models (LLMs) often replaces quadratic self-attention with sliding-window attention (SWA) to reduce KV cache usage and improve latency. Existing hybridization schemes are typically defined either at the layer level (e.g., interleaving) or at the head level via static rankings from local to global. Layer-level schemes ignore that local and global dependencies are routed through heads within the same layer, while static head-level rankings suffer from entanglement: a head's local/global behavior can change after hybridization. We propose BOSCH, Black-box Binary Optimization for Short-context Head Selection, a training-free method that formulates the problem as a Large Neighborhood Search and decomposes it into three subproblems: (i) layer-importance detection via small-budget black-box probes, (ii) adaptive per-layer SWA-ratio assignment based on these sensitivities, and (iii) grouped head-level optimization within ratio buckets. Extensive experiments on 4 LLMs ranging from 1.7B to 30B parameters, across 4 SWA ratios, show that BOSCH consistently outperforms layer-level heuristics and 6 strong static head-level methods, with larger gains at higher SWA ratios. Under continual pretraining, BOSCH recovers original long-context performance faster and to a higher level. Analysis of the selected heads reveals substantial turnover for BOSCH across different SWA ratios, underscoring the importance of performing head-level selection for each target ratio rather than relying on fixed locality rankings.
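The three-subproblem decomposition described above can be sketched on a toy model. This is a minimal illustrative sketch, not the authors' implementation: the evaluator, sensitivity values, quota rule, and all names are assumptions chosen only to show how layer probing, adaptive ratio assignment, and grouped head selection compose.

```python
# Hypothetical sketch of BOSCH's three-stage decomposition on a toy model.
# The black-box evaluator and per-head costs are invented for illustration.
import random

random.seed(0)

N_LAYERS, N_HEADS = 4, 8   # toy model size
TARGET_RATIO = 0.5         # global fraction of heads converted to SWA

# Toy ground truth: later layers are more sensitive to losing full attention.
SENS = [[(l + 1) * 0.1 + random.random() * 0.05 for _ in range(N_HEADS)]
        for l in range(N_LAYERS)]

def evaluate(mask):
    """Black-box score (higher is better); mask[l][h] == 1 means head uses SWA."""
    return 100.0 - sum(SENS[l][h] for l in range(N_LAYERS)
                       for h in range(N_HEADS) if mask[l][h])

# (i) Layer-importance detection: probe each layer by flipping it fully to SWA.
base = evaluate([[0] * N_HEADS for _ in range(N_LAYERS)])
layer_sens = []
for l in range(N_LAYERS):
    probe = [[1 if i == l else 0 for _ in range(N_HEADS)]
             for i in range(N_LAYERS)]
    layer_sens.append(base - evaluate(probe))

# (ii) Adaptive per-layer SWA budgets: less sensitive layers absorb more SWA
# heads. Quotas are rounded independently, so their sum only approximates
# the global budget in this sketch.
budget = round(TARGET_RATIO * N_LAYERS * N_HEADS)
inv = [1.0 / s for s in layer_sens]
quota = [min(N_HEADS, round(budget * w / sum(inv))) for w in inv]

# (iii) Grouped head-level selection: within each layer, greedily convert the
# heads whose individual flip costs the least under the black-box score.
mask = [[0] * N_HEADS for _ in range(N_LAYERS)]
for l in range(N_LAYERS):
    cur = evaluate(mask)
    costs = []
    for h in range(N_HEADS):
        trial = [row[:] for row in mask]
        trial[l][h] = 1
        costs.append((cur - evaluate(trial), h))
    for _, h in sorted(costs)[:quota[l]]:
        mask[l][h] = 1

print("layer sensitivities:", [round(s, 2) for s in layer_sens])
print("per-layer SWA quota:", quota)
print("final score:", round(evaluate(mask), 2))
```

The greedy inner loop stands in for the grouped optimization within ratio buckets; the paper's actual search operates over head groups and multiple target ratios rather than one layer at a time.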

Abbas Ghaddar, Ivan Kobyzev, Boxing Chen, Yufei Cui• 2026

Related benchmarks

Task                                  Dataset                    Metric          Result   Rank
Long-context language modeling        LongBench                  Average Score   51.5     164
Long-context language understanding   LongBench                  Average Score   56.2     86
Information retrieval                 NIAH (test)                Average Score   99.2     59
Long-context retrieval                NIAH 64k                   Single Score    43.7     20
Long-context retrieval                NIAH 128k                  Single Score    18.7     20
Information retrieval                 NIAH single v1.0 (test)    Accuracy        100      8
Information retrieval                 NIAH multikey v1.0 (test)  Accuracy        98.7     4
