
Efficient OpAmp Adaptation for Zoom Attention to Golden Contexts

About

Large language models (LLMs) have shown significant promise in question-answering (QA) tasks, particularly in retrieval-augmented generation (RAG) scenarios and long-context applications. However, their performance is hindered by noisy reference documents, which often distract from essential information. Even after fine-tuning, Transformer-based architectures struggle to prioritize relevant content, as evidenced by their tendency to allocate disproportionate attention to irrelevant or later-positioned documents. Recent work proposes a differential attention mechanism to address this issue, but that mechanism is limited by an unsuitable common-mode rejection ratio (CMRR) and high computational costs. Inspired by the operational amplifier (OpAmp), we propose the OpAmp adaptation to address these challenges, implemented efficiently with adapters. By integrating the adapter into pre-trained Transformer blocks, our approach enhances focus on the golden context without costly training from scratch. Empirical evaluations on noisy-context benchmarks show that our Qwen2.5-OpAmp-72B model, trained with the OpAmp adaptation, surpasses state-of-the-art LLMs, including DeepSeek-V3 and GPT-4o.
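The abstract compresses two technical ideas: a differential-attention term that cancels attention mass shared across noisy context (the "common mode", by analogy with an OpAmp's common-mode rejection), and an adapter that attaches this term to frozen pre-trained Transformer blocks so no training from scratch is needed. The paper's exact OpAmp formulation is not reproduced on this page, so the sketch below only illustrates those two ideas under stated assumptions: the class names, the scalar `lam` parameterization, and the residual adapter wiring are hypothetical, not the authors' implementation.

```python
# Hedged sketch: NOT the paper's OpAmp adaptation. It illustrates the two
# ideas the abstract names, under assumptions flagged in the comments:
# (1) a differential-attention term that subtracts two attention maps, so
#     attention mass both maps share ("common mode" noise over irrelevant
#     context) cancels, and
# (2) an adapter that adds this term to a frozen pre-trained attention block,
#     so the base model is not retrained from scratch.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class DifferentialAttentionSketch(nn.Module):
    """Single-head differential attention (illustrative, not the paper's)."""

    def __init__(self, d_model: int, d_head: int):
        super().__init__()
        # Two query/key pairs produce the two attention maps to subtract.
        self.q_proj = nn.Linear(d_model, 2 * d_head, bias=False)
        self.k_proj = nn.Linear(d_model, 2 * d_head, bias=False)
        self.v_proj = nn.Linear(d_model, d_head, bias=False)
        # Assumed scalar mixing weight; its effective value plays the role of
        # the rejection ratio: larger lam rejects more of the shared attention.
        self.lam = nn.Parameter(torch.tensor(0.5))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        q1, q2 = self.q_proj(x).chunk(2, dim=-1)
        k1, k2 = self.k_proj(x).chunk(2, dim=-1)
        v = self.v_proj(x)
        scale = 1.0 / math.sqrt(q1.size(-1))
        a1 = F.softmax(q1 @ k1.transpose(-2, -1) * scale, dim=-1)
        a2 = F.softmax(q2 @ k2.transpose(-2, -1) * scale, dim=-1)
        # Attention mass present in both maps cancels; what survives is the
        # differential signal concentrated on the golden context.
        return (a1 - self.lam * a2) @ v


class OpAmpStyleAdapterSketch(nn.Module):
    """Frozen pre-trained attention plus a small trainable differential term."""

    def __init__(self, base_attn: nn.Module, d_model: int, d_head: int):
        super().__init__()
        self.base_attn = base_attn
        for p in self.base_attn.parameters():
            p.requires_grad = False  # keep pre-trained weights untouched
        self.diff = DifferentialAttentionSketch(d_model, d_head)
        self.out = nn.Linear(d_head, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual wiring is an assumption: base output plus adapter output.
        return self.base_attn(x) + self.out(self.diff(x))


if __name__ == "__main__":
    # Stand-in for a pre-trained attention block (any x -> same-shape map).
    base = nn.Linear(512, 512, bias=False)
    block = OpAmpStyleAdapterSketch(base, d_model=512, d_head=64)
    print(block(torch.randn(1, 16, 512)).shape)  # torch.Size([1, 16, 512])
```

Only the `diff` and `out` parameters are trainable here, which mirrors the abstract's claim that the adapter avoids costly training from scratch; the paper's actual formulation and its treatment of the CMRR may differ.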

Haoyuan Wu, Rui Ming, Haisheng Zheng, Zhuolun He, Bei Yu • 2025

Related benchmarks

Task                               | Dataset     | Result                | Rank
Multi-hop Question Answering       | HotpotQA    | --                    | 221
Multi-hop Reasoning                | MuSiQue     | EM 48                 | 41
Long-context Question Answering    | NarrativeQA | EM 61.7               | 11
Multi-hop Reasoning                | MultiHopRAG | EM 89.6               | 11
Noisy-RAG Question Answering       | CoQA        | EM 92.4               | 11
Long Dependency Question Answering | LooGLE      | --                    | 9
Long-context Question Answering    | LooGLE      | EM 66.3               | 6
