
LLMCache: Layer-Wise Caching Strategies for Accelerated Reuse in Transformer Inference

About

Transformer-based language models have achieved remarkable performance across a wide range of tasks, yet their high inference latency poses a significant challenge for real-time and large-scale deployment. While existing caching mechanisms, such as token-level key-value caches, offer speedups in autoregressive decoding, they are limited in scope and applicability. In this paper, we present LLMCache, a novel layer-wise caching framework that accelerates transformer inference by reusing intermediate activations based on the semantic similarity of input sequences. Unlike prior work, LLMCache is model-agnostic, operates across both encoder and decoder architectures, and supports caching at arbitrary transformer layers. We introduce a lightweight fingerprinting mechanism for matching semantically similar inputs and propose adaptive eviction strategies to manage cache staleness. Experiments on BERT and GPT-2 across SQuAD, WikiText-103, and OpenBookQA show up to 3.1x speedup in inference time with <0.5% accuracy degradation. Our results highlight LLMCache as a practical and general-purpose solution for optimizing transformer inference in real-world applications.
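The core idea (fingerprint an input, reuse cached per-layer activations on a match, evict stale entries) can be sketched as follows. This is an illustrative toy, not the paper's implementation: the class name, the bucketed-token fingerprint, and the LRU eviction policy are all assumptions standing in for the paper's semantic fingerprinting and adaptive eviction.

```python
from collections import OrderedDict
import hashlib

class LayerActivationCache:
    """Hypothetical sketch of a layer-wise activation cache in the spirit
    of LLMCache: activations are keyed by a lightweight input fingerprint
    and evicted least-recently-used when the cache is full."""

    def __init__(self, max_entries=1024):
        self.max_entries = max_entries
        self._store = OrderedDict()  # fingerprint -> {layer_idx: activation}

    @staticmethod
    def fingerprint(token_ids, bucket=4):
        # Toy locality-sensitive fingerprint: hash coarse token buckets so
        # near-identical inputs collide on the same key. A real system
        # would fingerprint embeddings, not raw token ids.
        coarse = tuple(t // bucket for t in token_ids)
        return hashlib.sha1(repr(coarse).encode()).hexdigest()

    def lookup(self, token_ids, layer_idx):
        key = self.fingerprint(token_ids)
        entry = self._store.get(key)
        if entry is None or layer_idx not in entry:
            return None  # cache miss: run the layer normally
        self._store.move_to_end(key)  # mark entry as recently used
        return entry[layer_idx]

    def insert(self, token_ids, layer_idx, activation):
        key = self.fingerprint(token_ids)
        self._store.setdefault(key, {})[layer_idx] = activation
        self._store.move_to_end(key)
        while len(self._store) > self.max_entries:
            self._store.popitem(last=False)  # evict least-recently-used
```

In use, a wrapper around each transformer layer would first call `lookup` and only execute the layer (then `insert`) on a miss; with the bucketed fingerprint above, inputs differing only by near-identical tokens hit the same cached activation.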

Harsh Vardhan Bansal • 2025

Related benchmarks

Task               | Dataset                                | Result                        | Rank
Inference          | WikiText-103, SQuAD v2, and OpenBookQA | Inference Latency (ms): 57.9  | 7
Language Modeling  | WikiText-103                           | Accuracy: 91.9                | 3
Question Answering | SQuAD v2                               | Accuracy: 86.1                | 3
Question Answering | OpenBookQA                             | Accuracy: 72.3                | 3
