
WildCat: Near-Linear Attention in Theory and Practice

About

We introduce WildCat, a high-accuracy, low-cost approach to compressing the attention mechanism in neural networks. While attention is a staple of modern network architectures, it is also notoriously expensive to deploy due to resource requirements that scale quadratically with the input sequence length $n$. WildCat avoids these quadratic costs by attending only over a small weighted coreset. Crucially, we select the coreset using a fast but spectrally-accurate subsampling algorithm -- randomly pivoted Cholesky -- and weight the elements optimally to minimize reconstruction error. Remarkably, given bounded inputs, WildCat approximates exact attention with super-polynomial $O(n^{-\sqrt{\log(\log(n))}})$ error decay while running in near-linear $O(n^{1+o(1)})$ time. In contrast, prior practical approximations either lack error guarantees or require quadratic runtime to guarantee such high fidelity. We couple this advance with a GPU-optimized PyTorch implementation and a suite of benchmark experiments demonstrating the benefits of WildCat for image generation, image classification, and language model KV cache compression.
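The abstract's core ingredient, randomly pivoted Cholesky, is a known subsampling algorithm for positive semidefinite matrices: it repeatedly samples a pivot column with probability proportional to the residual diagonal, then downdates the residual. As a rough illustration only, here is a generic NumPy sketch of that sampling loop applied to a PSD kernel matrix; the function name and interface are mine, and WildCat's optimal coreset weighting and attention-specific machinery are not shown.

```python
import numpy as np

def rp_cholesky(A, k, rng=None):
    """Randomly pivoted partial Cholesky (generic sketch, not WildCat itself).

    Selects k pivot columns of the PSD matrix A, sampling each pivot with
    probability proportional to the current residual diagonal, and returns
    the pivot indices S and a factor F with A ~= F @ F.T.
    """
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    d = np.diag(A).astype(float).copy()    # residual diagonal of A
    F = np.zeros((n, k))
    S = np.zeros(k, dtype=int)
    for t in range(k):
        i = rng.choice(n, p=d / d.sum())   # pivot ~ residual diagonal
        S[t] = i
        g = A[:, i] - F[:, :t] @ F[i, :t]  # residual column at pivot i
        g = g / np.sqrt(g[i])              # normalize by pivot entry
        F[:, t] = g
        d -= g ** 2                        # downdate residual diagonal
        d = np.clip(d, 0.0, None)          # guard tiny negative roundoff
    return S, F
```

Because pivots are drawn proportionally to the residual diagonal, the sampler concentrates on rows that are still poorly approximated, which is what gives the method its spectral accuracy at low cost.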

Tobias Schröder, Lester Mackey • 2026

Related benchmarks

Task                                 Dataset            Result                  Rank
Long-context Language Understanding  LongBench-e        LCC: 64.85              9
Image Classification                 ImageNet-1k (val)  Top-1 Accuracy: 0.8218  7
