
DRetHTR: Linear-Time Decoder-Only Retentive Network for Handwritten Text Recognition

About

State-of-the-art handwritten text recognition (HTR) systems commonly use Transformers, whose growing key-value (KV) cache makes decoding slow and memory-intensive. We introduce DRetHTR, a decoder-only model built on Retentive Networks (RetNet). Compared to an equally sized decoder-only Transformer baseline, DRetHTR delivers 1.6-1.9x faster inference with 38-42% less memory usage, without loss of accuracy. By replacing softmax attention with softmax-free retention and injecting multi-scale sequential priors, DRetHTR avoids a growing KV cache: decoding is linear in output length in both time and memory. To recover the local-to-global inductive bias of attention, we propose layer-wise gamma scaling, which progressively enlarges the effective retention horizon in deeper layers. This encourages early layers to model short-range dependencies and later layers to capture broader context, mitigating the flexibility gap introduced by removing softmax. Consequently, DRetHTR achieves best reported test character error rates of 2.26% (IAM-A, en), 1.81% (RIMES, fr), and 3.46% (Bentham, en), and is competitive on READ-2016 (de) with 4.21%. This demonstrates that decoder-only RetNet enables Transformer-level HTR accuracy with substantially improved decoding speed and memory efficiency.
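To make the constant-memory decoding claim concrete, the sketch below shows the recurrent form of retention: each step folds the new key-value pair into a fixed-size state with a decay factor gamma, so decoding T tokens needs O(1) memory per layer instead of a Transformer's O(T) KV cache. The `layerwise_gammas` schedule is our own illustrative assumption for layer-wise gamma scaling (gamma closer to 1 in deeper layers, i.e. a longer effective horizon ~1/(1-gamma)); the paper's exact formula may differ.

```python
import numpy as np

def retention_step(S, q, k, v, gamma):
    """One recurrent retention step (softmax-free, constant-size state).
    S: (d_k, d_v) running state; q, k: (d_k,); v: (d_v,)."""
    S = gamma * S + np.outer(k, v)  # decayed state update, no KV cache
    o = q @ S                       # output for this decoding step
    return S, o

def layerwise_gammas(num_layers, g_min=0.9, g_max=0.999):
    """Hypothetical layer-wise gamma scaling: geometric interpolation of
    (1 - gamma), so the effective retention horizon grows with depth."""
    return [1 - (1 - g_min) * ((1 - g_max) / (1 - g_min)) ** (l / (num_layers - 1))
            for l in range(num_layers)]

# Decoding T steps: memory stays constant (just S), unlike a growing KV cache.
d_k, d_v, T = 4, 4, 8
rng = np.random.default_rng(0)
S = np.zeros((d_k, d_v))
gamma = layerwise_gammas(6)[0]  # shallowest layer: short-range horizon
for _ in range(T):
    q, k, v = rng.normal(size=(3, d_k))
    S, o = retention_step(S, q, k, v, gamma)
```

Because the recurrence is linear, the same model also admits a parallel (chunked) training form; the recurrent form above is what yields the linear-time, constant-memory decoding highlighted in the abstract.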

Changhun Kim, Martin Mayr, Thomas Gorges, Fei Wu, Mathias Seuret, Andreas Maier, Vincent Christlein • 2026

Related benchmarks

Task                          Dataset            Metric    Result  Rank
Handwritten text recognition  IAM-A (test)       CER (%)   2.26    24
Handwritten text recognition  IAM Aachen (test)  CER (%)   2.26    23
Handwritten text recognition  READ 2016 (test)   CER (%)   4.21    23
Handwritten text recognition  RIMES (test)       CER (%)   1.81    15
Handwritten text recognition  Bentham (test)     CER (%)   3.46    4
