
Collapse-Free Prototype Readout Layer for Transformer Encoders

About

DDCL-Attention is a prototype-based readout layer for transformer encoders that replaces simple pooling methods, such as mean pooling or class tokens, with a learned compression mechanism. It uses a small set of global prototype vectors and assigns tokens to them through soft probabilistic matching, producing compact token summaries at linear complexity in sequence length. The method offers three main advantages. First, it avoids prototype collapse through an exact decomposition of the training loss into a reconstruction term and a diversity term, ensuring that prototypes remain distinct. Second, its joint training with the encoder is shown to be stable under a practical timescale condition, using Tikhonov's singular perturbation theory and explicit learning-rate constraints. Third, the same framework supports three uses: a final readout layer, a differentiable codebook extending VQ-VAE, and a hierarchical document compressor. Experiments on four datasets confirm the theoretical predictions: the loss decomposition holds exactly, prototype separation grows as expected when the stability condition is met, and the codebook reaches full utilization, outperforming standard hard vector quantization. An additional study on orbital debris classification shows that the method also applies beyond standard NLP and vision tasks, including scientific tabular data.
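The soft probabilistic matching described above can be sketched as follows. This is a minimal illustration under assumptions, not the authors' code: the function name `prototype_readout`, the dot-product similarity, the temperature parameter, and the per-prototype normalization are all choices made here for clarity.

```python
import numpy as np

def prototype_readout(tokens, prototypes, temperature=1.0):
    """Soft-assign tokens to global prototypes and pool them.

    tokens:     (n, d) token embeddings from the encoder
    prototypes: (k, d) learned global prototype vectors, k << n
    Returns a (k, d) array of prototype summaries. The cost is
    O(n * k * d), i.e. linear in the sequence length n.
    """
    # Similarity of every token to every prototype: shape (n, k).
    logits = tokens @ prototypes.T / temperature
    # Softmax over prototypes gives soft assignment probabilities.
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    # Each prototype summary is a weighted average of the tokens
    # assigned to it (weights normalized per prototype).
    weights = probs / (probs.sum(axis=0, keepdims=True) + 1e-9)
    return weights.T @ tokens

# Example: compress 128 tokens into 4 prototype summaries.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(128, 16))
prototypes = rng.normal(size=(4, 16))
summary = prototype_readout(tokens, prototypes)
print(summary.shape)  # (4, 16)
```

In training, the prototypes would be learnable parameters updated jointly with the encoder; the paper's reconstruction-plus-diversity loss decomposition, which is what prevents the prototypes from collapsing onto each other, is not reproduced here.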

Giansalvo Cirrincione, Rahul Ranjeev Kumar• 2026

Related benchmarks

Task                  Dataset          Metric     Result   Rank
Document Clustering   20 Newsgroups    Accuracy   17.5     6
Clustering            Space debris     Accuracy   77.2     3
Text Clustering       SST-2 (K=2)      Accuracy   86.7     3
Text Clustering       IMDB (K=4)       Accuracy   91.3     1
