
Tunable Soft Equivariance with Guarantees

About

Equivariance is a fundamental property for computer vision models, yet strict equivariance is rarely satisfied by real-world data, which can limit a model's performance. Controlling the degree of equivariance is therefore desirable. We propose a general framework for constructing soft-equivariant models by projecting the model weights onto a designed subspace. The method applies to any pre-trained architecture and provides theoretical bounds on the induced equivariance error. Empirically, we demonstrate the effectiveness of our method on multiple pre-trained backbones, including ViT and ResNet, across image classification, semantic segmentation, and human-trajectory prediction tasks. Notably, our approach improves performance while simultaneously reducing equivariance error on the competitive ImageNet benchmark.
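The abstract's idea of making a model softly equivariant by projecting its weights can be sketched in a few lines. The sketch below is an illustrative assumption, not the paper's actual method: it uses the classical Reynolds (group-averaging) projection of a linear layer's weight for a finite group, then linearly interpolates between the original and projected weights with a tunable coefficient `alpha` (alpha = 1 gives strict equivariance, alpha = 0 recovers the original weights). The function names and the choice of a cyclic-shift group are hypothetical.

```python
import numpy as np

def reynolds_projection(W, group_mats):
    # Reynolds operator: P(W) = (1/|G|) * sum_g rho(g)^{-1} @ W @ rho(g).
    # The result commutes with every rho(g), i.e. it is strictly equivariant.
    return sum(np.linalg.inv(R) @ W @ R for R in group_mats) / len(group_mats)

def soft_equivariant_weight(W, group_mats, alpha):
    # Tunable blend: alpha = 1 -> strictly equivariant, alpha = 0 -> original W.
    return (1 - alpha) * W + alpha * reynolds_projection(W, group_mats)

# Toy example: cyclic-shift group C_4 acting on R^4 by permutation matrices.
n = 4
shift = np.roll(np.eye(n), 1, axis=0)  # one-step cyclic-shift permutation
group = [np.linalg.matrix_power(shift, k) for k in range(n)]

rng = np.random.default_rng(0)
W = rng.standard_normal((n, n))
W_eq = soft_equivariant_weight(W, group, alpha=1.0)

# Equivariance check: the fully projected weight commutes with every shift.
x = rng.standard_normal(n)
for R in group:
    assert np.allclose(W_eq @ (R @ x), R @ (W_eq @ x))
```

Intermediate values of `alpha` trade off equivariance error against fidelity to the pre-trained weights, which is the "tunable" knob the abstract refers to.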

Md Ashiqur Rahman, Lim Jun Hao, Jeremiah Jiang, Teck-Yian Lim, Raymond A. Yeh • 2026

Related benchmarks

Task                         | Dataset               | Metric               | Result | Rank
Trajectory Prediction        | ETH-UCY (test)        | --                   | --     | 72
Trajectory Prediction        | Hotel ETH-UCY (test)  | ADE                  | 5.69   | 58
Trajectory Prediction        | UNIV ETH-UCY (test)   | ADE                  | 7.85   | 44
Regression                   | Synthetic O(5)        | Relative MSE (10^-1) | 0.72   | 12
Image Classification         | CIFAR10 (test)        | Accuracy             | 98.82  | 9
Image Classification         | CIFAR100 (test)       | Accuracy             | 91.03  | 9
Semantic Segmentation        | PASCAL VOC 15 (test)  | mIoU                 | 89.48  | 9
Human Trajectory Prediction  | ZARA1 ETH-UCY (test)  | cADE                 | 3.4    | 3
Human Trajectory Prediction  | ZARA2 ETH-UCY (test)  | cADE                 | 2.91   | 3

Other info

GitHub
