SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks

About

We introduce the SE(3)-Transformer, a variant of the self-attention module for 3D point clouds and graphs, which is equivariant under continuous 3D roto-translations. Equivariance is important to ensure stable and predictable performance in the presence of nuisance transformations of the data input. A positive corollary of equivariance is increased weight-tying within the model. The SE(3)-Transformer leverages the benefits of self-attention to operate on large point clouds and graphs with varying numbers of points, while guaranteeing SE(3)-equivariance for robustness. We evaluate our model on a toy N-body particle simulation dataset, showcasing the robustness of the predictions under rotations of the input. We further achieve competitive performance on two real-world datasets, ScanObjectNN and QM9. In all cases, our model outperforms a strong, non-equivariant attention baseline and an equivariant model without attention.
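
As a quick illustration of what the equivariance guarantee means in practice, below is a minimal NumPy sketch of how one can numerically test SE(3)-equivariance for a model with vector-valued (type-1) outputs: rotating and translating the input points should rotate the output vectors by the same rotation, with translations leaving vector features unchanged. The names `model`, `random_rotation`, and `check_se3_equivariance` are hypothetical and for illustration only; this is not the paper's released code.

```python
import numpy as np

def random_rotation():
    """Sample a random proper rotation matrix in SO(3)."""
    q, r = np.linalg.qr(np.random.randn(3, 3))
    q = q * np.sign(np.diag(r))   # fix the sign ambiguity of the QR factorisation
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]        # flip one axis so det(q) = +1 (a proper rotation)
    return q

def check_se3_equivariance(model, points, atol=1e-5):
    """model: a callable mapping (N, 3) coordinates to (N, 3) vector features.

    For a type-1 (vector) output, SE(3)-equivariance means
    model(R x + t) = R model(x) for every rotation R and translation t.
    """
    R = random_rotation()
    t = np.random.randn(3)
    rotate_after = model(points) @ R.T          # apply the model, then rotate
    transform_before = model(points @ R.T + t)  # transform the input, then apply the model
    return np.allclose(transform_before, rotate_after, atol=atol)
```

For scalar (type-0) outputs such as molecular energies, the analogous check is invariance: the output should be unchanged under the transformation.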

Fabian B. Fuchs, Daniel E. Worrall, Volker Fischer, Max Welling • 2020

Related benchmarks

Task | Dataset | Metric | Result | Rank
Molecular property prediction | QM9 (test) | mu | 0.051 | 174
Molecular property prediction | QM9 | Cv | 0.054 | 70
Atomic force prediction | MD17 (test) | -- | -- | 22
Dynamics Prediction | N-body 500 (train) | Prediction Error (1,2,0) | 5.54 | 13
Dynamics Prediction | N-body 1500 (train) | Prediction Error (1,2,0) | 5.02 | 13
Motion Capture Prediction | Motion Capture (test) | Prediction Error | 60.9 | 12
Aptamer Screening | GFP | Top-10 Precision | 0.2733 | 12
Property Prediction | QM9 random (test) | alpha (bohr^3) | 0.142 | 11
Aptamer Screening | HNRNPC | Top-10 Precision | 10 | 10
Future state prediction | M-complex Single System (5, 10) | MSE (x10^-2) | 24.48 | 10
Showing 10 of 31 rows
