
Reducing SO(3) Convolutions to SO(2) for Efficient Equivariant GNNs

About

Graph neural networks that model 3D data, such as point clouds or atoms, are typically desired to be $SO(3)$ equivariant, i.e., equivariant to 3D rotations. Unfortunately, equivariant convolutions, a fundamental operation for equivariant networks, increase significantly in computational complexity as higher-order tensors are used. In this paper, we address this issue by reducing the $SO(3)$ convolutions or tensor products to mathematically equivalent convolutions in $SO(2)$. This is accomplished by aligning the node embeddings' primary axis with the edge vectors, which sparsifies the tensor product and reduces the computational complexity from $O(L^6)$ to $O(L^3)$, where $L$ is the degree of the representation. We demonstrate the potential implications of this improvement by proposing the Equivariant Spherical Channel Network (eSCN), a graph neural network utilizing our novel approach to equivariant convolutions, which achieves state-of-the-art results on the large-scale OC-20 and OC-22 datasets.
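The alignment step described above can be sketched in plain NumPy. This is an illustrative sketch only: the paper applies the corresponding Wigner-D rotation to the spherical-harmonic node embeddings, whereas here we only construct the underlying $SO(3)$ rotation that maps an edge vector onto a chosen primary axis (the z-axis is assumed here for concreteness).

```python
import numpy as np

def align_to_axis(edge_vec, axis=np.array([0.0, 0.0, 1.0])):
    """Return a rotation matrix R such that R @ (edge_vec / |edge_vec|) == axis.

    Illustrative sketch of the edge-alignment step; the full method would
    apply the matching Wigner-D rotation to each node's embedding.
    """
    v = edge_vec / np.linalg.norm(edge_vec)
    c = float(np.dot(v, axis))            # cosine of the rotation angle
    if np.isclose(c, 1.0):                # already aligned
        return np.eye(3)
    if np.isclose(c, -1.0):               # anti-parallel: rotate pi about x
        return np.diag([1.0, -1.0, -1.0])
    k = np.cross(v, axis)                 # rotation axis (unnormalized)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])    # skew-symmetric cross-product matrix
    # Rodrigues' formula, using |k|^2 = 1 - c^2 so (1 - c)/|k|^2 = 1/(1 + c)
    return np.eye(3) + K + K @ K / (1.0 + c)

edge = np.array([1.0, 2.0, 2.0])          # example edge vector
R = align_to_axis(edge)
aligned = R @ (edge / np.linalg.norm(edge))  # maps the edge direction onto the z-axis
```

Once every edge frame is rotated this way, the tensor product couples only spherical-harmonic components with matching order $m$, which is the sparsification that brings the cost from $O(L^6)$ down to $O(L^3)$.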

Saro Passaro, C. Lawrence Zitnick • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Initial Structure to Relaxed Structure (IS2RS) | Open Catalyst OC20 (test) | AFbT | 0.485 | 32 |
| Structure to Energy and Forces (S2EF) | OC20 average across all four splits (val) | Force MAE (meV/Å) | 17.1 | 30 |
| Initial Structure to Relaxed Energy (IS2RE) | OC20 IS2RE (test) | Energy MAE (meV) | 323 | 15 |
| Structure to Energy and Forces (S2EF) | OC20 S2EF (test) | Energy MAE (meV) | 228 | 12 |
| S2EF-Total | OC22 S2EF-Total ID (val) | Energy MAE (meV) | 350 | 6 |
| S2EF-Total | OC22 S2EF-Total OOD (val) | Energy MAE (meV) | 789 | 6 |
