GTA: A Geometry-Aware Attention Mechanism for Multi-View Transformers

About

As transformers are equivariant to the permutation of input tokens, encoding the positional information of tokens is necessary for many tasks. However, since existing positional encoding schemes were initially designed for NLP tasks, their suitability for vision tasks, whose data typically exhibit different structural properties, is questionable. We argue that existing positional encoding schemes are suboptimal for 3D vision tasks, as they do not respect the underlying 3D geometric structure of these tasks. Based on this hypothesis, we propose a geometry-aware attention mechanism that encodes the geometric structure of tokens as a relative transformation determined by the geometric relationship between queries and key-value pairs. By evaluating on multiple novel view synthesis (NVS) datasets in the sparse wide-baseline multi-view setting, we show that our attention, called Geometric Transform Attention (GTA), improves the learning efficiency and performance of state-of-the-art transformer-based NVS models without any additional learned parameters and with only minor computational overhead.
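
To make the mechanism concrete, the sketch below shows one way such geometry-aware attention can be realized: each token's camera pose is mapped to a matrix representation, queries and keys/values are transformed into a shared frame before the dot product, and the aggregated output is mapped back into the query's frame, so the attention weights depend only on the relative transformation between tokens. This is a minimal single-head illustration assuming orthogonal representation matrices (so the inverse is the transpose); the function and tensor names are hypothetical and this is not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def gta_attention(q, k, v, rho_q, rho_kv):
    """Single-head geometry-aware attention sketch (hypothetical API).

    q       : (N, d) query token features
    k, v    : (M, d) key/value token features
    rho_q   : (N, d, d) representation matrices of the query tokens' poses
    rho_kv  : (M, d, d) representation matrices of the key/value tokens' poses

    Assumes orthogonal representations, so the inverse is the transpose.
    """
    # Map queries and keys/values into a shared frame; the attention
    # logits then depend only on the relative transformation between
    # the query pose and the key/value pose.
    q_c = torch.einsum('nji,nj->ni', rho_q, q)    # rho_q^{-1} q
    k_c = torch.einsum('mji,mj->mi', rho_kv, k)   # rho_kv^{-1} k
    v_c = torch.einsum('mji,mj->mi', rho_kv, v)   # rho_kv^{-1} v

    attn = F.softmax(q_c @ k_c.T / q.shape[-1] ** 0.5, dim=-1)  # (N, M)
    out_c = attn @ v_c                            # aggregate in the shared frame
    # Map the aggregated values back into each query token's own frame.
    return torch.einsum('nij,nj->ni', rho_q, out_c)


# Toy usage: random orthogonal matrices stand in for representations that a
# real NVS model would build from (relative) camera transformations.
N, d = 4, 8
rho, _ = torch.linalg.qr(torch.randn(N, d, d))
q, k, v = (torch.randn(N, d) for _ in range(3))
out = gta_attention(q, k, v, rho, rho)            # (N, d)
```

Note that, consistent with the abstract, this formulation adds no learned parameters: the extra cost is only the per-token matrix multiplies applied around the standard attention computation.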

Takeru Miyato, Bernhard Jaeger, Max Welling, Andreas Geiger • 2023

Related benchmarks

| Task                 | Dataset                    | PSNR (dB) | Rank |
|----------------------|----------------------------|-----------|------|
| Novel View Synthesis | Re10K (test)               | 24.38     | 66   |
| Novel View Synthesis | Co3D (test)                | 16.5      | 30   |
| View Synthesis       | ViewBench 30 deg           | 17.33     | 6    |
| View Synthesis       | ViewBench 75 deg           | 15.12     | 6    |
| Novel View Synthesis | Objaverse 80K (test)       | 21.87     | 5    |
| Novel View Synthesis | CO3D unseen categories 29  | 16.99     | 5    |