
Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks

About

Many machine learning tasks such as multiple instance learning, 3D shape recognition, and few-shot image classification are defined on sets of instances. Since solutions to such problems do not depend on the order of elements of the set, models used to address them should be permutation invariant. We present an attention-based neural network module, the Set Transformer, specifically designed to model interactions among elements in the input set. The model consists of an encoder and a decoder, both of which rely on attention mechanisms. In an effort to reduce computational complexity, we introduce an attention scheme inspired by inducing point methods from the sparse Gaussian process literature. It reduces the computation time of self-attention from quadratic to linear in the number of elements in the set. We show that our model is theoretically attractive, and we evaluate it on a range of tasks, demonstrating state-of-the-art performance compared to recent methods for set-structured data.
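The inducing-point scheme summarized above works in two attention passes: a small set of m learned inducing points first attends to the n input elements, and the elements then attend back to that m-point summary, so each pass costs O(nm) rather than O(n²). Below is a minimal NumPy sketch of this idea; it is single-head and omits the learned projections, layer normalization, and feed-forward sublayers of the actual Set Transformer blocks, so treat it as an illustration of the complexity argument, not the paper's implementation.

```python
import numpy as np

def attention(q, k, v):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def isab(x, inducing_points):
    # Inducing-point attention (an ISAB-style block, simplified):
    # 1) m inducing points attend to the n set elements -> O(n*m)
    # 2) the n elements attend back to the m summaries  -> O(n*m)
    h = attention(inducing_points, x, x)   # (m, d): compressed summary of the set
    return attention(x, h, h)              # (n, d): set elements, contextualized

rng = np.random.default_rng(0)
n, m, d = 100, 4, 8                        # n elements, m << n inducing points
x = rng.standard_normal((n, d))
inducing = rng.standard_normal((m, d))     # learned parameters in the real model
out = isab(x, inducing)
print(out.shape)  # (100, 8)
```

Because both attention passes operate row-wise over the set, permuting the input elements simply permutes the output rows, i.e. the block is permutation equivariant, which is what lets a pooling decoder on top yield a permutation-invariant model.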

Juho Lee, Yoonho Lee, Jungtaek Kim, Adam R. Kosiorek, Seungjin Choi, Yee Whye Teh · 2018

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Classification | CIFAR-100 (test) | Accuracy | 40.19 | 3518
Image Classification | CIFAR-10 (test) | Accuracy | 73.42 | 3381
Image Classification | ImageNet (val) | Top-1 Acc | 74.6 | 1206
Shape Classification | ModelNet40 (test) | OA | 90.4 | 255
Point Cloud Classification | ModelNet40 (test) | Accuracy | 90.4 | 229
Graph Classification | MUTAG (10-fold cross-validation) | Accuracy | 87.71 | 219
Graph Classification | PROTEINS (10-fold cross-validation) | Accuracy | 59.62 | 214
Graph Classification | IMDB-B (10-fold cross-validation) | Accuracy | 71.21 | 148
Graph Classification | IMDB-M (10-fold cross-validation) | Accuracy | 50.25 | 84
Short-term wildfire danger forecasting | Mesogeos | F1 Score | 73.1 | 40

Showing 10 of 52 rows.
