
MUFASA: A Multi-Layer Framework for Slot Attention

About

Unsupervised object-centric learning (OCL) decomposes visual scenes into distinct entities. Slot attention is a popular approach that represents individual objects as latent vectors, called slots. Current methods obtain these slot representations solely from the last layer of a pre-trained vision transformer (ViT), ignoring valuable, semantically rich information encoded across the other layers. To better utilize this latent semantic information, we introduce MUFASA, a lightweight plug-and-play framework for slot attention-based approaches to unsupervised object segmentation. Our model computes slot attention across multiple feature layers of the ViT encoder, fully leveraging their semantic richness. We propose a fusion strategy to aggregate slots obtained on multiple layers into a unified object-centric representation. Integrating MUFASA into existing OCL methods improves their segmentation results across multiple datasets, setting a new state of the art while simultaneously improving training convergence with only minor inference overhead.
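To make the idea concrete, here is a minimal numpy sketch of the pipeline described above: a simplified slot attention update is run independently on the token features of several ViT layers, and the resulting per-layer slots are fused into one object-centric representation. The function names (`slot_attention`, `mufasa_multi_layer`) and the fusion-by-averaging step are illustrative assumptions, not the paper's actual implementation, which uses learned modules and a specific fusion strategy.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention(features, slots, n_iters=3):
    """Simplified slot attention on one layer's tokens.

    features: (N, D) token features from a single ViT layer
    slots:    (K, D) initial slot vectors
    """
    d = features.shape[1]
    for _ in range(n_iters):
        # Attention logits between slots (queries) and tokens (keys);
        # the softmax is taken over slots, so slots compete for tokens.
        attn = softmax(slots @ features.T / np.sqrt(d), axis=0)  # (K, N)
        attn = attn / attn.sum(axis=1, keepdims=True)            # weighted mean
        slots = attn @ features                                  # slot update
    return slots

def mufasa_multi_layer(layer_features, init_slots):
    """Run slot attention per layer, then fuse (here: a plain average,
    an assumed stand-in for the paper's fusion strategy)."""
    per_layer = [slot_attention(f, init_slots.copy()) for f in layer_features]
    return np.mean(per_layer, axis=0)

# Toy usage: 3 ViT layers, 16 tokens of dim 8, 4 slots.
rng = np.random.default_rng(0)
layers = [rng.normal(size=(16, 8)) for _ in range(3)]
slots = rng.normal(size=(4, 8))
fused = mufasa_multi_layer(layers, slots)
print(fused.shape)  # (4, 8): one fused representation per slot
```

The key design point this sketch mirrors is that the slot competition happens separately at each layer, so each layer's semantics can bind tokens to slots differently before fusion; in contrast, standard approaches would run the loop only on the final layer's features.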

Sebastian Bock, Leonie Schüßler, Krishnakant Singh, Simone Schaub-Meyer, Stefan Roth • 2026

Related benchmarks

Task                               Dataset   Result        Rank
Unsupervised Object Segmentation   COCO      mBO^i 34.8    26
Unsupervised Object Segmentation   MOVi-C    FG-ARI 67.8   18
Unsupervised Object Segmentation   Pascal    mBO^i 0.513   17
