UW-VOS: A Large-Scale Dataset for Underwater Video Object Segmentation
About
Underwater Video Object Segmentation (VOS) is essential for marine exploration, yet methods developed for open-air scenes degrade significantly underwater due to color distortion, low contrast, and prevalent camouflage. A primary hurdle is the lack of high-quality training data. To bridge this gap, we introduce $\textbf{UW-VOS}$, the first large-scale underwater VOS benchmark, comprising 1,431 video sequences across 409 categories with 309,295 mask annotations, constructed via a semi-automatic data engine with rigorous human verification. We further propose $\textbf{SAM-U}$, a parameter-efficient framework that adapts SAM2 to the underwater domain. By inserting lightweight adapters into the image encoder, SAM-U achieves state-of-the-art performance while training only $\sim$2$\%$ of the model's parameters. Extensive experiments reveal that existing methods suffer an average 13-point $\mathcal{J}\&\mathcal{F}$ drop on UW-VOS, while SAM-U effectively closes this domain gap. A detailed attribute-based analysis further identifies small targets, camouflage, and exit-and-re-entry as critical bottlenecks, providing a roadmap for future research in robust underwater perception.
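The adapter-based adaptation described above can be sketched as follows. The paper does not specify SAM-U's exact adapter architecture or placement, so this is a generic bottleneck-adapter sketch in PyTorch: a frozen backbone with a small trainable residual module whose up-projection is zero-initialized, so the pretrained behavior is preserved at the start of fine-tuning. The `Adapter` class and `trainable_parameter_fraction` helper are illustrative names, not from the released code.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)
        # Zero-init the up-projection so the adapter starts as an identity
        # mapping and does not disturb the pretrained encoder features.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


def trainable_parameter_fraction(model: nn.Module) -> float:
    """Fraction of parameters with requires_grad=True (the ~2% figure above)."""
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return trainable / total


# Usage sketch: freeze a stand-in "encoder block", then append an adapter.
backbone = nn.Linear(256, 256)          # placeholder for one encoder block
for p in backbone.parameters():
    p.requires_grad = False             # backbone stays frozen
model = nn.Sequential(backbone, Adapter(256, bottleneck=16))
```

In practice the adapters would be interleaved with the encoder's transformer blocks rather than appended after a single layer; only the adapter (and typically the segmentation-specific heads) receive gradient updates.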
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Object Tracking | LaSOT | -- | 411 |
| Visual Object Tracking | GOT-10k | -- | 254 |
| Video Object Segmentation | UW-VOS (val) | J&F 87.4 | 10 |
| Video Object Tracking | UW-VOS (val) | -- | 9 |