
Adaptive Human Matting for Dynamic Videos

About

Recent efforts in video matting have focused on eliminating trimap dependency, since trimap annotations are expensive and trimap-based methods are less adaptable for real-time applications. Although the latest trimap-free methods show promising results, their performance often degrades on highly diverse and unstructured videos. We address this limitation with Adaptive Matting for Dynamic Videos, termed AdaM, a framework for simultaneously differentiating foregrounds from backgrounds and capturing alpha matte details of human subjects in the foreground. Two interconnected network designs achieve this goal: (1) an encoder-decoder network that produces alpha mattes and intermediate masks, which guide the transformer in adaptively decoding foregrounds and backgrounds, and (2) a transformer network in which long- and short-term attention combine to retain spatial and temporal contexts, facilitating the decoding of foreground details. We benchmark our method on recently introduced datasets, showing that it notably improves matting realism and temporal coherence in complex real-world videos and achieves new best-in-class generalizability. Further details and examples are available at https://github.com/microsoft/AdaM.
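To make the second design concrete, here is a minimal NumPy sketch of how long- and short-term attention can share one attention operation: the current frame's features act as queries, while keys and values are drawn jointly from a long-term memory bank and a short-term buffer of recent frames. All shapes, names, and the memory sizes below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, keys, values):
    # Scaled dot-product attention: query (N, C), keys/values (M, C) -> (N, C).
    scores = query @ keys.T / np.sqrt(query.shape[-1])
    return softmax(scores, axis=-1) @ values

rng = np.random.default_rng(0)
C = 64                                       # hypothetical feature dimension
query = rng.standard_normal((16, C))         # current-frame features
long_mem = rng.standard_normal((128, C))     # long-term memory bank
short_mem = rng.standard_normal((32, C))     # short-term (recent-frame) buffer

# Combining spatial and temporal context: attend over both memories at once.
keys = np.vstack([long_mem, short_mem])
context = attend(query, keys, keys)
print(context.shape)  # (16, 64)
```

Concatenating the two memories lets a single softmax weigh distant-past and recent-frame evidence against each other per query location, which is one simple way to retain both spatial and temporal context.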

Chung-Ching Lin, Jiang Wang, Kun Luo, Kevin Lin, Linjie Li, Lijuan Wang, Zicheng Liu • 2023

Related benchmarks

Task           Dataset                                Result     Rank
Video Matting  VideoMatte 512 x 288 (test)            MAD 5.3    17
Video Matting  VideoMatte 1920 x 1080                 MAD 4.42   13
Video Matting  VideoMatte 512 x 288                   MAD 5.3    13
Video Matting  VideoMatte 1920 x 1080 (test)          MAD 4.42   9
Video Matting  VideoMatte240K (VM) 512x288 (test)     MAD 5.3    6
Video Matting  VideoMatte240K (VM) 1920x1080 (test)   MAD 4.42   5
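The MAD column above is the Mean Absolute Difference between predicted and ground-truth alpha mattes, which video matting benchmarks commonly report scaled by 1e3 (the scaling here is an assumption based on that convention, not stated on this page). A minimal sketch:

```python
import numpy as np

def mad(pred_alpha, gt_alpha):
    # Mean Absolute Difference between predicted and ground-truth alpha
    # mattes, scaled by 1e3 as benchmark tables commonly report it.
    return float(np.mean(np.abs(pred_alpha - gt_alpha)) * 1e3)

# Toy example: a constant 0.5 error on every pixel of a 4x4 matte.
gt = np.zeros((4, 4))
pred = np.full((4, 4), 0.5)
print(mad(pred, gt))  # 500.0
```

Lower is better: a MAD of 4.42 at 1920 x 1080 corresponds to an average per-pixel alpha error of about 0.0044.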

Other info

Code: https://github.com/microsoft/AdaM