
StarVQA: Space-Time Attention for Video Quality Assessment

About

The attention mechanism is flourishing in computer vision, yet its application to video quality assessment (VQA) has not been reported. Evaluating the quality of in-the-wild videos is challenging because no pristine reference is available and the shooting distortions are unknown. This paper presents a novel space-time attention network for the VQA problem, named StarVQA. StarVQA builds a Transformer by alternately concatenating the divided space-time attention. To adapt the Transformer architecture for training, StarVQA designs a vectorized regression loss that encodes the mean opinion score (MOS) as a probability vector and embeds a special vectorized label token as a learnable variable. To capture the long-range spatiotemporal dependencies of a video sequence, StarVQA encodes the space-time position of each patch into the input of the Transformer. Experiments are conducted on the de facto in-the-wild video datasets, including LIVE-VQC, KoNViD-1k, LSVQ, and LSVQ-1080p. The results demonstrate the superiority of the proposed StarVQA over the state-of-the-art. Code and model will be available at: https://github.com/DVL/StarVQA.
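The abstract does not spell out how the MOS is turned into a probability vector for the vectorized regression loss. A minimal sketch of one common way to do this, using Gaussian soft labels over a fixed grid of anchor scores (the anchor grid and bandwidth are illustrative assumptions, not values from the paper), might look like:

```python
import numpy as np

ANCHORS = np.linspace(1.0, 5.0, 5)  # assumed 5-point MOS scale

def mos_to_prob_vector(mos, anchors=ANCHORS, sigma=0.5):
    """Encode a scalar mean opinion score (MOS) as a probability vector.

    Places a Gaussian centered at `mos` over the anchor scores and
    normalizes the weights to sum to 1, yielding a soft label that a
    vectorized (e.g. cross-entropy-style) regression loss can target.
    """
    weights = np.exp(-0.5 * ((anchors - mos) / sigma) ** 2)
    return weights / weights.sum()

def prob_vector_to_mos(p, anchors=ANCHORS):
    """Decode a predicted probability vector back to a scalar score
    as the expectation over the anchor scores."""
    return float(np.dot(p, anchors))
```

Decoding by expectation makes the encode/decode pair consistent: a probability vector centered on an anchor recovers that score.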

Fengchuang Xing, Yuan-Gen Wang, Hanpin Wang, Leida Li, Guopu Zhu • 2021

Related benchmarks

Task                                   Dataset                                               Metric   Result   Rank
Video Quality Assessment               KoNViD-1k                                             SROCC    0.842    134
Video Quality Assessment               LIVE-VQC                                              SRCC     0.753    64
No-Reference Video Quality Assessment  LIVE-VQC                                              SRCC     0.732    50
No-Reference Video Quality Assessment  KoNViD-1k                                             SRCC     0.812    42
Video Quality Assessment               LIVE-VQC, KoNViD-1k, YouTube-UGC (Weighted Average)   SROCC    0.798    23
No-Reference Video Quality Assessment  LSVQ                                                  PLCC     0.857    13
