
From Isolated Islands to Pangea: Unifying Semantic Space for Human Action Understanding

About

Action understanding has attracted long-term attention and can be formulated as a mapping from the physical space to the semantic space. Typically, researchers have built datasets according to idiosyncratic choices when defining classes, each pushing the envelope of its own benchmark. As a result, datasets are incompatible with one another, like "isolated islands", owing to semantic gaps and differing class granularities, e.g., "do housework" in dataset A versus "wash plate" in dataset B. We argue that a more principled semantic space is needed to concentrate the community's efforts and allow all datasets to be used together in pursuit of generalizable action learning. To this end, we design a structured action semantic space, grounded in a verb taxonomy hierarchy and covering massive actions. By aligning the classes of previous datasets to our semantic space, we gather image, video, skeleton, and MoCap datasets into a unified database under a unified label system, i.e., bridging the "isolated islands" into a "Pangea". Accordingly, we propose a novel model that maps from the physical space to the semantic space to make full use of Pangea. In extensive experiments, our new system shows significant superiority, especially in transfer learning. Our code and data will be made public at https://mvig-rhos.com/pangea.
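To make the class-alignment idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation, and all names are invented for illustration): dataset-specific labels are mapped onto nodes of a shared verb taxonomy, so a fine-grained label like "wash plate" and a coarse one like "do housework" land in one semantic space where their granularity relationship is explicit.

```python
# Hypothetical sketch of aligning dataset labels to a shared verb taxonomy.
# TAXONOMY, ALIGNMENT, and all label names are toy examples, not Pangea's data.

# Toy verb taxonomy: child node -> parent node.
TAXONOMY = {
    "wash": "clean",
    "clean": "do_housework",
    "do_housework": "act",
}

# Per-dataset raw label -> taxonomy node (curated manually in practice).
ALIGNMENT = {
    ("dataset_A", "do housework"): "do_housework",
    ("dataset_B", "wash plate"): "wash",
}

def ancestors(node: str) -> list[str]:
    """Return the node followed by all of its ancestors up the taxonomy."""
    chain = [node]
    while node in TAXONOMY:
        node = TAXONOMY[node]
        chain.append(node)
    return chain

def unified_labels(dataset: str, raw_label: str) -> list[str]:
    """Map a raw dataset label to its aligned node plus all ancestor nodes."""
    node = ALIGNMENT[(dataset, raw_label)]
    return ancestors(node)

# "wash plate" from dataset B is a descendant of "do housework" from
# dataset A, so both datasets now share one label system.
print(unified_labels("dataset_B", "wash plate"))
# -> ['wash', 'clean', 'do_housework', 'act']
```

Training on the full ancestor chain (rather than a single flat class) is one way such a hierarchy lets coarse- and fine-grained datasets supervise the same model.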

Yong-Lu Li, Xiaoqian Wu, Xinpeng Liu, Zehao Wang, Yiming Dou, Yikun Ji, Junyi Zhang, Yixing Li, Jingru Tan, Xudong Lu, Cewu Lu • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Human-Object Interaction Detection | HICO-DET (test) | – | 493 |
| Action Recognition | HMDB51 (test) | – | 249 |
| Video Action Recognition | HAA (test) | Top-1 Accuracy 80.87 | 8 |
| 3D Action Recognition | HAA4D | Top-1 Accuracy 54.1 | 6 |
| 3D Action Recognition | BABEL-120 (test) | Top-1 Accuracy 49.69 | 6 |
| Verb Node Classification | Pangea (test) | Full mAP 34.25 | 3 |
