Social Scene Understanding: End-to-End Multi-Person Action Localization and Collective Activity Recognition
About
We present a unified framework for understanding human social behavior in raw image sequences. Our model jointly detects multiple individuals, infers their social actions, and estimates the collective activity with a single feed-forward pass through a neural network. We propose a single architecture that does not rely on external detection algorithms but is instead trained end-to-end to generate dense proposal maps, which are refined via a novel inference scheme. Temporal consistency is handled by a person-level matching Recurrent Neural Network. The complete model takes a sequence of frames as input and outputs detections along with estimates of individual actions and collective activities. We demonstrate state-of-the-art performance of our algorithm on multiple publicly available benchmarks.
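To make the temporal-consistency step concrete: the paper uses a learned person-level matching RNN, but the underlying association problem can be illustrated with a much simpler stand-in. The sketch below greedily pairs detections across consecutive frames by bounding-box IoU; all function names and the threshold are assumptions for illustration, not the paper's implementation.

```python
# Simplified stand-in for person-level temporal association.
# The paper learns this matching with an RNN; here we greedily
# pair boxes across consecutive frames by IoU (illustrative only).

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def match_detections(prev_boxes, curr_boxes, iou_threshold=0.3):
    """Greedily pair previous-frame boxes with current-frame boxes by IoU.

    Returns (prev_index, curr_index) pairs; unmatched current boxes
    would start new tracks in a full pipeline.
    """
    # Rank all cross-frame pairs by overlap, best first.
    candidates = sorted(
        ((iou(p, c), i, j)
         for i, p in enumerate(prev_boxes)
         for j, c in enumerate(curr_boxes)),
        reverse=True,
    )
    pairs, used_prev, used_curr = [], set(), set()
    for score, i, j in candidates:
        if score < iou_threshold:
            break  # remaining pairs overlap too little to match
        if i not in used_prev and j not in used_curr:
            pairs.append((i, j))
            used_prev.add(i)
            used_curr.add(j)
    return pairs

prev = [(0, 0, 10, 10), (20, 20, 30, 30)]
curr = [(21, 19, 31, 29), (1, 0, 11, 10)]
print(match_detections(prev, curr))  # → [(0, 1), (1, 0)]
```

A learned matcher replaces the hand-set IoU score with features produced by the RNN, which lets the association survive occlusions and appearance changes that pure box overlap cannot handle.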
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Group activity recognition | Volleyball Dataset (VD) (original) | Accuracy | 90.6 | 79 |
| Group activity recognition | Volleyball dataset | Accuracy | 90.6 | 40 |
| Group activity recognition | Volleyball dataset (test) | MCA | 89.9 | 37 |
| Group activity recognition | Collective Activity Dataset | Accuracy | 89.9 | 25 |
| Individual Activity Recognition | Volleyball (test) | Accuracy | 82.4 | 19 |
| Group activity recognition | Volleyball dataset | MCA | 89.9 | 19 |
| Individual Action Recognition | Volleyball dataset | Accuracy | 82.4 | 18 |