PO-GUISE+: Pose and object guided transformer token selection for efficient driver action recognition

About

We address the task of identifying distracted driving from in-car videos using efficient transformers. Although transformer models achieve outstanding performance in human action recognition, their high computational cost limits their deployment on board a vehicle. We introduce PO-GUISE+, a multi-task video transformer that, given an input clip, predicts the distracted-driving action, the driver's pose, and the interacting object. Our enhanced token-selection features are specifically adapted to driver actions by leveraging information about object interaction and the driver's pose. With PO-GUISE+, we significantly reduce the model's computational demands while matching or improving baseline accuracy across various computational budgets. To evaluate performance in real-world scenarios, we also benchmark the model on a Jetson computing platform, demonstrating its effectiveness across different configurations and computational budgets. Our model outperforms current state-of-the-art results on the Drive&Act, 100-Driver, and 3MDAD datasets while being more efficient than existing video-transformer-based methods.
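To make the token-selection idea concrete: the abstract describes keeping only the transformer tokens most relevant to the driver's pose and the interacted object, which reduces compute under a fixed token budget. Below is a minimal, hypothetical sketch of score-based token selection. The function name `select_tokens` and the additive combination of pose and object scores are illustrative assumptions; in the actual model these relevance scores would be learned, not given.

```python
import numpy as np

def select_tokens(tokens, pose_scores, object_scores, budget):
    """Keep the top-`budget` tokens ranked by a combined pose/object
    relevance score. Illustrative sketch only: the paper's scoring
    mechanism is learned inside the transformer, not hand-crafted."""
    scores = pose_scores + object_scores       # combined relevance (assumed additive)
    keep = np.argsort(scores)[::-1][:budget]   # indices of the highest-scoring tokens
    keep = np.sort(keep)                       # restore original (temporal/spatial) order
    return tokens[keep], keep

# Toy example: 8 tokens of dimension 4, with random relevance scores.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((8, 4))
pose = rng.random(8)
obj = rng.random(8)
kept, idx = select_tokens(tokens, pose, obj, budget=4)
print(kept.shape)  # half the tokens survive, so later layers cost roughly half as much
```

Pruning tokens this way shrinks the quadratic self-attention cost in all subsequent layers, which is why token selection is a common lever for fitting video transformers into an embedded compute budget.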

Ricardo Pizarro, Roberto Valle, Rafael Barea, Jose M. Buenaposada, Luis Baumela, Luis Miguel Bergasa • 2024

Related benchmarks

Task                        Dataset                                 Result                 Rank
Action Recognition          Drive&Act front-top NIR camera (test)   Macro Accuracy 71.52   16
Driver Action Recognition   3MDAD averaged 5-fold (test)            Accuracy 93.42         8
Driver Action Recognition   100-Driver (test)                       Accuracy 93.54         6
