Structural Information Guided Multimodal Pre-training for Vehicle-centric Perception
About
Understanding vehicles in images is important for applications such as intelligent transportation and self-driving systems. Existing vehicle-centric works typically pre-train models on large-scale classification datasets and then fine-tune them for specific downstream tasks. However, they neglect the specific characteristics of vehicle perception in different tasks, which can lead to sub-optimal performance. To address this issue, we propose a novel vehicle-centric pre-training framework called VehicleMAE, which incorporates structural information, including the spatial structure from vehicle profile information and the semantic structure from informative high-level natural language descriptions, for effective masked vehicle appearance reconstruction. Specifically, we explicitly extract the sketch lines of vehicles as a form of spatial structure to guide vehicle reconstruction. We further distill more comprehensive knowledge from the large CLIP model, based on the similarity between paired/unpaired vehicle image-text samples, to help achieve a better understanding of vehicles. To pre-train our model, we build a large-scale dataset, termed Autobot1M, which contains about 1M vehicle images and 12,693 text descriptions. Extensive experiments on four vehicle-based downstream tasks fully validate the effectiveness of our VehicleMAE. The source code and pre-trained models will be released at https://github.com/Event-AHU/VehicleMAE.
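The pre-training recipe described above (MAE-style random patch masking plus auxiliary structural objectives) can be sketched as follows. This is a minimal illustration, not the released implementation: the 75% mask ratio, the loss weights, and the names `w_sketch` and `w_distill` are assumptions for exposition only.

```python
import random

def random_mask(num_patches, mask_ratio=0.75, seed=0):
    """Randomly split patch indices into masked and visible sets, MAE-style.

    The mask ratio here is a common default for masked autoencoders,
    not necessarily the value used by VehicleMAE.
    """
    rng = random.Random(seed)
    num_masked = int(num_patches * mask_ratio)
    idx = list(range(num_patches))
    rng.shuffle(idx)
    return sorted(idx[:num_masked]), sorted(idx[num_masked:])

def combined_loss(l_pixel, l_sketch, l_distill, w_sketch=1.0, w_distill=1.0):
    """Combine the three objectives sketched in the abstract:

    - l_pixel:   masked pixel reconstruction loss
    - l_sketch:  sketch-line (spatial structure) reconstruction loss
    - l_distill: CLIP-based image-text similarity distillation loss

    The weighting scheme is a hypothetical placeholder.
    """
    return l_pixel + w_sketch * l_sketch + w_distill * l_distill

# Example: a 14x14 patch grid (196 patches) with 75% of patches masked.
masked, visible = random_mask(196, mask_ratio=0.75)
total = combined_loss(l_pixel=1.0, l_sketch=0.5, l_distill=0.25)
```

During pre-training, only the visible patches would be fed to the encoder, while the decoder reconstructs the masked ones under the combined objective.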
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Vehicle Attribute Recognition | VAR dataset | mA | 92.21 | 10 |
| Vehicle Detection | V-Det | AP (0.5:0.95) | 46.9 | 10 |
| Vehicle Part Segmentation | VPS dataset | mIoU | 73.29 | 10 |
| Vehicle Re-identification | V-Reid dataset | mAP | 85.6 | 10 |
| Vehicle Fine-grained Recognition | VFR dataset | Accuracy | 94.5 | 9 |