
CoPEFT: Fast Adaptation Framework for Multi-Agent Collaborative Perception with Parameter-Efficient Fine-Tuning

About

Multi-agent collaborative perception is expected to significantly improve perception performance by overcoming the limitations of single-agent perception through the exchange of complementary information. However, training a robust collaborative perception model requires training data that covers all possible collaboration scenarios, which is impractical due to intolerable collection and deployment costs. As a result, the trained model is not robust to new traffic scenarios with inconsistent data distributions, which fundamentally restricts its real-world applicability. Existing approaches, such as domain adaptation, mitigate this issue by exposing the model to deployment data during training, but they incur a high training cost that is infeasible for resource-constrained agents. In this paper, we propose CoPEFT, a lightweight framework based on Parameter-Efficient Fine-Tuning, for quickly adapting a trained collaborative perception model to new deployment environments under low-cost conditions. CoPEFT introduces a Collaboration Adapter and an Agent Prompt to perform macro-level and micro-level adaptation, respectively. Specifically, the Collaboration Adapter exploits the inherent knowledge from the training data, together with limited deployment data, to adapt feature maps to the new data distribution. The Agent Prompt further assists the Collaboration Adapter by injecting fine-grained contextual information about the environment. Extensive experiments demonstrate that CoPEFT surpasses existing methods with fewer than 1% trainable parameters, confirming the effectiveness and efficiency of our proposed method.
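The adapter-plus-prompt idea described above can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the bottleneck rank, zero-initialization, and the way the prompt is added to the feature map are illustrative assumptions, chosen so that the module starts as an identity mapping and keeps the trainable-parameter count far below 1% of a typical frozen backbone.

```python
import numpy as np

def bottleneck_adapter(feature_map, W_down, W_up):
    """Residual bottleneck adapter applied channel-wise to a feature map.

    feature_map: (C, H, W) array from the frozen collaborative backbone.
    W_down: (C, r) down-projection, W_up: (r, C) up-projection, with r << C.
    """
    C, H, W = feature_map.shape
    x = feature_map.reshape(C, -1).T           # flatten spatial dims: (H*W, C)
    h = np.maximum(x @ W_down, 0.0)            # ReLU bottleneck: (H*W, r)
    out = x + h @ W_up                         # residual connection back to C dims
    return out.T.reshape(C, H, W)

rng = np.random.default_rng(0)
C, r = 256, 8                                  # channel dim and bottleneck rank (assumed)
feat = rng.standard_normal((C, 32, 32))        # stand-in for a BEV feature map

# Trainable pieces: adapter weights plus a per-agent prompt vector.
W_down = rng.standard_normal((C, r)) * 0.01
W_up = np.zeros((r, C))                        # zero-init: adapter starts as identity
agent_prompt = np.zeros((C, 1, 1))             # learnable context, broadcast over H, W

adapted = bottleneck_adapter(feat + agent_prompt, W_down, W_up)

adapter_params = W_down.size + W_up.size + agent_prompt.size
backbone_params = 7_000_000                    # hypothetical frozen backbone size
print(adapter_params / backbone_params)        # well under the 1% budget
```

With zero-initialized `W_up` and prompt, the adapted features equal the input features at the start of fine-tuning, so adaptation begins from the trained model's behavior and only the small adapter/prompt parameters are updated on deployment data.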

Quanmin Wei, Penglin Dai, Wei Li, Bingyi Liu, Xiao Wu • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Collaborative 3D Object Detection | DAIR-V2X (10% labeled data ratio) | AP@50 | 61 | 10 |
| Collaborative 3D Object Detection | DAIR-V2X (2% labeled data ratio) | AP@50 | 51.7 | 9 |
| Collaborative 3D Object Detection | DAIR-V2X (5% labeled data ratio) | AP@50 | 59.6 | 9 |
| Collaborative 3D Object Detection | DAIR-V2X (20% labeled data ratio) | AP@50 | 62.7 | 9 |
| Collaborative 3D Object Detection | DAIR-V2X (1% labeled data ratio) | AP@50 | 50.7 | 9 |
| 3D Object Detection | DAIR-V2X (adapted from OPV2V) | AP@50 | 55.4 | 7 |
| 3D Object Detection | V2XSet (adapted from OPV2V) | AP@IoU=50% | 93.3 | 7 |
