JigsawComm: Joint Semantic Feature Encoding and Transmission for Communication-Efficient Cooperative Perception
About
Multi-agent cooperative perception (CP) promises to overcome the inherent occlusion and range limitations of single-agent systems in autonomous driving, yet its practicality is severely constrained by limited Vehicle-to-Everything (V2X) communication bandwidth. Existing approaches attempt to improve bandwidth efficiency via compression or heuristic message selection, but neglect the semantic relevance and cross-agent redundancy of the transmitted data. In this paper, we formulate a joint semantic feature encoding and transmission problem that maximizes CP accuracy under a communication budget, and introduce JigsawComm, an end-to-end semantic-aware framework that learns to "assemble the puzzle" of multi-agent feature transmission. JigsawComm uses a regularized encoder to extract *sparse, semantically relevant features*, and a lightweight Feature Utility Estimator (FUE) to predict each agent's per-cell contribution to the downstream perception task. The compact meta utility maps generated by the FUE are exchanged among agents and used to compute an optimal transmission policy under the learned utility proxy. This policy inherently *eliminates cross-agent redundancy*, bounding the feature transmission payload to $\mathcal{O}(1)$ as the number of agents grows, while the meta-information overhead remains negligible. The whole pipeline is trained end-to-end through a differentiable scheduling module, which aligns the FUE with the task objective. On the OPV2V and DAIR-V2X benchmarks, JigsawComm reduces total data volume by 20–500× while matching or exceeding the accuracy of state-of-the-art methods.
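The scheduling idea described above can be sketched as follows. This is a hypothetical, simplified illustration (not the paper's actual implementation): given per-agent, per-cell utility maps from the FUE, each BEV cell is assigned to at most one agent (the one with the highest predicted utility for that cell), and cells are then kept greedily under a global communication budget. The function name, shapes, and the greedy top-k budgeting are all assumptions for illustration.

```python
# Hypothetical sketch of a redundancy-eliminating transmission policy:
# each cell is sent by at most one agent, subject to a global cell budget.
import numpy as np

def schedule_transmissions(utility_maps: np.ndarray, budget: int) -> np.ndarray:
    """utility_maps: (num_agents, H, W) per-cell utilities predicted by the FUE.
    budget: maximum total number of feature cells transmitted across all agents.
    Returns a boolean mask (num_agents, H, W) of cells each agent sends."""
    num_agents, H, W = utility_maps.shape
    # Cross-agent redundancy elimination: per cell, only the best agent may send.
    best_agent = utility_maps.argmax(axis=0)    # (H, W) winning agent per cell
    best_utility = utility_maps.max(axis=0)     # (H, W) winning utility per cell
    # Greedily keep the top-`budget` cells by utility (global budget).
    keep_flat = np.zeros(H * W, dtype=bool)
    keep_flat[np.argsort(best_utility.ravel())[::-1][:budget]] = True
    keep = keep_flat.reshape(H, W)
    # Agent a sends cell (i, j) iff it won that cell and the cell fits the budget.
    mask = np.zeros((num_agents, H, W), dtype=bool)
    mask[best_agent, np.arange(H)[:, None], np.arange(W)[None, :]] = keep
    return mask
```

Because each cell has exactly one winning agent, the total payload is bounded by the number of cells (here, the budget) rather than growing with the number of agents, which is the intuition behind the $\mathcal{O}(1)$ scaling claim.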
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| 3D Object Detection | OPV2V | AP@0.5: 0.92 | 146 |
| 3D Object Detection | DAIR-V2X | mAP: 0.569 | 9 |