OmniGen: Unified Multimodal Sensor Generation for Autonomous Driving
About
Autonomous driving has seen remarkable advancements, largely driven by extensive real-world data collection. However, acquiring diverse and corner-case data remains costly and inefficient. Generative models have emerged as a promising alternative by synthesizing realistic sensor data, yet existing approaches focus primarily on single-modality generation, leading to inefficiency and misalignment across multimodal sensor data. To address these challenges, we propose OmniGen, which generates aligned multimodal sensor data in a unified framework. Our approach leverages a shared Bird's Eye View (BEV) space to unify multimodal features and introduces UAE, a novel generalizable multimodal reconstruction method that jointly decodes LiDAR and multi-view camera data. UAE decodes both sensor modalities through volume rendering, enabling accurate and flexible reconstruction. Furthermore, we incorporate a Diffusion Transformer (DiT) with a ControlNet branch to enable controllable multimodal sensor generation. Comprehensive experiments demonstrate that OmniGen delivers strong unified multimodal sensor generation with cross-modal consistency and flexible sensor adjustment.
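The volume-rendering step mentioned above can be illustrated with the standard alpha-compositing formulation: samples along a ray carry a density and a per-sample feature (e.g. RGB for camera views, range/intensity for LiDAR), and the rendered value is their transmittance-weighted sum. The sketch below is a minimal, illustrative NumPy version; the function and variable names are our own, not identifiers from the paper.

```python
import numpy as np

def volume_render(densities, colors, deltas):
    """Alpha-composite samples along one ray (classic volume rendering).

    densities: (N,)   non-negative density at each sample point
    colors:    (N, C) per-sample feature (RGB for camera, range/intensity for LiDAR)
    deltas:    (N,)   spacing between adjacent samples
    Returns the composited (C,) value and the per-sample weights.
    """
    alphas = 1.0 - np.exp(-densities * deltas)                       # opacity per sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))   # transmittance up to each sample
    weights = alphas * trans                                         # contribution of each sample
    return weights @ colors, weights

# Example: four samples along a single ray
densities = np.array([0.1, 0.5, 2.0, 0.3])
colors = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0],
                   [1.0, 1.0, 1.0]])
deltas = np.full(4, 0.25)
rgb, weights = volume_render(densities, colors, deltas)
```

Because the same ray-marching machinery works for any per-sample feature, a single BEV-derived density field can be decoded into both image pixels and LiDAR returns, which is what makes this decoding route attractive for multimodal consistency.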
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Camera Generation | nuScenes v1.0-trainval (val) | FID | 21.01 | 11 |
| RGB Reconstruction | nuScenes (val) | PSNR | 30.21 | 10 |
| Camera Generation | nuScenes (val) | FID | 21.01 | 10 |
| LiDAR Generation | nuScenes v1.0-trainval (val) | MMD | 2.94 | 6 |
| LiDAR Generation | nuScenes (val) | MMD | 2.94 | 6 |
| Camera Reconstruction | nuScenes (train) | PSNR | 30.45 | 5 |
| LiDAR Reconstruction | nuScenes (train) | Chamfer Distance | 0.634 | 2 |
| LiDAR Reconstruction | nuScenes (val) | Chamfer Distance | 0.793 | 2 |