
DiffPlace: Street View Generation via Place-Controllable Diffusion Model Enhancing Place Recognition

About

Generative models have advanced significantly in realistic image synthesis, with diffusion models excelling in quality and stability. Recent multi-view diffusion models improve 3D-aware street view generation, but they struggle to produce place-aware and background-consistent urban scenes from text, BEV maps, and object bounding boxes. This limits their effectiveness in generating realistic samples for place recognition tasks. To address these challenges, we propose DiffPlace, a novel framework that introduces a place-ID controller to enable place-controllable multi-view image generation. The place-ID controller employs a linear projection, a perceiver transformer, and contrastive learning to map place-ID embeddings into a fixed CLIP space, allowing the model to synthesize images with consistent background buildings while flexibly modifying foreground objects and weather conditions. Extensive experiments, including quantitative comparisons and augmented training evaluations, demonstrate that DiffPlace outperforms existing methods in both generation quality and training support for visual place recognition. Our results highlight the potential of generative models for scene-level, place-aware synthesis and offer a valuable approach to improving place recognition in autonomous driving.
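The abstract describes the place-ID controller as a linear projection plus a perceiver transformer, trained with a contrastive objective so that place-ID embeddings land in a fixed CLIP token space. A minimal sketch of that pipeline is below; all dimensions, module sizes, and the `PlaceIDController` / `contrastive_loss` names are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PlaceIDController(nn.Module):
    """Hypothetical sketch of a place-ID controller: a linear projection
    followed by a perceiver-style block in which learnable latent tokens
    cross-attend to the projected place features, yielding a fixed number
    of tokens in CLIP space (all dims are assumed, not from the paper)."""
    def __init__(self, place_dim=256, clip_dim=768, num_tokens=4, heads=8):
        super().__init__()
        self.proj = nn.Linear(place_dim, clip_dim)      # linear projection
        self.latents = nn.Parameter(torch.randn(num_tokens, clip_dim))
        self.attn = nn.MultiheadAttention(clip_dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.LayerNorm(clip_dim),
                                nn.Linear(clip_dim, clip_dim))

    def forward(self, place_emb):                       # (B, place_dim)
        x = self.proj(place_emb).unsqueeze(1)           # (B, 1, clip_dim)
        q = self.latents.unsqueeze(0).expand(x.size(0), -1, -1)
        out, _ = self.attn(q, x, x)                     # latents attend to place features
        return self.ff(out)                             # (B, num_tokens, clip_dim)

def contrastive_loss(a, b, temperature=0.07):
    """Standard InfoNCE-style loss aligning pooled controller tokens
    for two views of the same place (a generic formulation)."""
    a = F.normalize(a.mean(dim=1), dim=-1)
    b = F.normalize(b.mean(dim=1), dim=-1)
    logits = a @ b.t() / temperature                    # (B, B) similarity matrix
    targets = torch.arange(a.size(0))                   # positives on the diagonal
    return F.cross_entropy(logits, targets)

ctrl = PlaceIDController()
tokens = ctrl(torch.randn(2, 256))
print(tokens.shape)  # torch.Size([2, 4, 768])
```

The fixed token count lets the controller's output be concatenated with CLIP text tokens as diffusion conditioning, which is one plausible way to keep background identity constant while text prompts vary foreground and weather.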

Ji Li, Zhiwei Li, Shihao Li, Zhenjiang Yu, Boyang Wang, Haiou Liu • 2026

Related benchmarks

Task                     | Dataset              | Metric   | Result | Rank
3D Object Detection      | nuScenes (val)       | NDS      | 70.58  | 941
Visual Place Recognition | Pittsburgh30k (test) | Recall@1 | 92.9   | 86
Driving Scene Generation | nuScenes (val)       | FID      | 13.4   | 9
Place Recognition        | nuScenes (val)       | AR@1     | 57.6   | 4
