
S2R-HDR: A Large-Scale Rendered Dataset for HDR Fusion

About

The generalization of learning-based high dynamic range (HDR) fusion is often limited by the availability of training data, as collecting large-scale HDR images of dynamic scenes is both costly and technically challenging. To address this, we propose S2R-HDR, the first large-scale, high-quality synthetic dataset for HDR fusion, comprising 24,000 HDR samples. Using Unreal Engine 5, we design a diverse set of realistic HDR scenes that encompass varied dynamic elements, motion types, high-dynamic-range content, and lighting conditions. Additionally, we develop an efficient rendering pipeline to generate realistic HDR images. To further mitigate the domain gap between synthetic and real-world data, we introduce S2R-Adapter, a domain adaptation method designed to bridge this gap and enhance the generalization ability of models. Experimental results on real-world datasets demonstrate that our approach achieves state-of-the-art HDR fusion performance. Dataset and code are available at https://openimaginglab.github.io/S2R-HDR.

Yujin Wang, Jiarui Wu, Yichen Bian, Fan Zhang, Tianfan Xue • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| HDR Imaging | Challenge123 (test) | PSNR-µ | 43.43 | 17 |
| High Dynamic Range Imaging | SCT 1.0 (test) | PSNR (µ) | 43.33 | 9 |
| HDR Imaging | SCT (test) | PSNR (µ) | 36.28 | 8 |
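The PSNR-µ scores above are, in the HDR fusion literature, typically computed between µ-law tonemapped versions of the predicted and ground-truth HDR images rather than on the linear HDR values directly. A minimal sketch of that metric, assuming images normalized to [0, 1] and the commonly used µ = 5000 (the exact normalization and µ used for these benchmark numbers may differ):

```python
import numpy as np

MU = 5000.0  # mu-law compression parameter commonly used in HDR fusion papers


def mu_tonemap(hdr: np.ndarray, mu: float = MU) -> np.ndarray:
    """Apply mu-law tonemapping to an HDR image normalized to [0, 1]."""
    return np.log1p(mu * hdr) / np.log1p(mu)


def psnr_mu(pred: np.ndarray, target: np.ndarray, mu: float = MU) -> float:
    """PSNR computed in the mu-law tonemapped domain (signal peak = 1.0)."""
    mse = np.mean((mu_tonemap(pred, mu) - mu_tonemap(target, mu)) ** 2)
    return float(10.0 * np.log10(1.0 / mse))
```

Because the µ-law curve expands dark regions, this metric weights errors in shadows more heavily than plain linear-domain PSNR, which is why it is the standard report for HDR fusion.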
