Perceptual Artifacts Localization for Image Synthesis Tasks
About
Recent advances in deep generative models have enabled the creation of photo-realistic images across many tasks. However, generated images often exhibit perceptual artifacts in specific regions that require manual correction. In this study, we present a comprehensive empirical examination of Perceptual Artifacts Localization (PAL) spanning diverse image synthesis tasks. We introduce a novel dataset of 10,168 generated images, each annotated with per-pixel perceptual artifact labels across ten synthesis tasks. A segmentation model trained on our dataset effectively localizes artifacts across a range of tasks, and we show that it adapts to previously unseen models with only minimal training samples. We further propose a zoom-in inpainting pipeline that seamlessly rectifies perceptual artifacts in generated images. Our experiments demonstrate several practical downstream applications, such as automated artifact rectification, non-referential image quality evaluation, and abnormal region detection in images. The dataset and code are released.
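The zoom-in inpainting pipeline first localizes artifacts with the segmentation model, then crops a region around the predicted artifact mask before inpainting it. The snippet below is a minimal sketch of that cropping step, assuming the segmentation model outputs a per-pixel artifact probability map; the function name `artifact_bbox` and its parameters are illustrative, not taken from the released code.

```python
import numpy as np

def artifact_bbox(prob_map, threshold=0.5, margin=8):
    """Binarize a per-pixel artifact probability map and return a
    margin-padded bounding box (y0, y1, x0, x1) around the detected
    artifact region -- the 'zoom-in' crop fed to an inpainting model.
    Returns None if no pixel exceeds the threshold."""
    mask = prob_map >= threshold
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None  # no artifacts detected
    h, w = prob_map.shape
    # Pad the tight box by `margin` pixels, clipped to image bounds,
    # so the inpainter sees surrounding context.
    y0 = max(int(ys.min()) - margin, 0)
    y1 = min(int(ys.max()) + margin + 1, h)
    x0 = max(int(xs.min()) - margin, 0)
    x1 = min(int(xs.max()) + margin + 1, w)
    return (y0, y1, x0, x1)

# Example: a synthetic probability map with one artifact blob.
prob = np.zeros((64, 64))
prob[20:30, 40:50] = 0.9
box = artifact_bbox(prob, margin=4)
```

The crop `image[y0:y1, x0:x1]` would then be passed to the inpainting model and pasted back, which concentrates the inpainter's capacity on the artifact region.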
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Quality Assessment Correlation | RealEstate10K | PLCC | 0.094 | 52 |
| Image Quality Assessment Correlation | Mip-NeRF 360 | PLCC | 0.03 | 39 |
| Artifact Detection | Proposed Dataset RLFN | F1 Score | 0.62 | 28 |
| Artifact Detection | Proposed Dataset SPAN | F1 Score | 0.0062 | 28 |
| Artifact Detection | Proposed Dataset prominent subset | IoU | 4.63 | 28 |
| Image Quality Assessment | Tanks&Temples | PLCC | 0.004 | 26 |
| Image Quality Assessment Correlation | Tanks&Temples | PLCC | 0.003 | 26 |
| Artifact Detection | Proposed Dataset Original HR | F1 Score | 0.62 | 14 |
| Artifact Detection | DeSRA MSE-SR | F1 Score | 0.0054 | 14 |
| Image Quality Assessment | Mip-NeRF 360 | PLCC | 0.024 | 13 |