
Visual Anagrams: Generating Multi-View Optical Illusions with Diffusion Models

About

We address the problem of synthesizing multi-view optical illusions: images that change appearance upon a transformation, such as a flip or rotation. We propose a simple, zero-shot method for obtaining these illusions from off-the-shelf text-to-image diffusion models. During the reverse diffusion process, we estimate the noise from different views of a noisy image, then combine these noise estimates and denoise the image. A theoretical analysis suggests that this method works precisely for views that can be written as orthogonal transformations, of which permutations are a subset. This leads to the idea of a visual anagram: an image that changes appearance under some rearrangement of pixels. This includes rotations and flips, but also more exotic pixel permutations such as a jigsaw rearrangement. Our approach also naturally extends to illusions with more than two views. We provide both qualitative and quantitative results demonstrating the effectiveness and flexibility of our method. Please see our project webpage for additional visualizations and results: https://dangeng.github.io/visual_anagrams/
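The core step described above, estimating noise in each view, mapping the estimates back to the canonical orientation, and averaging, can be sketched as follows. This is a minimal illustration, not the authors' implementation: `predict_noise` is a dummy stand-in for a real text-conditioned diffusion noise predictor, and the prompts and view functions are assumptions for demonstration.

```python
import numpy as np

def predict_noise(image, prompt):
    """Dummy stand-in for a diffusion model's noise predictor.

    A real system would call eps_theta(x_t, t, prompt); here we
    return a deterministic function of the input for illustration.
    """
    scale = 0.1 * (len(prompt) % 7 + 1)
    return scale * image

def combined_noise_estimate(x_t, views, inverse_views, prompts):
    """One combined estimate for a multi-view illusion step.

    For each view v_i with prompt y_i: transform the noisy image,
    estimate noise in that view, then map the estimate back with
    the inverse transformation. Average the aligned estimates.
    """
    estimates = []
    for view, inv, prompt in zip(views, inverse_views, prompts):
        eps = predict_noise(view(x_t), prompt)  # noise in transformed view
        estimates.append(inv(eps))              # map back to canonical view
    return np.mean(estimates, axis=0)

# Example: a two-view illusion using identity and 180-degree rotation
# (an orthogonal transformation, as the analysis requires).
x_t = np.arange(16.0).reshape(4, 4)
views = [lambda a: a, lambda a: np.rot90(a, 2)]
inverse_views = [lambda a: a, lambda a: np.rot90(a, 2)]  # rot180 is self-inverse
prompts = ["an oil painting of a duck", "an oil painting of a rabbit"]
eps_combined = combined_noise_estimate(x_t, views, inverse_views, prompts)
```

In the actual reverse diffusion process, `eps_combined` would then be used in the standard denoising update for `x_t`, so that every view of the final image is simultaneously pushed toward its own prompt.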

Daniel Geng, Inbum Park, Andrew Owens • 2023

Related benchmarks

Task                              Dataset                          Metric     Result   Rank
Ambiguous Image Generation        DeepFloyd-IF                     KID        195.3    4
Multi-view Illusion Generation    CIFAR (test)                     Metric A   0.287    3
Multi-view Illusion Generation    Authors' Custom Dataset (test)   Metric A   27.5     3

Other info

Code
