
SEGA: Instructing Text-to-Image Models using Semantic Guidance

About

Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text alone. However, achieving one-shot generation that aligns with the user's intent is nearly impossible, and small changes to the input prompt often result in very different images. This leaves the user with little semantic control. To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (SEGA) generalizes to any generative architecture that uses classifier-free guidance. More importantly, it allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate SEGA's effectiveness on both latent and pixel-based diffusion models such as Stable Diffusion, Paella, and DeepFloyd-IF across a variety of tasks, providing strong evidence for its versatility, flexibility, and improvements over existing methods.
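For readers who want to try semantic guidance hands-on, below is a minimal sketch built on the SemanticStableDiffusionPipeline shipped with Hugging Face diffusers, which includes an implementation of SEGA. The base model, the concept prompts, and the parameter values are illustrative assumptions only, and exact argument names may differ across diffusers versions.

    import torch
    from diffusers import SemanticStableDiffusionPipeline

    # Load Stable Diffusion through the semantic-guidance (SEGA) pipeline.
    pipe = SemanticStableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Generate from a base prompt while steering the diffusion process along
    # two semantic directions: add "glasses" and suppress "beard".
    out = pipe(
        prompt="a photo of the face of a man",
        guidance_scale=7,
        editing_prompt=["glasses, wearing glasses", "beard, full beard"],
        reverse_editing_direction=[False, True],  # False adds the concept, True removes it
        edit_guidance_scale=[5, 5],     # strength of each semantic direction
        edit_warmup_steps=[10, 10],     # let the composition settle before editing
        edit_threshold=[0.975, 0.975],  # edit only the most concept-relevant latent dims
        edit_momentum_scale=0.3,        # accumulate guidance momentum across steps
    )
    image = out.images[0]

Each entry in editing_prompt defines one semantic direction; reverse_editing_direction flips a direction from adding a concept to removing it, and the threshold restricts the edit to the latent dimensions where the concept is most active, which is what keeps the overall image composition intact.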

Manuel Brack, Felix Friedrich, Dominik Hintersdorf, Lukas Struppek, Patrick Schramowski, Kristian Kersting • 2023

Related benchmarks

Task                   | Dataset                         | Metric             | Result | Rank
Generation Quality     | COCO 1K                         | VQA Score          | 70     | 13
Erase Effectiveness    | I2P sexual 1.0 (test)           | Total Erased Count | 155    | 13
Nudity Concept Erasure | MMA Adversarial Prompts         | Erase Rate (%)     | 65.41  | 13
Nudity Concept Erasure | Ring-a-bell Adversarial Prompts | Erase Rate (%)     | 66.54  | 13
Image Manipulation     | User Study                      | Multi-Conditioning | 80     | 4

Other info

Code
