
P+: Extended Textual Conditioning in Text-to-Image Generation

About

We introduce an Extended Textual Conditioning space in text-to-image models, referred to as $P+$. This space consists of multiple textual conditions, derived from per-layer prompts, each corresponding to a layer of the denoising U-net of the diffusion model. We show that the extended space provides greater disentangling and control over image synthesis. We further introduce Extended Textual Inversion (XTI), where the images are inverted into $P+$, and represented by per-layer tokens. We show that XTI is more expressive and precise, and converges faster than the original Textual Inversion (TI) space. The extended inversion method does not involve any noticeable trade-off between reconstruction and editability and induces more regular inversions. We conduct a series of extensive experiments to analyze and understand the properties of the new space, and to showcase the effectiveness of our method for personalizing text-to-image models. Furthermore, we utilize the unique properties of this space to achieve previously unattainable results in object-style mixing using text-to-image models. Project page: https://prompt-plus.github.io
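The core idea above — feeding a distinct textual condition to each cross-attention layer of the denoising U-net instead of one shared prompt — can be illustrated with a toy sketch. This is not the paper's implementation; the model, names (`ToyUNet`, `cross_attention`), and dimensions are all illustrative assumptions, meant only to show how per-layer prompts ($P+$) differ structurally from a single shared prompt ($P$).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(latent, text_emb):
    """Single-head cross-attention: latent queries attend to text tokens."""
    scores = latent @ text_emb.T / np.sqrt(latent.shape[-1])
    return softmax(scores) @ text_emb

class ToyUNet:
    """Hypothetical stand-in for a denoising U-net with per-layer cross-attention."""
    def __init__(self, n_layers, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = [rng.standard_normal((dim, dim)) / np.sqrt(dim)
                        for _ in range(n_layers)]

    def denoise(self, latent, per_layer_prompts):
        # Classic conditioning (P): the same embedding is repeated for every
        # layer. P+ instead supplies a distinct embedding per layer.
        for w, text_emb in zip(self.weights, per_layer_prompts):
            latent = latent + cross_attention(latent @ w, text_emb)
        return latent

dim, n_layers, n_tokens = 8, 4, 3
rng = np.random.default_rng(1)
unet = ToyUNet(n_layers, dim)
latent = rng.standard_normal((5, dim))

p_single = rng.standard_normal((n_tokens, dim))   # P: one prompt for all layers
p_plus = [rng.standard_normal((n_tokens, dim))    # P+: one prompt per layer
          for _ in range(n_layers)]

out_p = unet.denoise(latent, [p_single] * n_layers)
out_pplus = unet.denoise(latent, p_plus)
```

In this sketch the only change between $P$ and $P+$ is the conditioning interface: the same layer stack consumes either one repeated embedding or a list of per-layer embeddings, which is what gives $P+$ its extra degrees of freedom for disentangling attributes across layers.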

Andrey Voynov, Qinghao Chu, Daniel Cohen-Or, Kfir Aberman • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Multi-Concept Image Generation | 12-concept dataset | Text Alignment | 0.643 | 26 |
| Multi-Concept Image Generation | User Study | Identity Alignment | 2.01 | 4 |
| Multi-Concept Image Generation | Multi-concept generation evaluation set | Accuracy (Avg) | 58.2 | 4 |
| Personalized Text-to-Image Generation | SD 1.5 | Image Fidelity Score | 0.273 | 4 |
| Personalized Text-to-Image Generation | SD base 2.1 | Image Fidelity | 0.238 | 4 |
