
Zero-Shot Personalization of Objects via Textual Inversion

About

Recent advances in text-to-image diffusion models have substantially improved the quality of image customization, enabling the synthesis of highly realistic images. Despite this progress, achieving fast and efficient personalization remains a key challenge, particularly for real-world applications. Existing approaches primarily accelerate customization for human subjects by injecting identity-specific embeddings into diffusion models, but these strategies do not generalize well to arbitrary object categories, which limits their applicability. To address this limitation, we propose a novel framework that employs a learned network to predict object-specific textual inversion embeddings, which are then injected at the UNet timesteps of a diffusion model for text-conditional customization. This design enables rapid, zero-shot personalization of a wide range of objects in a single forward pass, offering both flexibility and scalability. Extensive experiments across multiple tasks and settings demonstrate the effectiveness of our approach, highlighting its potential to support fast, versatile, and inclusive image customization. To the best of our knowledge, this work represents the first attempt to achieve such general-purpose, training-free personalization within diffusion models, paving the way for future research in personalized image generation.
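The core idea described above can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' implementation): a small predictor network maps a frozen image-encoder feature to one textual-inversion token per timestep bucket in a single forward pass, and that token is appended to the prompt embeddings that would feed the UNet's cross-attention at the corresponding denoising step. All dimensions, names, and the bucketing scheme are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 768      # token-embedding width (CLIP/SD-style; assumed)
NUM_TIMESTEPS = 4    # coarse timestep buckets (illustrative choice)
FEAT_DIM = 512       # image-feature width (assumed)

class InversionPredictor:
    """Hypothetical sketch: predict one textual-inversion token per
    timestep bucket from a single image feature, with no per-object tuning."""
    def __init__(self):
        # one linear head per timestep bucket
        self.W = rng.normal(0.0, 0.02, (NUM_TIMESTEPS, FEAT_DIM, EMBED_DIM))
        self.b = np.zeros((NUM_TIMESTEPS, EMBED_DIM))

    def __call__(self, image_feat):
        # (FEAT_DIM,) -> (NUM_TIMESTEPS, EMBED_DIM)
        return np.einsum("f,tfe->te", image_feat, self.W) + self.b

def condition(prompt_tokens, inversion_tokens, t, t_max=1000):
    """Append the timestep-specific inversion token to the prompt
    embeddings entering the UNet's cross-attention at step t."""
    bucket = min(t * NUM_TIMESTEPS // t_max, NUM_TIMESTEPS - 1)
    return np.vstack([prompt_tokens, inversion_tokens[bucket][None, :]])

# Toy usage with stand-in features
predictor = InversionPredictor()
img_feat = rng.normal(size=FEAT_DIM)           # frozen image-encoder output (stand-in)
prompt = rng.normal(size=(77, EMBED_DIM))      # tokenized prompt embeddings (stand-in)
tokens = predictor(img_feat)                   # one forward pass per object
cond = condition(prompt, tokens, t=500)
print(cond.shape)                              # (78, 768): prompt + 1 inversion token
```

Because the predictor is a single feed-forward pass, personalizing a new object costs one network evaluation rather than a per-object optimization loop, which is the source of the speedup the abstract claims.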

Aniket Roy, Maitreya Suin, Rama Chellappa • 2026

Related benchmarks

Task                            Dataset            Metric               Result   Rank
Personalized Image Generation   DreamBooth         CLIP-I Score         77       34
Personalized Image Generation   Custom101 (test)   Generation Time (s)  2        7
Text-to-Image Personalization   Custom101          Text Alignment       59       4
