
Hollowed Net for On-Device Personalization of Text-to-Image Diffusion Models

About

Recent advancements in text-to-image diffusion models have enabled the personalization of these models to generate custom images from textual prompts. This paper presents an efficient LoRA-based personalization approach for on-device subject-driven generation, where pre-trained diffusion models are fine-tuned with user-specific data on resource-constrained devices. Our method, termed Hollowed Net, enhances memory efficiency during fine-tuning by modifying the architecture of a diffusion U-Net to temporarily remove a fraction of its deep layers, creating a hollowed structure. This approach directly addresses on-device memory constraints and substantially reduces GPU memory requirements for training, in contrast to previous methods that primarily focus on minimizing training steps and reducing the number of parameters to update. Additionally, the personalized Hollowed Net can be transferred back into the original U-Net, enabling inference without additional memory overhead. Quantitative and qualitative analyses demonstrate that our approach not only reduces training memory to levels as low as those required for inference but also maintains or improves personalization performance compared to existing methods.
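The core idea described above — temporarily removing a fraction of the deepest U-Net blocks during fine-tuning and transferring the result back for inference — can be illustrated with a minimal sketch. This is a hypothetical toy example, not the authors' implementation: the layer names, the `hollow`/`restore` helpers, and the choice of which blocks count as "deep" are all assumptions for illustration only.

```python
# Toy sketch of the Hollowed Net idea (hypothetical helpers, not the
# paper's code): drop a fraction of the deepest (middle) blocks of a
# U-Net-like layer stack before fine-tuning, then transfer the tuned
# blocks back into the original architecture for inference.

def hollow(layers, fraction=0.5):
    """Split `layers` into a hollowed stack and the removed deep blocks.

    Removes `fraction` of the blocks, centered on the middle of the
    stack (the deepest layers of a U-Net), and returns the kept stack,
    the removed blocks, and the index where they were cut out.
    """
    n = len(layers)
    k = int(n * fraction)            # number of deep blocks to remove
    start = (n - k) // 2             # centered on the deepest layers
    removed = layers[start:start + k]
    kept = layers[:start] + layers[start + k:]
    return kept, removed, start

def restore(kept, removed, start):
    """Transfer the (fine-tuned) hollowed stack back into a full stack."""
    return kept[:start] + removed + kept[start:]

# Stand-in for a diffusion U-Net's block sequence (names are made up).
blocks = ["enc1", "enc2", "mid1", "mid2", "dec2", "dec1"]

kept, removed, start = hollow(blocks, fraction=1/3)
# `kept` is what would stay resident in GPU memory during fine-tuning;
# `restore` reproduces the original layer layout for inference.
print(kept)                          # shallow blocks only
print(restore(kept, removed, start)) # same layout as `blocks`
```

In the actual method, the fine-tuning would update LoRA adapters attached to the remaining blocks, so the restored full network incurs no extra memory at inference time; the sketch above only shows the structural hollow-and-restore step.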

Wonguk Cho, Seokeon Choi, Debasmit Das, Matthias Reisser, Taesup Kim, Sungrack Yun, Fatih Porikli • 2024

Related benchmarks

Task                           Dataset                         Result                        Rank
Personalized Image Generation  CustomConcept101                DINO Score: 0.6459            16
Text-to-Image Generation       FLUX                            Training Memory (GiB): 12.23  11
Efficient Fine-tuning          SANA                            Training Memory (GiB): 4.26   11
Personalization                Personalization Prompts (SANA)  DINO Score: 0.7208            11
Personalization                Personalization Prompts (FLUX)  DINO Score: 0.4899            11
