Text2Shape: Generating Shapes from Natural Language by Learning Joint Embeddings

About

We present a method for generating colored 3D shapes from natural language. To this end, we first learn joint embeddings of freeform text descriptions and colored 3D shapes. Our model combines and extends learning by association and metric learning approaches to learn implicit cross-modal connections, and produces a joint representation that captures the many-to-many relations between language and physical properties of 3D shapes such as color and shape. To evaluate our approach, we collect a large dataset of natural language descriptions for physical 3D objects in the ShapeNet dataset. With this learned joint embedding we demonstrate text-to-shape retrieval that outperforms baseline approaches. Using our embeddings with a novel conditional Wasserstein GAN framework, we generate colored 3D shapes from text. Our method is the first to connect natural language text with realistic 3D objects exhibiting rich variations in color, texture, and shape detail. See video at https://youtu.be/zraPvRdl13Q
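Once such a joint embedding is learned, text-to-shape retrieval reduces to nearest-neighbor search in the shared space, scored with metrics like RR@1 (recall at rank 1). Below is a minimal sketch of that evaluation step, assuming the text and shape embeddings have already been trained; the function names and toy data are illustrative, not from the paper's code.

```python
import numpy as np

def cosine_sim(a, b):
    # Pairwise cosine similarity between rows of a and rows of b.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def recall_at_k(text_emb, shape_emb, gt, k=1):
    # Fraction of text queries whose ground-truth shape appears
    # among the k most similar shapes in the joint space.
    sims = cosine_sim(text_emb, shape_emb)
    topk = np.argsort(-sims, axis=1)[:, :k]
    return float(np.mean([gt[i] in topk[i] for i in range(len(gt))]))

# Toy joint space: text i describes shape i; text embeddings are
# small perturbations of the matching shape embeddings.
rng = np.random.default_rng(0)
shapes = rng.normal(size=(3, 8))
texts = shapes + 0.05 * rng.normal(size=(3, 8))
print(recall_at_k(texts, shapes, gt=[0, 1, 2], k=1))  # → 1.0
```

In the toy example the perturbation is tiny, so every query retrieves its matching shape and RR@1 is 1.0; on real free-form descriptions the many-to-many text-shape relations make this much harder, as the benchmark numbers below reflect.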

Kevin Chen, Christopher B. Choy, Manolis Savva, Angel X. Chang, Thomas Funkhouser, Silvio Savarese • 2018

Related benchmarks

Task | Dataset | Metric | Result | Rank
Text-to-Shape retrieval | Text2Shape (test) | RR@1 | 0.4 | 15
Shape-to-Text retrieval | Text2Shape (test) | RR@1 | 0.83 | 8
Language-guided 3D shape generation | ShapeNet (test) | P(Tr) | 0.15 | 7
Text-conditioned 3D shape generation | Text2Shape (original) | CLIP-S | 16.29 | 4
Text-guided 3D Shape Generation | ShapeNet | IoU | 9.64 | 2
