
FoldToken: Learning Protein Language via Vector Quantization and Beyond

About

Is there a foreign language describing protein sequences and structures simultaneously? Protein structures, represented by continuous 3D points, have long posed a challenge due to the contrasting modeling paradigms of discrete sequences. We introduce FoldTokenizer to represent protein sequence-structure as discrete symbols. This innovative approach involves projecting residue types and structures into a discrete space, guided by a reconstruction loss for information preservation. We refer to the learned discrete symbols as FoldToken, and the sequence of FoldTokens serves as a new protein language, transforming the protein sequence-structure into a unified modality. We apply the created protein language on general backbone inpainting and antibody design tasks, building the first GPT-style model (FoldGPT) for sequence-structure co-generation with promising results. Key to our success is the substantial enhancement of the vector quantization module, Soft Conditional Vector Quantization (SoftCVQ).
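The core idea behind SoftCVQ is to quantize continuous latents against a learned codebook with soft (probabilistic) assignments rather than a hard nearest-neighbor lookup. The sketch below illustrates generic soft vector quantization only; it is not the paper's exact SoftCVQ, and the temperature parameter `tau` and function names are illustrative assumptions:

```python
import numpy as np

def soft_quantize(z, codebook, tau=1.0):
    """Softly quantize latents against a codebook (illustrative sketch).

    z:        (n, d) continuous latent vectors
    codebook: (k, d) code vectors
    tau:      temperature; tau -> 0 recovers hard (nearest-neighbor) VQ
    """
    # Squared Euclidean distance from each latent to every code vector.
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (n, k)
    # Soft assignment: softmax over negative distances.
    logits = -d2 / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)
    # Quantized latent is a convex combination of code vectors.
    return w @ codebook, w

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))
z = rng.normal(size=(2, 4))
zq, w = soft_quantize(z, codebook, tau=0.1)
```

Because the assignment is a differentiable softmax rather than an argmax, gradients flow to all codebook entries during training, which is one standard remedy for the codebook-collapse problems of hard VQ.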

Zhangyang Gao, Cheng Tan, Jue Wang, Yufei Huang, Lirong Wu, Stan Z. Li • 2024

Related benchmarks

Task                   | Dataset             | Result (RMSD) | Rank
Protein Reconstruction | CATH (512 samples)  | 1.3           | 8
Protein Reconstruction | AFDB (512 samples)  | 2.16          | 8
Protein Reconstruction | CAMEO (512 samples) | 2.54          | 8
