
SaDiT: Efficient Protein Backbone Design via Latent Structural Tokenization and Diffusion Transformers

About

Generative models for de novo protein backbone design have achieved remarkable success in creating novel protein structures. However, these diffusion-based approaches remain computationally intensive and slower than desired for large-scale structural exploration. While recent efforts like Proteina have introduced flow matching to improve sampling efficiency, the potential of tokenization for structural compression and acceleration remains largely unexplored in the protein domain. In this work, we present SaDiT, a novel framework that accelerates protein backbone generation by integrating SaProt tokenization with a Diffusion Transformer (DiT) architecture. SaDiT leverages a discrete latent space to represent protein geometry, significantly reducing the complexity of the generation process while maintaining SE(3) equivariance in theory. To further enhance efficiency, we introduce an IPA Token Cache mechanism that optimizes the Invariant Point Attention (IPA) layers by reusing computed token states during iterative sampling. Experimental results demonstrate that SaDiT outperforms state-of-the-art models, including RFDiffusion and Proteina, in both computational speed and structural viability. We evaluate our model on unconditional backbone generation and fold-class conditional generation tasks, where SaDiT shows superior ability to capture complex topological features with high designability.
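The caching idea described above can be sketched as follows. This is a minimal illustration, not SaDiT's actual implementation: the class name, keys, and the stand-in layer function are all hypothetical, and a real IPA layer would operate on residue frames and coordinates rather than string keys. The point is only the mechanism: during iterative sampling, token states whose inputs are unchanged between steps are looked up rather than recomputed.

```python
# Hypothetical sketch of an "IPA Token Cache" for iterative sampling.
# All names here are illustrative assumptions, not the paper's API.

class IPATokenCache:
    """Memoizes per-token states across diffusion sampling steps."""

    def __init__(self):
        self._cache = {}   # token key -> computed state
        self.hits = 0
        self.misses = 0

    def get_or_compute(self, token_key, compute_fn):
        # Reuse the cached state if this token was already processed;
        # otherwise run the (expensive) computation once and store it.
        if token_key in self._cache:
            self.hits += 1
        else:
            self.misses += 1
            self._cache[token_key] = compute_fn(token_key)
        return self._cache[token_key]


def expensive_ipa_layer(token_key):
    # Stand-in for an Invariant Point Attention forward pass.
    return hash(token_key) % 997


cache = IPATokenCache()
tokens = ["A", "B", "C"]  # discrete structural tokens, fixed across steps

# Two sampling steps over the same tokens: the second step is all cache hits.
for step in range(2):
    states = [cache.get_or_compute(t, expensive_ipa_layer) for t in tokens]

print(cache.hits, cache.misses)  # -> 3 3
```

Because SaDiT samples in a discrete latent space, many token states recur across denoising iterations, which is what makes this kind of reuse pay off.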

Shentong Mo, Lanqing Li · 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Unconditional protein backbone generation | PDB & AFDB unconditional generation (test) | Designability 0.995 | 12 |
| Protein backbone generation | Protein Backbone Generation Sampling Quality | scTM 89 | 6 |
| Fold class-conditional protein backbone generation | CATH fold class-conditional | Designability 0.932 | 3 |
