
Reflection Pretraining Enables Token-Level Self-Correction in Biological Sequence Models

About

Chain-of-Thought (CoT) prompting has significantly advanced task-solving capabilities in natural language processing with large language models. Unlike standard prompting, CoT encourages the model to generate intermediate reasoning steps (non-answer tokens) that guide the model toward more accurate final outputs. These intermediate steps enable more complex reasoning processes such as error correction, memory management, future planning, and self-reflection. However, applying CoT to non-natural-language domains, such as protein and RNA language models, is not yet possible, primarily due to the limited expressiveness of their token spaces (e.g., amino acid tokens). In this work, we propose and define the concept of language expressiveness: the ability of a given language, using its tokens and grammar, to encode information. We show that the limited expressiveness of protein language severely restricts the applicability of CoT-style reasoning. To overcome this, we introduce reflection pretraining, for the first time in a biological sequence model, which enables the model to engage in intermediate reasoning through the generation of auxiliary "thinking tokens" beyond simple answer tokens. Theoretically, we demonstrate that our augmented token set significantly enhances biological language expressiveness, thereby improving the overall reasoning capacity of the model. Experimentally, our pretraining approach teaches protein models to self-correct and leads to substantial performance gains compared to standard pretraining.
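The core idea of reflection pretraining — augmenting the amino-acid vocabulary with auxiliary "thinking" tokens so the model can flag and correct its own errors mid-sequence — can be illustrated with a minimal sketch. The token names (`<reflect>`, `<fix>`) and the data-construction scheme below are illustrative assumptions, not the paper's actual tokens or pipeline:

```python
# Hypothetical sketch of reflection-style training data construction.
# We augment the 20 amino-acid tokens with two auxiliary tokens
# (names are assumptions for illustration) and build training targets
# that interleave deliberate errors with flag-and-correct segments,
# so the model learns token-level self-correction.

import random

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")
REFLECT = "<reflect>"   # marks the preceding residue as a suspected error
FIX = "<fix>"           # signals that the corrected residue follows

vocab = AMINO_ACIDS + [REFLECT, FIX]  # expanded, more expressive token set

def make_reflection_example(seq, error_rate=0.2, rng=random):
    """Turn a ground-truth sequence into a training target containing
    injected errors followed by reflection-and-correction tokens."""
    out = []
    for aa in seq:
        if rng.random() < error_rate:
            wrong = rng.choice([a for a in AMINO_ACIDS if a != aa])
            out += [wrong, REFLECT, FIX, aa]  # emit error, flag it, correct it
        else:
            out.append(aa)
    return out
```

With `error_rate=0.0` the target is the plain sequence; with higher rates, the model is supervised to produce the reflection tokens and the corrected residue whenever it has emitted a wrong one, which is one plausible way the auxiliary tokens can expand the language's expressiveness.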

Xiang Zhang, Jiaqi Wei, Yuejin Yang, Zijie Qiu, Yuhan Chen, Zhiqiang Gao, Muhammad Abdul-Mageed, Laks V. S. Lakshmanan, Wanli Ouyang, Chenyu You, Siqi Sun • 2025

Related benchmarks

| Task                       | Dataset          | Metric               | Result | Rank |
|----------------------------|------------------|----------------------|--------|------|
| De novo peptide sequencing | 9-Species (test) | AA Precision (Mouse) | 0.805  | 9    |
| De novo peptide sequencing | Mouse            | AA Precision         | 0.792  | 7    |
| De novo peptide sequencing | Human            | AA Precision         | 0.752  | 7    |
| De novo peptide sequencing | Yeast            | AA Precision         | 0.809  | 7    |
| De novo peptide sequencing | M.mazei          | AA Precision         | 0.790  | 7    |
| De novo peptide sequencing | Honeybee         | AA Precision         | 0.744  | 7    |
| De novo peptide sequencing | TOMATO           | AA Precision         | 0.822  | 7    |
| De novo peptide sequencing | R.bean           | AA Precision         | 0.817  | 7    |
| De novo peptide sequencing | Bacillus         | AA Precision         | 0.826  | 7    |
| De novo peptide sequencing | C.bacteria       | AA Precision         | 0.737  | 7    |
