
How to make the most of your masked language model for protein engineering

About

A plethora of protein language models have been released in recent years, yet comparatively little work has addressed how best to sample from them to optimize desired biological properties. We fill this gap by proposing a flexible, effective sampling method for masked language models (MLMs), and by systematically evaluating models and methods both in silico and in vitro on real antibody therapeutics campaigns. First, we propose sampling with stochastic beam search, exploiting the fact that MLMs are remarkably efficient at evaluating the pseudo-perplexity of the entire 1-edit neighborhood of a sequence. Reframing generation in terms of whole-sequence evaluation enables flexible guidance with multiple optimization objectives. Second, we report results from our extensive in vitro head-to-head evaluation in the antibody engineering setting. This reveals that the choice of sampling method is at least as impactful as the choice of model, motivating future research into this under-explored area.
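The core idea in the abstract, stochastic beam search over the 1-edit neighborhood of a sequence scored by pseudo-perplexity, can be sketched in a few lines. The sketch below is illustrative only: `pseudo_perplexity` is a hypothetical stand-in for an MLM scorer (the paper's actual models and objectives are not reproduced here), and the stochasticity is implemented via the Gumbel-max trick, one common way to make beam search stochastic; the authors' exact formulation may differ.

```python
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def pseudo_perplexity(seq):
    # Hypothetical stand-in for an MLM's pseudo-perplexity (lower is better).
    # A real implementation would mask each position and score it with the
    # model; this toy objective simply rewards alanine content.
    return 1.0 - seq.count("A") / len(seq)

def one_edit_neighborhood(seq):
    """Yield every sequence exactly one substitution away from `seq`."""
    for i in range(len(seq)):
        for aa in AMINO_ACIDS:
            if aa != seq[i]:
                yield seq[:i] + aa + seq[i + 1:]

def stochastic_beam_search(seed, beam_width=4, steps=3,
                           temperature=0.01, rng_seed=0):
    rng = random.Random(rng_seed)

    def gumbel():
        # Standard Gumbel noise: -log(-log(U)), U ~ Uniform(0, 1).
        return -math.log(-math.log(rng.random()))

    beam = [seed]
    for _ in range(steps):
        # Score the entire 1-edit neighborhood of every beam member.
        candidates = {s for parent in beam for s in one_edit_neighborhood(parent)}
        # Gumbel-max trick: perturb each (negated, tempered) score with
        # Gumbel noise, then keep the top-k, yielding a stochastic beam.
        beam = sorted(
            candidates,
            key=lambda s: -pseudo_perplexity(s) / temperature + gumbel(),
            reverse=True,
        )[:beam_width]
    return min(beam, key=pseudo_perplexity)

best = stochastic_beam_search("MKTV" * 3)
print(best, pseudo_perplexity(best))
```

Because candidates are whole sequences rather than token-by-token continuations, the `key` function can combine several objectives (e.g. a weighted sum of pseudo-perplexity and a property predictor), which is what makes this framing amenable to multi-objective guidance.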

Calvin McCarter, Nick Bhattacharya, Sebastian W. Ober, Hunter Elliott • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Protein sequence preference optimization | PbrR | Hypervolume: 14.08 | 12 |
| Protein sequence preference optimization | α-Amylase | Hypervolume: 125.3 | 12 |
| Protein sequence preference optimization | DHFR | Hypervolume: 50.8 | 12 |
