
PerturBench: Benchmarking Machine Learning Models for Cellular Perturbation Analysis

About

We introduce a comprehensive framework for modeling single-cell transcriptomic responses to perturbations, aimed at standardizing benchmarking in this rapidly evolving field. Our approach includes a modular and user-friendly model development and evaluation platform, a collection of diverse perturbational datasets, and a set of metrics designed to fairly compare models and dissect their performance. Through extensive evaluation of both published and baseline models across diverse datasets, we highlight the limitations of widely used models, such as mode collapse. We also demonstrate the importance of rank metrics, which complement traditional model fit measures such as RMSE for validating model effectiveness. Notably, our results show that while no single model architecture clearly outperforms others, simpler architectures are generally competitive and scale well with larger datasets. Overall, this benchmarking exercise sets new standards for model evaluation, supports robust model development, and furthers the use of these models to simulate genetic and chemical screens for therapeutic discovery.
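As a minimal sketch of why rank-style or directional metrics complement fit measures such as RMSE, consider a mode-collapsed model that predicts the same mean response for every perturbation. The metric definitions below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: true log fold-changes for 5 perturbations x 50 genes.
true_lfc = rng.normal(0.0, 1.0, size=(5, 50))

# A mode-collapsed model returns the same averaged profile for every perturbation.
collapsed_pred = np.tile(true_lfc.mean(axis=0), (5, 1))

def rmse(pred, true):
    # Root-mean-squared error across all perturbations and genes.
    return float(np.sqrt(np.mean((pred - true) ** 2)))

def cosine_lfc(pred, true):
    # Mean per-perturbation cosine similarity between predicted and true logFC.
    num = (pred * true).sum(axis=1)
    denom = np.linalg.norm(pred, axis=1) * np.linalg.norm(true, axis=1)
    return float(np.mean(num / denom))

# The collapsed model keeps RMSE moderate, but its cosine similarity is far
# below the perfect score of 1.0, exposing the collapse.
print(rmse(collapsed_pred, true_lfc))
print(cosine_lfc(collapsed_pred, true_lfc))
```

The collapsed predictor looks acceptable under RMSE alone because the mean profile minimizes squared error on average, while the per-perturbation cosine score reveals that it cannot distinguish one perturbation from another.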

Yan Wu, Esther Wershof, Sebastian M Schmon, Marcel Nassar, Błażej Osiński, Ridvan Eksi, Zichao Yan, Rory Stark, Kun Zhang, Thore Graepel • 2024

Related benchmarks

Task                             Dataset             Metric        Result  Rank
Perturbation response modelling  Norman19            Cosine logFC  0.79    20
Perturbation response modelling  Srivatsan20         Cosine logFC  0.45    20
Perturbation response modelling  Jiang24             Cosine logFC  0.64    19
Combo prediction                 Norman19            MMD GEX       6.7     14
Covariate transfer task          Srivatsan20 (test)  MMD GEX       2.2     14
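The MMD GEX entries above compare the distribution of predicted and observed expression profiles with a maximum mean discrepancy. A minimal squared-MMD sketch with an RBF kernel follows; the kernel choice, bandwidth, and data shapes are assumptions for illustration, not the benchmark's exact configuration:

```python
import numpy as np

def rbf_mmd2(x, y, sigma=1.0):
    """Squared MMD between samples x (n, d) and y (m, d) with an RBF kernel,
    e.g. predicted vs observed gene-expression profiles for one perturbation."""
    def kernel(a, b):
        # Pairwise squared Euclidean distances, then a Gaussian kernel.
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    return float(kernel(x, x).mean() + kernel(y, y).mean()
                 - 2.0 * kernel(x, y).mean())

rng = np.random.default_rng(0)
observed = rng.normal(0.0, 1.0, size=(100, 10))
matched = rng.normal(0.0, 1.0, size=(100, 10))   # same distribution
shifted = rng.normal(1.5, 1.0, size=(100, 10))   # systematically shifted

sigma = np.sqrt(10)  # bandwidth on the order of sqrt(n_genes); a heuristic
print(rbf_mmd2(observed, matched, sigma))  # near zero
print(rbf_mmd2(observed, shifted, sigma))  # clearly larger
```

Lower MMD means the predicted expression distribution is closer to the observed one; in practice the bandwidth is often set by a median-distance heuristic rather than fixed by hand.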
