
BidirLM: From Text to Omnimodal Bidirectional Encoders by Adapting and Composing Causal LLMs

About

Transforming causal generative language models into bidirectional encoders offers a powerful alternative to BERT-style architectures. However, current approaches remain limited: they lack consensus on optimal training objectives, suffer from catastrophic forgetting at scale, and fail to flexibly integrate the vast ecosystem of specialized generative models. In this work, through systematic ablations on the Gemma3 and Qwen3 families, we identify the key factors driving successful adaptation, highlighting the critical role of an often-omitted prior masking phase. To scale this process without original pre-training data, we introduce a dual strategy combining linear weight merging with a lightweight multi-domain data mixture that mitigates catastrophic forgetting. Finally, we augment our encoders by merging them with specialized causal models, seamlessly transferring modality- and domain-specific capabilities. This open-source recipe, designed for any causal decoder LLM, yields BidirLM, a family of five encoders that outperform alternatives on text, vision, and audio representation benchmarks.
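To make the recipe concrete, here is a minimal PyTorch sketch of its three codeable ingredients: dropping the causal attention mask, an MLM-style objective for the prior masking phase, and linear weight merging. Everything below is an illustrative assumption rather than the paper's released implementation; the 15% masking rate, the function names mask_tokens and linear_merge, and the merge coefficient alpha are placeholders.

```python
import torch
import torch.nn.functional as F

# (1) Bidirectional adaptation: a causal decoder runs self-attention with
# is_causal=True; dropping that flag lets every token attend in both
# directions, turning the decoder into an encoder.
q = k = v = torch.randn(1, 8, 16, 64)  # (batch, heads, seq_len, head_dim)
bidirectional_out = F.scaled_dot_product_attention(q, k, v, is_causal=False)

# (2) Prior masking phase, sketched as standard MLM-style corruption
# (hypothetical 15% rate): replace a random subset of tokens with [MASK]
# and compute the loss only on the masked positions.
def mask_tokens(input_ids, mask_token_id, mask_prob=0.15):
    labels = input_ids.clone()
    masked = torch.bernoulli(torch.full(input_ids.shape, mask_prob)).bool()
    labels[~masked] = -100             # ignore index for cross-entropy
    corrupted = input_ids.clone()
    corrupted[masked] = mask_token_id
    return corrupted, labels

# (3) Linear weight merging: interpolate two checkpoints with identical
# architecture, e.g. the adapted encoder and a specialized causal model.
def linear_merge(state_a, state_b, alpha=0.5):
    return {name: alpha * w + (1.0 - alpha) * state_b[name]
            for name, w in state_a.items()}

# Usage sketch with dummy token ids:
ids = torch.randint(5, 1000, (2, 16))
corrupted, labels = mask_tokens(ids, mask_token_id=4)
```

Linear interpolation only applies when the two checkpoints share the same architecture and parameter names; the paper's complementary data-side mitigation, the lightweight multi-domain mixture, has no code analogue here.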

Nicolas Boizard, Théo Deschamps-Berger, Hippolyte Gisserot-Boukhlef, Céline Hudelot, Pierre Colombo • 2026

Related benchmarks

Task                               Dataset                       Result                       Rank
Text Embedding                     MTEB Multilingual V2 (test)   Bitext Mining Score: 72.2    12
Multimodal Information Embedding   MIEB lite                     Mean Rank: 2                 9
Audio Representation Evaluation    MAEB beta                     Any2Any Retrieval: 32.8      9
