BidirLM: From Text to Omnimodal Bidirectional Encoders by Adapting and Composing Causal LLMs
Abstract
Adapting causal generative language models into bidirectional encoders, guided by systematic ablations and novel weight-merging strategies, yields superior performance on text, vision, and audio representation benchmarks.
Transforming causal generative language models into bidirectional encoders offers a powerful alternative to BERT-style architectures. However, current approaches remain limited: they lack consensus on optimal training objectives, suffer from catastrophic forgetting at scale, and fail to flexibly integrate the vast ecosystem of specialized generative models. In this work, through systematic ablations on the Gemma3 and Qwen3 families, we identify the key factors driving successful adaptation, highlighting the critical role of an often-omitted prior masking phase. To scale this process without original pre-training data, we introduce a dual strategy combining linear weight merging with a lightweight multi-domain data mixture that mitigates catastrophic forgetting. Finally, we augment our encoders by merging them with specialized causal models, seamlessly transferring modality- and domain-specific capabilities. This open-source recipe, designed for any causal decoder LLM, yields BidirLM, a family of five encoders that outperform alternatives on text, vision, and audio representation benchmarks.
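The core adaptation step, turning a causal decoder into a bidirectional encoder, amounts to dropping the causal attention mask and training with a masked-prediction objective (the "prior masking phase" the abstract highlights). Below is a minimal PyTorch sketch of both pieces; the shapes, masking rate, and vocabulary values are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

# Toy dimensions: batch=2, heads=4, sequence length=8, head dim=16.
q = k = v = torch.randn(2, 4, 8, 16)

# Decoder-style (causal) attention: each token attends only to its past.
causal_out = F.scaled_dot_product_attention(q, k, v, is_causal=True)

# Encoder-style (bidirectional) attention: same weights, full context.
bidir_out = F.scaled_dot_product_attention(q, k, v, is_causal=False)

# Masking phase (illustrative): replace a random 15% of tokens with a
# [MASK] id and compute cross-entropy only on the masked positions.
vocab_size, mask_id = 32000, 3                      # hypothetical values
input_ids = torch.randint(4, vocab_size, (2, 8))
labels = input_ids.clone()
mask = torch.rand_like(input_ids, dtype=torch.float) < 0.15
input_ids[mask] = mask_id
labels[~mask] = -100                                # ignored by the loss
logits = torch.randn(2, 8, vocab_size, requires_grad=True)  # stand-in for model output
loss = F.cross_entropy(logits.view(-1, vocab_size), labels.view(-1),
                       ignore_index=-100)
```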
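Linear weight merging, used in the paper both to counter catastrophic forgetting and to compose specialized causal models, interpolates the parameters of architecturally identical checkpoints. A sketch under that assumption; the helper name and the `alpha` coefficient are hypothetical.

```python
import torch

def linear_merge(state_a: dict, state_b: dict, alpha: float = 0.5) -> dict:
    """Hypothetical helper: elementwise interpolation of two state dicts.

    Assumes both checkpoints share the same architecture and parameter names.
    """
    assert state_a.keys() == state_b.keys(), "checkpoints must match"
    return {name: alpha * state_a[name] + (1.0 - alpha) * state_b[name]
            for name in state_a}

# Usage (illustrative): merge the adapted encoder with the original causal
# checkpoint to retain its knowledge without the original pre-training data.
# merged = linear_merge(encoder.state_dict(), base_llm.state_dict(), alpha=0.7)
```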
Community
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- How Do Decoder-Only LLMs Perceive Users? Rethinking Attention Masking for User Representation Learning (2026)
- MrBERT: Modern Multilingual Encoders via Vocabulary, Domain, and Dimensional Adaptation (2026)
- VidVec: Unlocking Video MLLM Embeddings for Video-Text Retrieval (2026)
- LinguDistill: Recovering Linguistic Ability in Vision-Language Models via Selective Cross-Modal Distillation (2026)
- CREM: Compression-Driven Representation Enhancement for Multimodal Retrieval and Comprehension (2026)
- Unified Vision-Language Modeling via Concept Space Alignment (2026)
- HyperTokens: Controlling Token Dynamics for Continual Video-Language Understanding (2026)
