
Generative Representational Instruction Tuning

About

All text-based language problems can be reduced to either generation or embedding. Current models only perform well at one or the other. We introduce generative representational instruction tuning (GRIT) whereby a large language model is trained to handle both generative and embedding tasks by distinguishing between them through instructions. Compared to other open models, our resulting GritLM 7B sets a new state of the art on the Massive Text Embedding Benchmark (MTEB) and outperforms all models up to its size on a range of generative tasks. By scaling up further, GritLM 8x7B outperforms all open generative language models that we tried while still being among the best embedding models. Notably, we find that GRIT matches training on only generative or embedding data, thus we can unify both at no performance loss. Among other benefits, the unification via GRIT speeds up Retrieval-Augmented Generation (RAG) by > 60% for long documents, by no longer requiring separate retrieval and generation models. Models, code, etc. are freely available at https://github.com/ContextualAI/gritlm.
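The unification claim can be pictured as one model serving both halves of a RAG loop: embedding documents for retrieval, then generating the answer. Below is a minimal, self-contained sketch of the retrieval half only, where a toy bag-of-words `embed()` stands in for the model's embedding mode; the function names and scoring here are illustrative, not the actual `gritlm` API.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (a real system would call the
    model's embedding mode here, selected via an instruction)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by similarity to the query and return the top k;
    in full RAG, the generative mode would then answer from these."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "GritLM unifies embedding and generation in one model.",
    "The Eiffel Tower is in Paris.",
]
print(retrieve("Which model unifies embedding and generation?", docs))
```

Because retrieval and generation share one model in GRIT, document representations computed during retrieval can be reused at generation time, which is where the reported RAG speedup for long documents comes from.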

Niklas Muennighoff, Hongjin Su, Liang Wang, Nan Yang, Furu Wei, Tao Yu, Amanpreet Singh, Douwe Kiela • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-hop Question Answering | HotpotQA (test) | F1 | 73.3 | 198 |
| Multi-hop Question Answering | 2WikiMultiHopQA (test) | – | – | 143 |
| Multi-hop Question Answering | MuSiQue (test) | F1 | 44.8 | 111 |
| Question Answering | NQ (test) | – | – | 66 |
| Retrieval | Natural Questions (test) | Top-5 Recall | 76.6 | 62 |
| Long document retrieval | LongBench Retrieval v2 (full) | F1 | 0.3799 | 55 |
| Sentence Embedding Evaluation | MTEB (test) | Re-Rank Score | 60.49 | 48 |
| Single-document retrieval | ConditionalQA | F1 | 32.58 | 44 |
| Single-document retrieval | NaturalQuestions | F1 | 57.97 | 44 |
| Single-document retrieval | RepLiQA | F1 | 0.8312 | 44 |
Showing 10 of 51 rows
