Fast Vocabulary Transfer for Language Model Compression
About
Real-world business applications require a trade-off between language model performance and size. We propose a new method for model compression that relies on vocabulary transfer. We evaluate the method on various vertical domains and downstream tasks. Our results indicate that vocabulary transfer can be effectively used in combination with other compression techniques, yielding a significant reduction in model size and inference time while marginally compromising on performance.
Leonidas Gee, Andrea Zugarini, Leonardo Rigutini, Paolo Torroni • 2024
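The core idea of vocabulary transfer can be sketched as follows: each token in a new, domain-specific vocabulary is assigned the average of the embeddings of the old-tokenizer subtokens that compose it, so pretrained knowledge carries over to the smaller vocabulary. This is a minimal toy sketch of that idea; the vocabularies, the greedy `old_tokenize` stand-in tokenizer, and all names here are hypothetical illustrations, not the paper's actual models or code.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# Old (general-domain) vocabulary with pretrained embeddings (toy data).
old_vocab = {"bio": 0, "medical": 1, "med": 2, "ical": 3}
old_emb = rng.normal(size=(len(old_vocab), dim))

def old_tokenize(token):
    """Greedy longest-match split using the old vocabulary (stand-in tokenizer)."""
    pieces, i = [], 0
    while i < len(token):
        for j in range(len(token), i, -1):
            if token[i:j] in old_vocab:
                pieces.append(token[i:j])
                i = j
                break
        else:
            raise ValueError(f"cannot tokenize {token!r}")
    return pieces

def transfer_vocab(new_vocab):
    """Initialize each new token's embedding as the mean of its old subtoken embeddings."""
    new_emb = np.zeros((len(new_vocab), dim))
    for token, idx in new_vocab.items():
        pieces = old_tokenize(token)
        new_emb[idx] = old_emb[[old_vocab[p] for p in pieces]].mean(axis=0)
    return new_emb

# New, compressed domain vocabulary (toy example).
new_vocab = {"biomedical": 0, "medical": 1}
new_emb = transfer_vocab(new_vocab)

# A token shared by both vocabularies keeps its original embedding.
assert np.allclose(new_emb[1], old_emb[old_vocab["medical"]])
```

After this initialization, the compressed model would typically be fine-tuned briefly on domain data, which is where the reported trade-off between size, speed, and performance is measured.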
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Biomedical domain adaptation | Open Medical-LLM leaderboard | Macro Average: 66.9 | 84 |
| Biomedical domain adaptation | Open Medical-LLM leaderboard macro-average | Macro Average Score: 66.9 | 75 |
| Definition Generation | Biomedical domain tokens | Similarity Score: 25.6 | 75 |
| Definition Generation | Multi-word tokens (famous people, places, entities, sayings, and concepts) | Correctness: 11.3 | 66 |
| Multiple-choice tasks | FrenchBench | Accuracy: 69.6 | 61 |