Evolution of Meta's LLaMA Models and Parameter-Efficient Fine-Tuning of Large Language Models: A Survey
About
This review surveys the rapid evolution of Meta AI's LLaMA (Large Language Model Meta AI) series, from LLaMA 1 through LLaMA 4, and the specialized parameter-efficient fine-tuning (PEFT) methods developed for these models. We first describe the LLaMA family of foundation models (from 7B–65B up to 288B parameters), their architectures (including native multimodal and Mixture-of-Experts variants), and key performance characteristics. We then introduce the concept of PEFT, which adapts large pre-trained models by updating only a small subset of parameters, and review five PEFT methods that have been applied to LLaMA: LoRA (Low-Rank Adaptation), LLaMA-Adapter V1 and V2, LLaMA-Excitor, and QLoRA (Quantized LoRA). For each method we discuss its mechanism, parameter savings, and example applications to LLaMA (e.g., instruction tuning and multimodal tasks). We provide a structured analysis of model and adapter architectures, parameter counts, and benchmark results, including examples where fine-tuned LLaMA models outperform larger baselines. Finally, we examine real-world use cases where LLaMA-based models and PEFT have been successfully applied (e.g., in the legal and medical domains), and we discuss ongoing challenges and future research directions, such as scaling to even larger contexts and improving robustness. This survey provides a one-stop resource for ML researchers and practitioners interested in LLaMA models and efficient fine-tuning strategies.
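To make the PEFT idea concrete, the following is a minimal, illustrative sketch (not Meta's or any library's implementation) of the LoRA mechanism described above: a frozen pre-trained weight matrix `W` is adapted by adding a scaled low-rank update `B @ A`, and only the small factors `A` and `B` are trained. All dimensions and names here are assumptions chosen for illustration.

```python
import numpy as np

# LoRA sketch: adapt a frozen weight W with a low-rank delta (alpha / r) * B @ A.
# Only A (r x d_in) and B (d_out x r) would be trained; W stays frozen.
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 4096, 4096, 8, 16  # hypothetical layer sizes and LoRA rank

W = rng.standard_normal((d_out, d_in))      # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # zero-init so the delta starts at zero

def lora_forward(x):
    """Frozen path plus scaled low-rank update."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)

full_params = W.size
lora_params = A.size + B.size
print(f"trainable fraction: {lora_params / full_params:.4%}")  # ~0.39% of the full matrix
```

With rank r = 8 on a 4096×4096 layer, the trainable factors hold 2·r·4096 parameters, roughly 0.39% of the frozen weight's count, which is the source of LoRA's parameter savings; zero-initializing `B` ensures the adapted model starts out identical to the pre-trained one.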
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Code | GRAFITE Sample Code | Pass Rate | 100 | 4 |
| Factual Analysis | GRAFITE Sample Factual | Pass Rate | 29.4 | 4 |
| Instruction Following | GRAFITE Sample Instruction Following | Pass Rate | 58.3 | 4 |
| Mathematics | GRAFITE Math Sample | Pass Rate | 60 | 4 |
| Multi-domain evaluation | GRAFITE Sample Dataset (Total) | Pass Rate | 63.2 | 4 |
| Table Processing | GRAFITE Sample Dataset Table | Pass Rate | 54.5 | 4 |
| Multilingual | GRAFITE Multilingual Sample Dataset | Pass Rate | 40 | 4 |
| Reasoning | GRAFITE Sample Reasoning | Pass Rate | 33.3 | 4 |
| Summarization | GRAFITE Sample | Pass Rate | 83.3 | 4 |
| Creative Writing | GRAFITE Sample Dataset Creative | Pass Rate | 83.3 | 4 |