
Evolution of Meta's LLaMA Models and Parameter-Efficient Fine-Tuning of Large Language Models: A Survey

About

This review surveys the rapid evolution of Meta AI's LLaMA (Large Language Model Meta AI) series, from LLaMA 1 through LLaMA 4, and the specialized parameter-efficient fine-tuning (PEFT) methods developed for these models. We first describe the LLaMA family of foundation models (7B-65B to 288B parameters), their architectures (including native multimodal and Mixture-of-Experts variants), and key performance characteristics. We then describe and discuss the concept of PEFT, which adapts large pre-trained models by updating only a small subset of parameters, and review five PEFT methods that have been applied to LLaMA: LoRA (Low-Rank Adaptation), LLaMA-Adapter V1 and V2, LLaMA-Excitor, and QLoRA (Quantized LoRA). For each method, we discuss its mechanism, parameter savings, and example applications to LLaMA (e.g., instruction tuning and multimodal tasks). We provide a structured discussion and analysis of model and adapter architectures, parameter counts, and benchmark results (including examples where fine-tuned LLaMA models outperform larger baselines). Finally, we examine real-world use cases where LLaMA-based models and PEFT have been successfully applied (e.g., in the legal and medical domains), and we discuss ongoing challenges and future research directions (such as scaling to even larger contexts and improving robustness). This survey provides a one-stop resource for ML researchers and practitioners interested in LLaMA models and efficient fine-tuning strategies.

Abdulhady Abas Abdullah, Arkaitz Zubiaga, Seyedali Mirjalili, Amir H. Gandomi, Fatemeh Daneshfar, Mohammadsadra Amini, Alan Salam Mohammed, Hadi Veisi• 2025
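The parameter savings that LoRA-style PEFT achieves can be illustrated with a short calculation. This is a sketch of the general low-rank idea the abstract describes, not code from the survey; the matrix dimensions below are hypothetical, chosen to resemble one attention projection in a 7B-class model.

```python
def lora_param_counts(d_out: int, d_in: int, r: int) -> tuple[int, int]:
    """Compare trainable parameters for one weight matrix under
    full fine-tuning vs. LoRA.

    LoRA freezes the pre-trained W (d_out x d_in) and trains only a
    low-rank update B @ A, with B (d_out x r) and A (r x d_in), so the
    effective weight at inference is W + B @ A.
    """
    full_ft = d_out * d_in          # every entry of W is trainable
    lora = d_out * r + r * d_in     # only B and A are trainable
    return full_ft, lora

# Hypothetical dimensions (not taken from the paper): a 4096x4096
# projection adapted with rank r = 8.
full_ft, lora = lora_param_counts(4096, 4096, r=8)
print(full_ft, lora, lora / full_ft)  # → 16777216 65536 0.00390625
```

With these illustrative numbers, LoRA trains under 0.4% of the matrix's parameters; QLoRA pushes memory further down by additionally storing the frozen W in 4-bit quantized form.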

Related benchmarks

Task                     Dataset                                 Pass Rate   Rank
Code                     GRAFITE Sample Code                     100         4
Factual Analysis         GRAFITE Sample Factual                  29.4        4
Instruction Following    GRAFITE Sample Instruction Following    58.3        4
Mathematics              GRAFITE Math Sample                     60          4
Multi-domain evaluation  GRAFITE Sample Dataset (Total)          63.2        4
Table Processing         GRAFITE Sample Dataset Table            54.5        4
Multilingual             GRAFITE Multilingual Sample Dataset     40          4
Reasoning                GRAFITE Sample Reasoning                33.3        4
Summarization            GRAFITE Sample                          83.3        4
Creative Writing         GRAFITE Sample Dataset Creative         83.3        4

(10 of 11 rows shown)
