
TroL: Traversal of Layers for Large Language and Vision Models

About

Large language and vision models (LLVMs) have been driven by the generalization power of large language models (LLMs) and the advent of visual instruction tuning. Along with direct scaling, these advances let LLVMs achieve strong vision-language (VL) performance across diverse tasks specified via natural language instructions. However, existing open-source LLVMs that perform comparably to closed-source LLVMs such as GPT-4V are often considered too large (e.g., 26B, 34B, and 110B parameters), with correspondingly many layers. These large models demand costly, high-end resources for both training and inference. To address this issue, we present a new efficient LLVM family with 1.8B, 3.8B, and 7B LLM model sizes, Traversal of Layers (TroL), which enables the reuse of layers in a token-wise manner. This layer-traversing technique simulates the effect of looking back and retracing the answering stream, increasing the number of forward-propagation layers without physically adding more layers. We demonstrate that TroL, despite its simple layer-traversing approach, efficiently outperforms open-source LLVMs with larger model sizes and rivals the performance of substantially larger closed-source LLVMs.
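The core idea of layer traversal can be sketched as follows: each layer is applied twice, and a per-token gate mixes the single-pass and double-pass outputs, doubling the effective depth without new layer parameters. This is an illustrative toy sketch only, not the authors' implementation; the `make_layer`, `trol_forward`, and gate definitions here are hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layer(dim):
    """A toy 'transformer layer': one linear map with a residual connection."""
    W = rng.normal(scale=0.02, size=(dim, dim))
    return lambda h: h + h @ W

def trol_forward(hidden, layers, gates):
    """Token-wise layer traversal (illustrative sketch, not the paper's code).

    For each layer, the hidden states pass through the same layer twice,
    and a per-token gate mixes the single-pass and double-pass outputs,
    so forward depth grows without physically adding layers.
    """
    for layer, gate in zip(layers, gates):
        once = layer(hidden)    # normal forward pass
        twice = layer(once)     # traverse: reuse the same layer again
        g = gate(hidden)        # per-token mixing ratio in [0, 1]
        hidden = g * twice + (1.0 - g) * once
    return hidden

dim, n_tokens, n_layers = 8, 4, 3
layers = [make_layer(dim) for _ in range(n_layers)]
# Hypothetical gate: a sigmoid over a per-token learned score (shape: tokens x 1).
gate_ws = [rng.normal(scale=0.02, size=(dim, 1)) for _ in range(n_layers)]
gates = [lambda h, w=w: 1.0 / (1.0 + np.exp(-(h @ w))) for w in gate_ws]

x = rng.normal(size=(n_tokens, dim))
y = trol_forward(x, layers, gates)
print(y.shape)  # (4, 8): one traversed hidden state per token
```

Because the gate is computed per token, different tokens can "retrace" the layer to different degrees, which is the token-wise reuse the abstract describes.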

Byung-Kwan Lee, Sangyun Chung, Chae Won Kim, Beomchan Park, Yong Man Ro • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Object Hallucination Evaluation | POPE | Accuracy: 88.6 | 1455 |
| Multimodal Evaluation | MME | Score: 2310 | 658 |
| Multimodal Understanding | MMBench | Accuracy: 83.5 | 637 |
| Multimodal Understanding | MM-Vet | MM-Vet Score: 54.7 | 531 |
| Science Question Answering | ScienceQA | -- | 502 |
| Multimodal Understanding | MMMU | Accuracy: 49.9 | 437 |
| Multimodal Reasoning | MM-Vet | MM-Vet Score: 54.7 | 431 |
| Mathematical Reasoning | MathVista | Score: 55.1 | 385 |
| Visual Question Answering | ChartQA | Accuracy: 73.8 | 371 |
| Chart Question Answering | ChartQA | Accuracy: 73.8 | 356 |

Showing 10 of 31 rows.

Other info

Code
