Training Efficiency on Large Language Model Pre-training
Chart: Model FLOPs Utilization over time (toggle between Model FLOPs Utilization and Hardware FLOPs Utilization). Highest reported: PaLM, 46.2% (Apr 5, 2022).
Evaluation Results
| Method | Links | Date | Model FLOPs Utilization (%) | Hardware FLOPs Utilization (%) |
|---|---|---|---|---|
| PaLM | # of Parameters (in bi... | 2022.04 | 46.2 | 57.8 |
| Gopher | # of Parameters (in bi... | 2022.04 | 32.5 | - |
| Megatron-Turing NLG | # of Parameters (in bi... | 2022.04 | 30.2 | - |
| GPT-3 | # of Parameters (in bi... | 2022.04 | 21.3 | - |
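Model FLOPs Utilization, the metric ranked above, is conventionally computed as achieved model FLOPs per second divided by the hardware's peak FLOPs per second. A minimal sketch, assuming the common 6N FLOPs-per-token approximation for dense transformers (attention terms omitted); the function name and the example numbers are illustrative, not the leaderboard's raw inputs:

```python
def mfu(tokens_per_sec: float, n_params: float, peak_flops_per_sec: float) -> float:
    """MFU = achieved model FLOPs/sec divided by peak hardware FLOPs/sec.

    Uses the standard approximation of 6 * n_params FLOPs per token
    (forward + backward pass) for a dense transformer.
    """
    model_flops_per_token = 6 * n_params
    achieved_flops_per_sec = tokens_per_sec * model_flops_per_token
    return achieved_flops_per_sec / peak_flops_per_sec

# Hypothetical example: a 540e9-parameter model processing 1.5e5 tokens/sec
# on hardware with an aggregate peak of 1e18 FLOP/s.
print(round(mfu(tokens_per_sec=1.5e5, n_params=540e9, peak_flops_per_sec=1e18), 3))
# → 0.486, i.e. 48.6% MFU
```

Hardware FLOPs Utilization (HFU) is defined the same way but counts all FLOPs actually executed on the hardware, including recomputation from activation checkpointing, which is why a model's HFU (e.g. PaLM's 57.8%) exceeds its MFU.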