
PhoBERT: Pre-trained language models for Vietnamese

About

We present PhoBERT with two versions, PhoBERT-base and PhoBERT-large, the first public large-scale monolingual language models pre-trained for Vietnamese. Experimental results show that PhoBERT consistently outperforms the recent best pre-trained multilingual model XLM-R (Conneau et al., 2020) and improves the state-of-the-art in multiple Vietnamese-specific NLP tasks including Part-of-speech tagging, Dependency parsing, Named-entity recognition and Natural language inference. We release PhoBERT to facilitate future research and downstream applications for Vietnamese NLP. Our PhoBERT models are available at https://github.com/VinAIResearch/PhoBERT
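As a minimal sketch, the released models can be loaded through the Hugging Face transformers library using the model IDs from the PhoBERT repository (`vinai/phobert-base`; the exact API may vary across transformers versions):

```python
# Sketch: extracting contextual embeddings with PhoBERT-base via
# Hugging Face transformers (model ID per the PhoBERT repository).
import torch
from transformers import AutoModel, AutoTokenizer

phobert = AutoModel.from_pretrained("vinai/phobert-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")

# PhoBERT expects word-segmented input: syllables of a multi-syllable
# word are joined with underscores (e.g. produced by a Vietnamese
# word segmenter such as VnCoreNLP).
sentence = "Chúng_tôi là những nghiên_cứu_viên ."

input_ids = torch.tensor([tokenizer.encode(sentence)])
with torch.no_grad():
    # last_hidden_state holds one contextual vector per subword token
    features = phobert(input_ids).last_hidden_state

print(features.shape)  # (1, sequence_length, hidden_size)
```

The resulting features can be fed into task-specific heads for the downstream tasks listed below (tagging, parsing, NER, NLI).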

Dat Quoc Nguyen and Anh Tuan Nguyen • 2020

Related benchmarks

Task                           Dataset                  Metric     Result  Rank
Toxic Speech Detection         ViCTSD                   Acc        90.78   9
Hate Speech Detection          ViHSD                    Acc        87.42   9
Machine Reading Comprehension  UIT-ViQuAD 2.0           EM         57.27   9
Natural Language Inference     ViNLI                    Accuracy   80.67   9
Hate Spans Detection           ViHOS                    Accuracy   84.92   9
Emotion Recognition            VSMEC                    F1 Score   65.44   8
Hate Speech Detection          ViHOS                    F1 Score   77.16   8
Part-of-Speech Tagging         NIIVTB POS               F1 Score   79.36   8
Named Entity Recognition       PhoNER_COVID19 (test)    Micro-F1   94.5    6
Sentiment Analysis             UIT-VIFSD (test)         F1 Score   77.52   6

(Showing 10 of 13 rows.)
