
SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing

About

This paper describes SentencePiece, a language-independent subword tokenizer and detokenizer designed for neural text processing, including Neural Machine Translation. It provides open-source C++ and Python implementations for subword units. While existing subword segmentation tools assume that the input is pre-tokenized into word sequences, SentencePiece can train subword models directly from raw sentences, which makes a purely end-to-end and language-independent system possible. A validation experiment on English-Japanese machine translation shows that training subword models directly from raw sentences achieves accuracy comparable to the pre-tokenized baseline. The paper also compares the performance of subword training and segmentation under various configurations. SentencePiece is available under the Apache 2 license at https://github.com/google/sentencepiece.

Taku Kudo, John Richardson • 2018

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Natural Language Understanding | GLUE (dev) | SST-2 (Acc) | 91.1 | 504 |
| Natural Language Understanding | GLUE | SST-2 | 90.8 | 452 |
| Text Classification | AG News (test) | Accuracy | 92.4 | 210 |
| Text Classification | Yelp P. (test) | Accuracy | 93.8 | 34 |
| Multiclass Text Classification | Multilingual Amazon Reviews Corpus (test) | Accuracy (Avg) | 90.8 | 24 |
| Text Classification | Average All Datasets | Accuracy | 86.5 | 18 |
| Text Classification | MASSIVE (test) | Accuracy | 69.6 | 18 |
| Sequence Reconstruction | Genomic Reads (ART simulator, 150bp paired-end, GRCh38 reference) | Reconstruction Rate | 30.1 | 9 |
| Taxonomic Classification | CAMI II metagenome (2017) | Taxa F1 Score | 87.2 | 9 |
| Variant Calling | GIAB HG002 truth set (test) | F1 Score (Variant) | 83.7 | 9 |

(Showing 10 of 16 rows.)
