Refining Corpora from a Model Calibration Perspective for Chinese Spelling Correction
About
Chinese Spelling Correction (CSC) commonly lacks large-scale, high-quality corpora, because labeling spelling errors in real-life human writing or typing scenarios is labor-intensive. Two data augmentation methods are widely adopted: (1) *Random Replacement* guided by confusion sets, and (2) *OCR/ASR-based Generation* that simulates character misuse. However, both methods inevitably introduce noisy data (e.g., false spelling errors), potentially leading to over-correction. By carefully analyzing the two types of corpora, we find that although the latter achieves more robust generalization performance, the former yields better-calibrated CSC models. We then provide a theoretical analysis of this empirical observation, based on which a corpus refining strategy is proposed. Specifically, OCR/ASR-based data samples are fed into a well-calibrated CSC model trained on a random replacement-based corpus and then filtered based on prediction confidence. By training a simple BERT-based model on the refined OCR/ASR-based corpus, we achieve state-of-the-art performance on three widely-used benchmarks while significantly alleviating over-correction (e.g., lowering false positive predictions).
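The refining step described above can be sketched as a simple confidence filter. The scoring rule used here (minimum per-token confidence from the calibrated model) and the threshold value are illustrative assumptions, not the paper's exact criterion:

```python
def filter_by_confidence(samples, threshold=0.9):
    """Keep OCR/ASR-generated samples that a well-calibrated CSC model
    (trained on random replacement-based data) is confident about.

    `samples`: list of (sentence, token_confidences) pairs, where
    `token_confidences` are the calibrated model's per-token prediction
    probabilities on that sentence. Names and threshold are illustrative.
    """
    refined = []
    for sentence, confidences in samples:
        # Trust a sample only if the calibrated model is confident
        # about every token-level prediction; otherwise treat it as
        # noisy augmentation data and drop it.
        if min(confidences) >= threshold:
            refined.append(sentence)
    return refined
```

The refined corpus returned by this filter would then serve as the training data for the downstream BERT-based CSC model.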
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Chinese Spelling Check | SIGHAN14 (test) | -- | 28 |
| Chinese Spelling Check | SIGHAN13 Sentence level (test) | Precision: 85.7 | 12 |
| Chinese Spelling Check | SIGHAN15 Sentence level (test) | Precision: 74.1 | 12 |