FiNER: Financial Numeric Entity Recognition for XBRL Tagging
About
Publicly traded companies are required to submit periodic reports annotated with word-level tags in the eXtensible Business Reporting Language (XBRL). Manually tagging the reports is tedious and costly. We, therefore, introduce XBRL tagging as a new entity extraction task for the financial domain and release FiNER-139, a dataset of 1.1M sentences with gold XBRL tags. Unlike typical entity extraction datasets, FiNER-139 uses a much larger label set of 139 entity types. Most annotated tokens are numeric, with the correct tag per token depending mostly on context, rather than the token itself. We show that subword fragmentation of numeric expressions harms BERT's performance, allowing word-level BILSTMs to perform better. To improve BERT's performance, we propose two simple and effective solutions that replace numeric expressions with pseudo-tokens reflecting original token shapes and numeric magnitudes. We also experiment with FIN-BERT, an existing BERT model for the financial domain, and release our own BERT (SEC-BERT), pre-trained on financial filings, which performs best. Through data and error analysis, we finally identify possible limitations to inspire future work on XBRL tagging.
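The two pseudo-token solutions mentioned above can be sketched as simple preprocessing functions. This is an illustrative sketch, not the paper's exact implementation: the bracketed token formats (`[XXX.X]`, `[3-digit-num]`) and the numeric-token regex are assumptions chosen for clarity.

```python
import re

# Matches plain numeric expressions such as "135.2" or "1,234.56"
# (an assumed, simplified definition of "numeric token").
NUMERIC = re.compile(r"\d+([.,]\d+)*")

def to_shape_pseudo_token(token: str) -> str:
    """Shape variant: replace each digit with 'X', keeping punctuation.

    e.g. '135.2' -> '[XXX.X]', so all numbers with the same shape
    map to the same vocabulary entry and avoid subword fragmentation.
    """
    if NUMERIC.fullmatch(token):
        return "[" + re.sub(r"\d", "X", token) + "]"
    return token  # non-numeric tokens pass through unchanged

def to_magnitude_pseudo_token(token: str) -> str:
    """Magnitude variant: bucket a number by its count of integer digits.

    e.g. '135.2' -> '[3-digit-num]', keeping only the order of magnitude.
    """
    if NUMERIC.fullmatch(token):
        integer_part = token.split(".")[0]
        digits = len(re.sub(r"\D", "", integer_part))
        return f"[{digits}-digit-num]"
    return token
```

Applied token by token, either function maps every numeric expression in a sentence onto a small closed set of pseudo-tokens, while words like "revenue" are left untouched.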
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Named Entity Recognition | NER | -- | 40 |
| Numerical Question Answering | FinQA (test) | Execution Accuracy: 66.02 | 33 |
| Sentiment Analysis | FOMC | -- | 26 |
| Financial Reasoning | FinQA | -- | 19 |
| XBRL tagging | FiNER-139 1.0 (dev) | μ-Precision: 84.8 | 10 |
| XBRL tagging | FiNER-139 1.0 (test) | Micro Precision: 81 | 10 |
| Financial Entity Recognition | FiNER | F1 Score: 82.35 | 9 |
| Question Answering | FinQA | Prog Acc: 53.18 | 9 |
| Classification | Headline | F1 Score: 90.52 | 9 |
| Sentiment Analysis | Financial PhraseBank (FPB) | Accuracy: 84.37 | 9 |