
Supervised and Semi-Supervised Text Categorization using LSTM for Region Embeddings

About

One-hot CNN (convolutional neural network) has been shown to be effective for text categorization (Johnson & Zhang, 2015). We view it as a special case of a general framework that jointly trains a linear model with a non-linear feature generator consisting of "text region embedding + pooling". Under this framework, we explore a more sophisticated region embedding method using Long Short-Term Memory (LSTM). LSTM can embed text regions of variable (and possibly large) sizes, whereas the region size must be fixed in a CNN. We seek effective and efficient use of LSTM for this purpose in both the supervised and semi-supervised settings. The best results were obtained by combining region embeddings in the form of LSTM and convolution layers trained on unlabeled data. The results indicate that on this task, embeddings of text regions, which can convey complex concepts, are more useful than embeddings of single words in isolation. We report performance exceeding the previous best results on four benchmark datasets.
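To make the framework concrete, the sketch below illustrates the "region embedding + pooling + linear model" pipeline in its one-hot CNN special case: each fixed-size text region is embedded by a linear map plus ReLU, pooling collapses the variable number of regions into one fixed-size feature vector, and a linear model produces class scores. All sizes, the random weights, and the use of NumPy are illustrative assumptions, not the paper's actual configuration or code.

```python
import numpy as np

# Toy sketch of 'region embedding + pooling + linear model' (assumed sizes).
rng = np.random.default_rng(0)

vocab_size = 50    # toy vocabulary size (assumption)
region_size = 3    # fixed region size, as in the CNN special case
embed_dim = 8      # dimension of each region embedding (assumption)
num_classes = 2

# A document as a sequence of word ids.
doc = rng.integers(0, vocab_size, size=12)

def one_hot_region(words):
    """Concatenate one-hot vectors of a region's words (the 'one-hot CNN' view)."""
    v = np.zeros(region_size * vocab_size)
    for i, w in enumerate(words):
        v[i * vocab_size + w] = 1.0
    return v

# Region embedding: a shared linear map + ReLU applied to every region (stride 1).
W_embed = rng.normal(scale=0.1, size=(embed_dim, region_size * vocab_size))
regions = [doc[i:i + region_size] for i in range(len(doc) - region_size + 1)]
embedded = np.stack([np.maximum(W_embed @ one_hot_region(r), 0.0) for r in regions])

# Max pooling collapses the variable number of regions to one fixed-size feature.
pooled = embedded.max(axis=0)

# The jointly trained linear model consumes the pooled feature.
W_lin = rng.normal(scale=0.1, size=(num_classes, embed_dim))
scores = W_lin @ pooled
```

An LSTM region embedding would replace the one-hot concatenation and linear map with an LSTM run over each region, which is what lets the region size vary rather than stay fixed.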

Rie Johnson, Tong Zhang • 2016

Related benchmarks

Task                                    Dataset           Metric           Result   Rank
Sentiment Analysis                      IMDB (test)       Accuracy         94.1     248
Text Classification                     AG News (test)    --               --       210
Sentiment Classification                IMDB (test)       Error Rate       5.9      144
Topic Classification                    DBPedia (test)    --               --       64
Text Classification                     DBPedia (test)    Test Error Rate  0.0084   40
Text Categorization                     RCV1 (test)       Error Rate       0.0924   24
Text Classification                     Yelp Full (test)  Test Error Rate  32.39    20
Text Categorization                     Elec (test)       Error Rate       5.55     16
Binary Sentiment Classification         ACL-IMDB (test)   Error Rate       5.94     12
Fine-grained Sentiment Classification   IMDB (test)       Error Rate (%)   37.56    9

(Showing 10 of 14 rows)

Other info

Code
