# TopicGPT: A Prompt-based Topic Modeling Framework

## About
Topic modeling is a well-established technique for exploring text corpora. Conventional topic models (e.g., LDA) represent topics as bags of words that often require "reading the tea leaves" to interpret; additionally, they offer users minimal control over the formatting and specificity of resulting topics. To tackle these issues, we introduce TopicGPT, a prompt-based framework that uses large language models (LLMs) to uncover latent topics in a text collection. TopicGPT produces topics that align better with human categorizations compared to competing methods: it achieves a harmonic mean purity of 0.74 against human-annotated Wikipedia topics compared to 0.64 for the strongest baseline. Its topics are also interpretable, dispensing with ambiguous bags of words in favor of topics with natural language labels and associated free-form descriptions. Moreover, the framework is highly adaptable, allowing users to specify constraints and modify topics without the need for model retraining. By streamlining access to high-quality and interpretable topics, TopicGPT represents a compelling, human-centered approach to topic modeling.
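The headline comparison above uses harmonic mean purity: the harmonic mean of purity (each predicted topic's majority overlap with gold labels) and inverse purity (the same computed in reverse). As a rough illustration, here is a minimal sketch of that metric; the function names are ours, not from the TopicGPT codebase.

```python
from collections import Counter

def purity(pred, gold):
    """Fraction of documents covered by the majority gold label of their
    predicted cluster. `pred` and `gold` are parallel label sequences."""
    clusters = {}
    for p, g in zip(pred, gold):
        clusters.setdefault(p, []).append(g)
    majority = sum(Counter(gs).most_common(1)[0][1] for gs in clusters.values())
    return majority / len(pred)

def harmonic_purity(pred, gold):
    """Harmonic mean of purity and inverse purity (purity with roles swapped)."""
    p = purity(pred, gold)
    ip = purity(gold, pred)
    return 2 * p * ip / (p + ip)
```

For example, a perfect clustering scores 1.0, while a clustering that merges two gold classes is penalized by the inverse-purity term.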
## Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Passage-pair linking | Literary analysis corpus (Goodreads, Sparknotes, Litcharts) | Precision | 14.8 | 41 |
| Topic Relatedness Assessment | Literary passage sets 50 (Evaluation) | Very Related | 48 | 33 |
| Topic Modeling | Wiki | Purity | 0.422 | 8 |
| Topic Modeling | Bills | P1 Score | 0.399 | 8 |
| Topic Modeling | Banking77 | Purity | 0.184 | 7 |
| Topic Modeling | 50 literary passages Human Evaluation (test) | Topic 1 Score | 2.64 | 6 |
| Hierarchical classification | Wikipedia (held-out set) | NMI | 0.70 | 3 |
| Hierarchical classification | US Bills (held-out set) | NMI | 0.51 | 3 |