DeTeCtive: Detecting AI-generated Text via Multi-Level Contrastive Learning
About
Current techniques for detecting AI-generated text are largely confined to manual feature crafting and supervised binary classification paradigms. These methodologies typically lead to performance bottlenecks and unsatisfactory generalizability, and are consequently often inapplicable to out-of-distribution (OOD) data and newly emerged large language models (LLMs). In this paper, we revisit the task of AI-generated text detection. We argue that the key to accomplishing this task lies in distinguishing the writing styles of different authors, rather than simply classifying text as human-written or AI-generated. To this end, we propose DeTeCtive, a multi-task auxiliary, multi-level contrastive learning framework. DeTeCtive is designed to facilitate the learning of distinct writing styles, combined with a dense information retrieval pipeline for AI-generated text detection. Our method is compatible with a range of text encoders. Extensive experiments demonstrate that our method enhances the ability of various text encoders to detect AI-generated text across multiple benchmarks and achieves state-of-the-art results. Notably, in OOD zero-shot evaluation, our method outperforms existing approaches by a large margin. Moreover, our method exhibits a Training-Free Incremental Adaptation (TFIA) capability towards OOD data, further enhancing its efficacy in OOD detection scenarios. We open-source our code and models in the hope that our work will spark new thoughts in the field of AI-generated text detection, ensuring safe application of LLMs and enhancing compliance. Our code is available at https://github.com/heyongxin233/DeTeCtive.
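The retrieval-based detection idea above can be sketched as a k-nearest-neighbor vote over an embedding database. This is a minimal toy illustration, not the paper's implementation: the hard-coded 2-D vectors stand in for embeddings produced by a contrastively trained text encoder, and the names `knn_classify` and `cosine` are ours. TFIA then amounts to appending labeled embeddings for a new, unseen source to the database without retraining the encoder.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def knn_classify(db, query, k=3):
    """db: list of (embedding, label) pairs.
    Label the query by majority vote over its k nearest neighbors."""
    ranked = sorted(db, key=lambda item: cosine(item[0], query), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D "embeddings" standing in for encoder outputs of labeled texts.
db = [
    ([1.0, 0.1], "human"), ([0.9, 0.2], "human"), ([1.0, -0.1], "human"),
    ([-1.0, 0.1], "ai"),   ([-0.9, -0.2], "ai"),  ([-1.0, 0.2], "ai"),
]

print(knn_classify(db, [0.95, 0.0], k=3))   # -> human

# TFIA sketch: to cover a new, unseen LLM, append its labeled embeddings
# to the database -- no re-training of the encoder is needed.
db += [([0.0, 1.0], "ai-new"), ([0.1, 0.9], "ai-new")]
print(knn_classify(db, [0.05, 0.95], k=2))  # -> ai-new
```

Because classification happens purely at retrieval time, adapting to a new generator is a database update rather than a training run, which is what makes the adaptation training-free.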
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Author Attribution | GHOSTWRITEBENCH OOD-Author | Macro F1 | 49 | 28 |
| Author Attribution | GHOSTWRITEBENCH OOD-Domain | Macro F1 | 85 | 27 |
| Author Attribution | GHOSTWRITEBENCH ID | Macro F1 | 95 | 27 |
| Fine-Grained LLM-Generated Text Detection | HART 4-class setting | AUROC | 95.74 | 13 |
| LLM-Generated Text Detection | HART (default random split) | Avg TPR @ 5% FPR | 92.82 | 12 |
| Authorship Attribution Detection | TuringBench | Precision | 84.04 | 11 |
| AI-Generated Text Detection | Deepfake In-distribution, Cross-domains & Cross-models | Avg Recall | 96.15 | 10 |
| AI-Generated Text Detection | Deepfake Out-of-distribution Unseen Models | Avg Recall | 93.03 | 6 |
| AI-Generated Text Detection | Deepfake Out-of-distribution Unseen Domains | Avg Recall | 89.63 | 6 |
| AI-Generated Text Detection | M4 monolingual (test) | Avg Recall | 98.44 | 6 |