ParaMETA: Towards Learning Disentangled Paralinguistic Speaking Styles Representations from Speech
About
Learning representative embeddings for different types of speaking styles, such as emotion, age, and gender, is critical for both recognition tasks (e.g., cognitive computing and human-computer interaction) and generative tasks (e.g., style-controllable speech generation). In this work, we introduce ParaMETA, a unified and flexible framework for learning and controlling speaking styles directly from speech. Unlike existing methods that rely on single-task models or cross-modal alignment, ParaMETA learns disentangled, task-specific embeddings by projecting speech into a dedicated subspace for each type of style. This design reduces inter-task interference, mitigates negative transfer, and allows a single model to handle multiple paralinguistic tasks such as emotion, gender, age, and language classification. Beyond recognition, ParaMETA enables fine-grained style control in Text-To-Speech (TTS) generative models. It supports both speech- and text-based prompting and allows users to modify one speaking style while preserving the others. Extensive experiments demonstrate that ParaMETA outperforms strong baselines in classification accuracy and generates more natural and expressive speech, while remaining lightweight and efficient enough for real-world applications.
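The per-task subspace idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the embedding dimensions, the task list, the class counts, and the plain linear projections are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): shared speech-embedding
# dimension and per-task subspace dimension.
D_SHARED, D_TASK = 256, 64
# Illustrative class counts per paralinguistic task.
TASKS = {"emotion": 5, "gender": 2, "age": 3, "language": 2}

# One projection per task: each maps the shared speech embedding into a
# dedicated subspace, so updates for one task need not perturb the
# representation used by another (reducing inter-task interference).
projections = {t: rng.standard_normal((D_SHARED, D_TASK)) / np.sqrt(D_SHARED)
               for t in TASKS}
classifiers = {t: rng.standard_normal((D_TASK, n)) / np.sqrt(D_TASK)
               for t, n in TASKS.items()}

def disentangled_logits(speech_embedding: np.ndarray) -> dict:
    """Project a shared embedding into each task subspace, then classify."""
    out = {}
    for task in TASKS:
        task_embedding = speech_embedding @ projections[task]  # (D_TASK,)
        out[task] = task_embedding @ classifiers[task]         # (num_classes,)
    return out

logits = disentangled_logits(rng.standard_normal(D_SHARED))
print({task: v.shape for task, v in logits.items()})
```

A single shared encoder output thus yields one logit vector per task; the task-specific embeddings (before the classifier) are what a downstream TTS model could condition on for per-style control.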
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Age Classification | Combined speech dataset (Baker, LJSpeech, ESD, CREMA-D, Genshin Impact) 1.0 (subject-independent) | Balanced Acc 0.382 | 19 |
| Gender Classification | Combined speech dataset (Baker, LJSpeech, ESD, CREMA-D, Genshin Impact) 1.0 (subject-independent) | Balanced Acc 0.784 | 19 |
| Language Classification | Combined speech dataset (Baker, LJSpeech, ESD, CREMA-D, Genshin Impact) 1.0 (subject-independent) | Balanced Acc 0.928 | 19 |
| Emotion Classification | Combined speech dataset (Baker, LJSpeech, ESD, CREMA-D, Genshin Impact) 1.0 (subject-independent) | Balanced Acc 0.516 | 19 |
| Style-Controllable TTS Generation | Style-Controllable TTS (evaluation set) | N-MOS 3.41 | 4 |