
A Human-Inspired Decoupled Architecture for Efficient Audio Representation Learning

About

While self-supervised learning (SSL) has revolutionized audio representation, the excessive parameterization and quadratic computational cost of standard Transformers limit their deployment on resource-constrained devices. To address this bottleneck, we propose HEAR (Human-inspired Efficient Audio Representation), a novel decoupled architecture. Inspired by the human cognitive ability to isolate local acoustic features from global context, HEAR splits the processing pipeline into two dedicated modules: an Acoustic Model for local feature extraction and a Task Model for global semantic integration. Coupled with an Acoustic Tokenizer trained via knowledge distillation, our approach enables robust Masked Audio Modeling (MAM). Extensive experiments demonstrate that HEAR requires only 15M parameters and 9.47 GFLOPs for inference, operating at a fraction of the computational cost of conventional foundation models (which typically require 85M-94M parameters). Despite this high efficiency, HEAR achieves highly competitive performance across diverse audio classification benchmarks. The code and pre-trained models are available at https://github.com/HarunoriKawano/HEAR.
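The decoupled design described above can be sketched in a few lines. This is a minimal illustrative mock-up, not the paper's actual implementation: the module internals, layer sizes, and function names (`acoustic_model`, `task_model`) are all assumptions made for exposition. The point it shows is the split of responsibilities: the Acoustic Model touches each frame locally (linear cost in sequence length), while the Task Model alone integrates global context.

```python
import numpy as np

# Hypothetical sketch of HEAR's decoupled pipeline. Shapes, weights, and
# module internals are illustrative assumptions, not the paper's layers.
rng = np.random.default_rng(0)

def acoustic_model(frames):
    """Local feature extraction: each frame is transformed independently,
    so cost grows linearly with sequence length (no cross-frame attention)."""
    W = rng.standard_normal((frames.shape[-1], 64)) * 0.1
    return np.tanh(frames @ W)                      # (T, 64) local features

def task_model(local_feats):
    """Global semantic integration: a single attention-style weighted
    pooling over the sequence of local features."""
    scores = local_feats @ rng.standard_normal(local_feats.shape[-1]) * 0.1
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ local_feats                    # (64,) utterance embedding

T, n_mels = 100, 80                 # e.g. 100 frames of 80-dim log-mel input
frames = rng.standard_normal((T, n_mels))
embedding = task_model(acoustic_model(frames))
print(embedding.shape)              # (64,)
```

In the real system the utterance-level embedding would feed a classification head, and the Acoustic Tokenizer (not sketched here) would supply discrete targets for Masked Audio Modeling during pre-training.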

Harunori Kawano, Takeshi Sasaki • 2026

Related benchmarks

Task                        Dataset              Metric     Result   Rank
Speech Command Recognition  Google SC v1 (test)  Accuracy   94.3     11
Speaker Identification      VoxCeleb             Accuracy   87.9     5
Speech Command Recognition  GSC v2               Accuracy   95.1     4
