
State Space Models are Effective Sign Language Learners: Exploiting Phonological Compositionality for Vocabulary-Scale Recognition

About

Sign language recognition suffers from catastrophic scaling failure: models that achieve high accuracy on small vocabularies collapse at realistic vocabulary sizes. Existing architectures treat signs as atomic visual patterns, learning flat representations that cannot exploit the compositional structure of sign languages, which are systematically organized around discrete phonological parameters (handshape, location, movement, orientation) reused across the vocabulary. We introduce PHONSSM, which enforces phonological decomposition through anatomically grounded graph attention, explicit factorization into orthogonal phonological subspaces, and prototypical classification that enables few-shot transfer. Using skeleton data alone on the largest ASL dataset ever assembled (5,565 signs), PHONSSM achieves 72.1% on WLASL2000 (+18.4pp over the skeleton-based state of the art), surpassing most RGB methods without video input. Gains are most dramatic in the few-shot regime (+225% relative), and the model transfers zero-shot to ASL Citizen, exceeding supervised RGB baselines. The vocabulary scaling bottleneck is fundamentally a representation learning problem, solvable through compositional inductive biases that mirror linguistic structure.
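The abstract mentions prototypical classification as the mechanism behind few-shot transfer. A minimal sketch of that general idea, in plain numpy: each class is represented by the mean embedding of its support examples, and a query is assigned to the nearest prototype. All function names, dimensions, and data here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def class_prototypes(support_emb, support_lbl, n_classes):
    """Prototype per class = mean of that class's support embeddings.
    Illustrative helper, not from the paper."""
    dim = support_emb.shape[1]
    protos = np.zeros((n_classes, dim))
    for c in range(n_classes):
        protos[c] = support_emb[support_lbl == c].mean(axis=0)
    return protos

def nearest_prototype(query_emb, protos):
    """Label each query by its closest prototype (squared Euclidean distance)."""
    dists = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

# Toy example: two well-separated classes, three support examples each.
rng = np.random.default_rng(0)
support = np.concatenate([rng.normal(0.0, 0.1, (3, 4)),
                          rng.normal(1.0, 0.1, (3, 4))])
labels = np.array([0, 0, 0, 1, 1, 1])
protos = class_prototypes(support, labels, n_classes=2)

queries = np.array([[0.0, 0.0, 0.0, 0.0],
                    [1.0, 1.0, 1.0, 1.0]])
print(nearest_prototype(queries, protos))  # → [0 1]
```

Because new classes only require computing a mean over a handful of support embeddings, this style of classifier extends to unseen vocabulary without retraining, which is what makes it attractive for the few-shot regime the abstract highlights.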

Bryan Cheng, Austin Jin, Jasper Zhang • 2026

Related benchmarks

Task                        Dataset       Top-1 Accuracy   Rank
Sign Language Recognition   WLASL 100     88.37            9
Sign Language Recognition   WLASL 2000    72.08            3
Sign Language Recognition   WLASL 1000    62.9             3
Sign Language Recognition   Merged-5565   53.34            2
