Maximizing Asynchronicity in Event-based Neural Networks

About

Event cameras deliver visual data with high temporal resolution, low latency, and minimal redundancy, yet their asynchronous, sparse sequential nature challenges standard tensor-based machine learning (ML). While the recent asynchronous-to-synchronous (A2S) paradigm aims to bridge this gap by asynchronously encoding events into learned features for ML pipelines, existing A2S approaches often sacrifice expressivity and generalizability compared to dense, synchronous methods. This paper introduces EVA (EVent Asynchronous feature learning), a novel A2S framework to generate highly expressive and generalizable event-by-event features. Inspired by the analogy between events and language, EVA uniquely adapts advances from language modeling in linear attention and self-supervised learning for its construction. In demonstration, EVA outperforms prior A2S methods on recognition tasks (DVS128-Gesture and N-Cars), and represents the first A2S framework to successfully master demanding detection tasks, achieving a 0.477 mAP on the Gen1 dataset. These results underscore EVA's potential for advancing real-time event-based vision applications.
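The abstract credits linear attention, borrowed from language modeling, with enabling event-by-event feature generation. EVA's exact architecture is not described here; as a hedged sketch under that assumption, the code below shows why causal linear attention suits the asynchronous setting: the attention computation factors into a recurrent state that can be updated one event at a time, at constant cost per event. All names (`phi`, `LinearAttentionState`) are illustrative, not from the paper.

```python
import numpy as np

def phi(x):
    # A positive feature map (elu(x) + 1), a common choice in linear attention.
    return np.where(x > 0, x + 1.0, np.exp(x))

class LinearAttentionState:
    """Per-event recurrent state for causal linear attention.

    Linear attention replaces softmax(Q K^T) V with phi(Q) (phi(K)^T V),
    which factors into a running matrix S and normalizer z that can be
    updated event by event -- matching the asynchronous A2S setting.
    """
    def __init__(self, d_k, d_v):
        self.S = np.zeros((d_k, d_v))  # running sum of phi(k) v^T
        self.z = np.zeros(d_k)         # running sum of phi(k)

    def step(self, q, k, v):
        # O(d_k * d_v) per event, independent of how many events came before.
        fk = phi(k)
        self.S += np.outer(fk, v)
        self.z += fk
        fq = phi(q)
        return (fq @ self.S) / (fq @ self.z + 1e-8)
```

Each incoming event would be embedded into query, key, and value vectors and fed through `step`; the output matches full causal linear attention over the event stream so far, without ever materializing the whole sequence.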

Haiqing Hao, Nikola Zubić, Weihua He, Zhipeng Sui, Davide Scaramuzza, Wenhui Wang • 2025

Related benchmarks

Task                  Dataset              Result                         Rank
Object Detection      Gen1                 mAP 47.7                       21
Object Recognition    N-Cars               Accuracy 96.3                  15
Object Detection      Gen1 Detection       mAP 47.7                       14
Action Recognition    DVS128 Gesture       Specific Accuracy (SA) 92.9    13
Object Recognition    N-Caltech101 (EST)   Accuracy 86.3                  10
