
QVAD: A Question-Centric Agentic Framework for Efficient and Training-Free Video Anomaly Detection

About

Video Anomaly Detection (VAD) is a fundamental challenge in computer vision, particularly due to the open-set nature of anomalies. While recent training-free approaches utilizing Vision-Language Models (VLMs) have shown promise, they typically rely on massive, resource-intensive foundation models to compensate for the ambiguity of static prompts. We argue that the bottleneck in VAD is not necessarily model capacity, but rather the static nature of inquiry. We propose QVAD, a question-centric agentic framework that treats VLM-LLM interaction as a dynamic dialogue. By iteratively refining queries based on visual context, our LLM agent guides smaller VLMs to produce high-fidelity captions and precise semantic reasoning without parameter updates. This "prompt-updating" mechanism effectively unlocks the latent capabilities of lightweight models, enabling state-of-the-art performance on UCF-Crime, XD-Violence, and UBnormal using a fraction of the parameters required by competing methods. We further demonstrate exceptional generalizability on the single-scene ComplexVAD dataset. Crucially, QVAD achieves high inference speeds with minimal memory footprints, making advanced VAD capabilities deployable on resource-constrained edge devices.
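The abstract describes a loop in which an LLM agent iteratively refines the question posed to a VLM, then reasons over the accumulated dialogue. A minimal sketch of that control flow, assuming stub model calls (the function names, round limit, and keyword-based scoring are illustrative assumptions, not the authors' implementation):

```python
# Hedged sketch of a question-centric VAD loop in the spirit of QVAD.
# All model calls are stubs; a real system would invoke a VLM and an LLM.

def vlm_caption(frame, question):
    """Stub VLM: answers a question about a frame (here, a dict of mock attributes)."""
    return f"Observed: {frame.get(question, 'nothing unusual')}"

def llm_refine(question, caption):
    """Stub LLM agent: picks the next, more specific question given the caption.

    Returns None when the agent has no further follow-up ("prompt updating" ends).
    """
    follow_ups = {"scene": "activity", "activity": "interaction"}
    return follow_ups.get(question)

def llm_score(dialogue):
    """Stub LLM agent: maps the accumulated Q/A dialogue to an anomaly score in [0, 1]."""
    text = " ".join(dialogue).lower()
    return 1.0 if any(w in text for w in ("fight", "weapon", "fall")) else 0.0

def qvad_score(frame, max_rounds=3):
    """Iteratively refine the query, then score the resulting dialogue."""
    question, dialogue = "scene", []
    for _ in range(max_rounds):
        caption = vlm_caption(frame, question)
        dialogue.append(f"Q:{question} A:{caption}")
        question = llm_refine(question, caption)
        if question is None:  # no further refinement needed
            break
    return llm_score(dialogue), dialogue

score, dialogue = qvad_score({"scene": "parking lot", "activity": "two people fight"})
```

Because only prompts change between rounds, no parameters are updated, which is what lets a lightweight VLM stand in for a much larger static-prompt model.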

Lokman Bekit, Hamza Karim, Nghia T Nguyen, Yasin Yilmaz• 2026

Related benchmarks

Task                      Dataset             Result        Rank
Video Anomaly Detection   UCF-Crime           AUC 84.28     218
Video Anomaly Detection   XD-Violence         AP 68.53      93
Video Anomaly Detection   UBnormal (test)     AUC 79.6      44
Video Anomaly Detection   ComplexVAD (test)   AUC 0.6802    4
