BIAS: A Biologically Inspired Algorithm for Video Saliency Detection
About
We present BIAS, a fast, biologically inspired model for dynamic visual saliency detection in continuous video streams. Building on the Itti--Koch framework, BIAS incorporates a retina-inspired motion detector to extract temporal features, enabling the generation of saliency maps that integrate both static and motion information. Foci of attention (FOAs) are identified using a greedy multi-Gaussian peak-fitting algorithm that balances winner-take-all competition with information maximization. BIAS detects salient regions with millisecond-scale latency and outperforms heuristic-based approaches and several deep-learning models on the DHF1K dataset, particularly in videos dominated by bottom-up attention. Applied to traffic accident analysis, BIAS demonstrates strong real-world utility, achieving state-of-the-art performance in cause-effect recognition and anticipating accidents up to 0.72 seconds before manual annotation with reliable accuracy. Overall, BIAS bridges biological plausibility and computational efficiency to achieve interpretable, high-speed dynamic saliency detection.
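
The paper itself does not include reference code here, but the pipeline described above can be illustrated with a minimal sketch: a center-surround static channel standing in for the Itti--Koch features, smoothed frame differencing standing in for the retina-inspired motion detector, and a greedy loop that repeatedly picks the strongest peak and subtracts a Gaussian bump around it to emit FOAs. Function names (`combine_saliency`, `extract_foas`), the blending weight, and the Gaussian widths are illustrative assumptions, not values from BIAS.

```python
# Minimal sketch of a BIAS-style saliency pipeline (assumptions: numpy/scipy only,
# frame differencing approximates the retinal motion detector, fixed-width Gaussians
# model FOAs). Not the authors' implementation.
import numpy as np
from scipy.ndimage import gaussian_filter

def static_saliency(frame, center_sigma=2, surround_sigma=8):
    """Itti--Koch-style center-surround contrast on a grayscale frame."""
    center = gaussian_filter(frame.astype(float), center_sigma)
    surround = gaussian_filter(frame.astype(float), surround_sigma)
    return np.abs(center - surround)

def motion_saliency(frame, prev_frame, sigma=2):
    """Crude temporal channel: smoothed absolute frame difference."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    return gaussian_filter(diff, sigma)

def combine_saliency(static_map, motion_map, w_motion=0.6):
    """Normalize each channel to [0, 1] and blend into a single saliency map."""
    def norm(m):
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    return (1 - w_motion) * norm(static_map) + w_motion * norm(motion_map)

def extract_foas(saliency, max_foas=5, sigma=10, min_peak=0.2):
    """Greedy multi-Gaussian peak fitting: take the global maximum (winner-take-all),
    subtract a Gaussian bump centered on it, and repeat while enough salience remains
    (a stand-in for the information-maximization stopping rule)."""
    residual = saliency.copy()
    ys, xs = np.mgrid[0:saliency.shape[0], 0:saliency.shape[1]]
    foas = []
    for _ in range(max_foas):
        peak = np.unravel_index(np.argmax(residual), residual.shape)
        amp = float(residual[peak])
        if amp < min_peak:  # stop when the remaining salience is negligible
            break
        foas.append((peak, amp))
        bump = amp * np.exp(-((ys - peak[0]) ** 2 + (xs - peak[1]) ** 2) / (2 * sigma ** 2))
        residual = np.clip(residual - bump, 0, None)
    return foas

if __name__ == "__main__":
    prev = np.random.rand(120, 160)
    cur = prev.copy()
    cur[40:60, 70:90] += 1.0  # simulate a moving bright patch
    sal = combine_saliency(static_saliency(cur), motion_saliency(cur, prev))
    print(extract_foas(sal))
```

In this toy setup the moving patch dominates the combined map, so the first FOA returned by `extract_foas` lands on it; subsequent FOAs pick up weaker residual peaks, mirroring the winner-take-all ordering described above.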
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Video saliency prediction | DHF1K | AUC-J | 0.869 | 51 |
| Traffic Accident Anticipation | Traffic Accident Prediction dataset | IoU (Threshold 0.1) | 89 | 10 |
| Effect Segmentation | Traffic Accident Causality Recognition | mIoU (IoU >= 0.1) | 0.796 | 4 |
| Cause Segmentation | Traffic Accident Causality Recognition | IoU (Threshold 0.1) | 51.3 | 4 |