
VISER: Visually-Informed System for Enhanced Robustness in Open-Set Iris Presentation Attack Detection

About

Human perceptual priors have shown promise in saliency-guided deep learning training, particularly in the domain of iris presentation attack detection (PAD). Common saliency approaches include hand annotations obtained via mouse clicks and eye gaze heatmaps derived from eye tracking data. However, the most effective form of human saliency for open-set iris PAD remains underexplored. In this paper, we conduct a series of experiments comparing hand annotations, eye tracking heatmaps, segmentation masks, and DINOv2 embeddings to a state-of-the-art deep learning-based baseline on the task of open-set iris PAD. Results for open-set PAD in a leave-one-attack-type-out paradigm indicate that denoised eye tracking heatmaps show the best generalization improvement over cross-entropy training in terms of Area Under the ROC Curve (AUROC) and Attack Presentation Classification Error Rate (APCER) at a Bona Fide Presentation Classification Error Rate (BPCER) of 1%. Along with this paper, we offer trained models, code, and saliency maps for reproducibility and to facilitate follow-up research efforts.
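The two reported metrics can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's evaluation code; the function names and the score convention (higher score means "attack") are assumptions:

```python
import numpy as np

def auroc(scores_attack, scores_bona):
    """AUROC: probability that a randomly chosen attack sample
    scores higher than a randomly chosen bona fide sample
    (ties counted as 0.5). Assumes higher score = more attack-like."""
    a = np.asarray(scores_attack)[:, None]
    b = np.asarray(scores_bona)[None, :]
    return float(np.mean((a > b) + 0.5 * (a == b)))

def apcer_at_bpcer(scores_attack, scores_bona, bpcer=0.01):
    """APCER at a fixed BPCER: pick the threshold at which at most
    `bpcer` of bona fide samples are wrongly flagged as attacks,
    then report the fraction of attacks wrongly accepted as bona fide."""
    thr = np.quantile(np.asarray(scores_bona), 1.0 - bpcer)
    return float(np.mean(np.asarray(scores_attack) <= thr))
```

On perfectly separable scores, `auroc` returns 1.0 and `apcer_at_bpcer` returns 0.0; in practice the open-set (leave-one-attack-type-out) protocol makes both metrics substantially worse than in the closed-set case.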

Byron Dowling, Eleanor Frederick, Jacob Piland, Adam Czajka • 2026

Related benchmarks

Task | Dataset | Result | Rank
Iris Presentation Attack Detection | Iris Presentation Attack Detection (PAD) Open-set (test) | Performance Score (Printout): 0.0604 | 12
Presentation Attack Detection | Iris Presentation Attack Detection Open-set (test) | Printout Score: -0.2893 | 12
