
Benchmarking Foundation Models for Zero-Shot Biometric Tasks

About

The advent of foundation models, particularly Vision-Language Models (VLMs) and Multi-modal Large Language Models (MLLMs), has redefined the frontiers of artificial intelligence, enabling remarkable generalization across diverse tasks with minimal or no supervision. Yet, their potential in biometric recognition and analysis remains relatively underexplored. In this work, we introduce a comprehensive benchmark that evaluates the zero-shot and few-shot performance of state-of-the-art publicly available VLMs and MLLMs across six biometric tasks spanning the face and iris modalities: face verification, soft biometric attribute prediction (gender and race), iris recognition, presentation attack detection (PAD), and face manipulation detection (morphs and deepfakes). A total of 41 VLMs were used in this evaluation. Experiments show that embeddings from these foundation models can be used for diverse biometric tasks with varying degrees of success. For example, in the case of face verification, a True Match Rate (TMR) of 96.77 percent was obtained at a False Match Rate (FMR) of 1 percent on the Labeled Faces in the Wild (LFW) dataset, without any fine-tuning. In the case of iris recognition, the TMR at 1 percent FMR on the IITD-R-Full dataset was 97.55 percent, again without any fine-tuning. Further, we show that applying a simple classifier head to these embeddings can help perform deepfake detection for faces, PAD for irides, and extraction of soft biometric attributes like gender and ethnicity from faces with reasonably high accuracy. This work reiterates the potential of pretrained models in achieving the long-term vision of Artificial General Intelligence.
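As an illustration of the zero-shot verification protocol described above (embeddings compared by cosine similarity, performance reported as TMR at a fixed FMR), here is a minimal sketch. The embeddings below are synthetic stand-ins, and all function names are placeholders for illustration, not the paper's actual code.

```python
import numpy as np

def cosine_similarity(a, b):
    """Row-wise cosine similarity between two batches of embeddings."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return np.sum(a * b, axis=-1)

def tmr_at_fmr(genuine_scores, impostor_scores, target_fmr=0.01):
    """True Match Rate at a fixed False Match Rate.

    The threshold is the (1 - target_fmr) quantile of the impostor
    scores, so roughly target_fmr of impostor pairs score at or
    above it; TMR is the fraction of genuine pairs that also do.
    """
    threshold = np.quantile(np.asarray(impostor_scores), 1.0 - target_fmr)
    return float(np.mean(np.asarray(genuine_scores) >= threshold))

# Toy demo with synthetic "embeddings": genuine pairs are noisy copies
# of the same vector, impostor pairs are independent random vectors.
rng = np.random.default_rng(0)
base = rng.normal(size=(500, 128))
genuine = cosine_similarity(base, base + 0.3 * rng.normal(size=base.shape))
impostor = cosine_similarity(base, rng.normal(size=base.shape))
print(tmr_at_fmr(genuine, impostor, target_fmr=0.01))
```

In a real evaluation, `base` and its pair would be replaced by frozen VLM/MLLM embeddings of face or iris image pairs, with no fine-tuning of the backbone.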

Redwan Sony, Parisa Farmanifard, Hamzeh Alzwairy, Nitish Shukla, Arun Ross • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Presentation Attack Detection | MSIrPAD (train on Artefact #2, test on remaining artifacts) | D-EER 44.44 | 6 |
| Iris Presentation Attack Detection | MSIrPAD (train on Artefact #4, test on remaining artifacts, cross-artifact) | D-EER 40.4 | 6 |
| Presentation Attack Detection | MSIrPAD (train on Artefact #8, test on remaining artifacts, cross-artifact) | D-EER 37.25 | 6 |
| Presentation Attack Detection | MSIrPAD (train on Artefact #1, test on remaining artifacts) | D-EER 46.57 | 6 |
| Presentation Attack Detection | MSIrPAD (train on Artefact #3, test on remaining artifacts, cross-artifact) | D-EER 0.3931 | 6 |
| Presentation Attack Detection | MSIrPAD (train on Artefact #6, test on remaining artifacts, cross-artifact) | D-EER 47.76 | 6 |
| Presentation Attack Detection | MSIrPAD (train on Artefact #5, test on remaining artifacts, cross-artifact) | D-EER 48.67 | 6 |
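The benchmark rows above report D-EER (Detection Equal Error Rate): the operating point at which the rate of bonafide samples wrongly rejected (BPCER) equals the rate of attack samples wrongly accepted (APCER). As a minimal sketch, not the benchmark's scoring code, the following computes D-EER from raw detection scores; the assumption that a higher score means "more likely bonafide" is mine.

```python
import numpy as np

def d_eer(bonafide_scores, attack_scores):
    """Detection Equal Error Rate, in percent.

    Scans candidate thresholds and returns the mean of BPCER and
    APCER at the threshold where the two error rates are closest.
    Assumes higher scores indicate bonafide samples.
    """
    bona = np.asarray(bonafide_scores)
    atk = np.asarray(attack_scores)
    thresholds = np.unique(np.concatenate([bona, atk]))
    eer, gap = 1.0, np.inf
    for t in thresholds:
        bpcer = np.mean(bona < t)    # bonafide wrongly rejected
        apcer = np.mean(atk >= t)    # attacks wrongly accepted
        if abs(bpcer - apcer) < gap:
            gap = abs(bpcer - apcer)
            eer = (bpcer + apcer) / 2
    return float(eer * 100)
```

With perfectly separated scores D-EER is 0; a random detector sits near 50, which puts the cross-artifact results in the table (roughly 37-49) close to chance level.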
