
Defect-aware Hybrid Prompt Optimization via Progressive Tuning for Zero-Shot Multi-type Anomaly Detection and Segmentation

About

Recent vision-language models (VLMs) like CLIP have demonstrated impressive anomaly detection performance under significant distribution shift by utilizing high-level semantic information through text prompts. However, these models often neglect fine-grained details, such as the specific kind of anomaly, e.g., "hole", "cut", or "scratch", that could provide more precise insight into its nature. We argue that recognizing fine-grained anomaly types 1) enriches the representation of "abnormal" with structured semantics, narrowing the gap between coarse anomaly signals and fine-grained defect categories; and 2) enables manufacturers to understand the root cause of an anomaly and quickly implement more targeted and appropriate corrective measures. While incorporating such detailed semantic information is crucial, designing handcrafted prompts for each defect type is both time-consuming and susceptible to human bias. For this reason, we introduce DAPO, a novel approach for Defect-aware Prompt Optimization based on progressive tuning for zero-shot multi-type and binary anomaly detection and segmentation under distribution shifts. Our approach aligns anomaly-relevant image features with their corresponding text semantics by learning hybrid defect-aware prompts that combine fixed textual anchors with learnable token embeddings. We conducted experiments on public benchmarks (MPDD, VisA, MVTec-AD, MAD, and Real-IAD) and an internal dataset. The results suggest that, compared to the baseline models, DAPO achieves a 3.7% average improvement in AUROC and average precision metrics at the image level under distribution shift, and a 6.5% average improvement in localizing novel anomaly types under zero-shot settings.
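The paper's code is not shown on this page, so the following is only a minimal NumPy sketch of the general idea behind hybrid defect-aware prompts: learnable context tokens are concatenated with a fixed per-defect anchor embedding, and the pooled prompt vector is scored against an image feature by cosine similarity. All names, dimensions, and the pooling scheme here are illustrative assumptions, not DAPO's actual implementation (which operates on real CLIP tokenizer/encoder outputs and trains the context tokens by backpropagation).

```python
import numpy as np

rng = np.random.default_rng(0)
EMB = 64    # toy embedding width (illustrative, not the paper's)
N_CTX = 4   # number of learnable context tokens per prompt

# Fixed textual anchors: one frozen embedding per defect type.
# In a real CLIP pipeline these would come from the tokenizer + embedding table.
defect_types = ["hole", "cut", "scratch", "normal"]
anchor_emb = {d: rng.normal(size=(1, EMB)) for d in defect_types}

# Learnable context tokens shared across defect types (updated by
# backpropagation in the actual method; here merely initialized).
ctx = rng.normal(scale=0.02, size=(N_CTX, EMB))

def build_prompt(defect: str) -> np.ndarray:
    """Hybrid prompt: [learnable ctx tokens ; fixed anchor] -> pooled unit vector."""
    tokens = np.concatenate([ctx, anchor_emb[defect]], axis=0)  # (N_CTX+1, EMB)
    pooled = tokens.mean(axis=0)                                # toy pooling
    return pooled / np.linalg.norm(pooled)

def classify(image_feat: np.ndarray) -> str:
    """Zero-shot scoring: cosine similarity between the image and each prompt."""
    image_feat = image_feat / np.linalg.norm(image_feat)
    scores = {d: float(image_feat @ build_prompt(d)) for d in defect_types}
    return max(scores, key=scores.get)

image_feat = rng.normal(size=EMB)   # stand-in for a CLIP image embedding
print(classify(image_feat))         # prints one of the defect-type names
```

Because the anchors stay frozen, only the small `ctx` matrix would need gradient updates, which is what makes this prompt-tuning family cheap compared to fine-tuning the full VLM.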

Nadeem Nazer, Hongkuan Zhou, Lavdim Halilaj, Ylli Sadikaj, Steffen Staab • 2025

Related benchmarks

Task                             | Dataset        | Result                   | Rank
Anomaly Detection                | VisA           | --                       | 199
Anomaly Detection                | MPDD           | Clean AUROC: 0.812       | 62
Anomaly Detection                | MPDD (test)    | --                       | 54
Anomaly Segmentation             | VisA (test)    | --                       | 51
Anomaly Segmentation             | MPDD           | AUROC: 0.951             | 31
Anomaly Segmentation             | VisA           | --                       | 23
Anomaly Detection                | Real-IAD       | I-AUROC: 84.3            | 18
Anomaly Segmentation             | Real-IAD       | Pixel-level AUROC: 96.4  | 6
Binary Anomaly Detection         | Semi-conductor | AUROC: 0.918             | 3
Multi-type Anomaly Segmentation  | MAD-Sim (test) | AUROC: 91.9              | 2
