# AEM: Attention Entropy Maximization for Multiple Instance Learning based Whole Slide Image Classification

## About
Multiple Instance Learning (MIL) is effective for analyzing whole slide images, but it is prone to overfitting caused by attention over-concentration. Whereas existing solutions rely on complex architectural modifications or additional processing steps, we introduce Attention Entropy Maximization (AEM), a simple yet effective regularization technique. Our investigation reveals a positive correlation between attention entropy and model performance. Building on this insight, we integrate AEM regularization into the MIL framework to penalize excessive attention concentration. To address sensitivity to the AEM weight hyperparameter, we apply Cosine Weight Annealing, reducing parameter dependency. Extensive evaluations demonstrate AEM's superior performance across diverse feature extractors, MIL frameworks, attention mechanisms, and augmentation techniques. Code: https://github.com/dazhangyu123/AEM.
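The idea above can be sketched in a few lines: compute the Shannon entropy of the attention distribution over instances, subtract it from the classification loss (so minimizing the total loss maximizes entropy), and anneal the regularization weight with a cosine schedule. This is a minimal NumPy illustration, not the repository's implementation; the function names, the `w_max` default, and the decaying direction of the schedule are assumptions for illustration.

```python
import numpy as np

def attention_entropy(attn, eps=1e-8):
    # Shannon entropy of an attention distribution over N instances.
    # Higher entropy = attention spread over more instances.
    attn = attn / attn.sum()
    return -np.sum(attn * np.log(attn + eps))

def cosine_weight(step, total_steps, w_max):
    # Cosine Weight Annealing (direction assumed): decay the AEM
    # weight from w_max at step 0 toward 0 at total_steps.
    return w_max * 0.5 * (1.0 + np.cos(np.pi * step / total_steps))

def aem_loss(cls_loss, attn, step, total_steps, w_max=0.1):
    # Total loss = classification loss - annealed_weight * entropy.
    # Subtracting entropy maximizes it, penalizing attention
    # over-concentration on a few instances.
    return cls_loss - cosine_weight(step, total_steps, w_max) * attention_entropy(attn)
```

For example, a uniform attention map over 4 instances attains the maximum entropy `log(4)`, so it receives the smallest penalty, while attention concentrated on a single instance has near-zero entropy and increases the regularized loss.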
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Whole Slide Image classification | CAMELYON16 (test) | AUC | 0.998 | 127 |
| Whole Slide Image classification | CAMELYON17 (test) | AUC | 90.5 | 33 |
| Whole Slide Image classification | LBC (test) | F1 Score | 0.691 | 24 |