
Multi-View Hypercomplex Learning for Breast Cancer Screening

About

Radiologists interpret mammography exams by jointly analyzing all four views, as correlations among them are crucial for accurate diagnosis. Recent methods employ dedicated fusion blocks to capture such dependencies, but these are often hindered by view dominance, training instability, and computational overhead. To address these challenges, we introduce multi-view hypercomplex learning, a novel learning paradigm for multi-view breast cancer classification based on parameterized hypercomplex neural networks (PHNNs). Thanks to hypercomplex algebra, our models intrinsically capture both intra- and inter-view relations. We propose PHResNets for two-view exams and two complementary four-view architectures: PHYBOnet, optimized for efficiency, and PHYSEnet, optimized for accuracy. Extensive experiments demonstrate that our approach consistently outperforms state-of-the-art multi-view models, while also generalizing across radiographic modalities and tasks such as disease classification from chest X-rays and multimodal brain tumor segmentation. Full code and pretrained models are available at https://github.com/ispamm/PHBreast.
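The parameterized hypercomplex layers underlying these models replace a dense weight with a sum of Kronecker products, W = Σᵢ Aᵢ ⊗ Fᵢ, where the small Aᵢ matrices learn the multiplication rules of the algebra and the Fᵢ blocks hold the filters. A minimal NumPy sketch of this weight construction for the linear case (shapes and names here are illustrative, not the authors' implementation, which uses convolutional PHM layers):

```python
import numpy as np

def phm_weight(A, F):
    """Build a PHM layer weight W = sum_i kron(A[i], F[i]).

    A: (n, n, n)               -- learned algebra "rule" matrices
    F: (n, d_out//n, d_in//n)  -- learnable filter blocks
    Returns W of shape (d_out, d_in), used like a dense weight.
    """
    return sum(np.kron(Ai, Fi) for Ai, Fi in zip(A, F))

rng = np.random.default_rng(0)
n, d_in, d_out = 4, 64, 64          # n=4 mirrors the four-view setting
A = rng.standard_normal((n, n, n))
F = rng.standard_normal((n, d_out // n, d_in // n))

W = phm_weight(A, F)                # (64, 64)
y = W @ rng.standard_normal(d_in)   # forward pass of the hypercomplex linear layer

# Parameter count: n^3 + d_out*d_in/n vs d_out*d_in for a dense layer,
# i.e. roughly a 1/n reduction for large layers.
print(A.size + F.size, "vs", d_out * d_in)   # 1088 vs 4096
```

Because the Aᵢ are learned rather than fixed to a known algebra (e.g. quaternions), the layer can adapt its cross-block interactions to the correlations among mammography views while keeping roughly 1/n of the parameters of a dense layer.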

Eleonora Lopez, Eleonora Grassucci, Danilo Comminiello • 2022

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Chest X-ray classification | CheXpert (test) | – | 27 |
| Two-view breast cancer classification | INbreast (test) | AUC 0.793 | 13 |
| Classification | CBIS-DDSM mass | AUC 0.739 | 11 |
| Four-view mammography classification | INbreast (test) | AUC 0.814 | 10 |
| Two-view breast cancer classification | CBIS-DDSM mass and calc. combined (test) | AUC 0.677 | 5 |
| Overall survival prediction | BraTS 2019 (val) | Accuracy 51.7 | 4 |
| Four-view breast cancer classification | INbreast | AUC 0.814 | 2 |
| Brain tumor segmentation | BraTS 2019 | Dice 0.825 | 2 |

Other info

Code
