
Navigating Conflicting Views: Harnessing Trust for Learning

About

Resolving conflicts is critical for improving the reliability of multi-view classification. While prior work focuses on learning consistent and informative representations across views, it often assumes perfect alignment and equal importance of all views — an assumption rarely met in real-world scenarios, where some views may express distinct information. To address this, we develop a computational trust-based discounting method that enhances the Evidential Multi-view framework by accounting for the instance-wise reliability of each view through a probability-sensitive trust mechanism. We evaluate our method on six real-world datasets using Top-1 Accuracy, Fleiss' Kappa, and a new metric, Multi-View Agreement with Ground Truth, to assess prediction reliability. We also assess the effectiveness of uncertainty in indicating prediction correctness via AUROC. Additionally, we test the scalability of our method through end-to-end training on a large-scale dataset. The experimental results show that computational trust can effectively resolve conflicts, paving the way for more reliable multi-view classification models in real-world applications. Code available at: https://github.com/OverfitFlow/Trust4Conflict
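To make the idea concrete, here is a minimal sketch of trust discounting in the subjective-logic / evidential setting the abstract builds on: each view's Dirichlet evidence is turned into an opinion (per-class beliefs plus an uncertainty mass), a trust score shrinks the beliefs and shifts the freed mass into uncertainty, and the discounted opinions are fused with the reduced Dempster rule used in evidential multi-view fusion. The function names and the way the trust score is obtained are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def evidence_to_opinion(evidence):
    """Convert Dirichlet evidence for K classes into a subjective-logic
    opinion: per-class belief masses b and an overall uncertainty mass u,
    with b.sum() + u == 1 (alpha = evidence + 1)."""
    K = len(evidence)
    S = evidence.sum() + K          # Dirichlet strength
    return evidence / S, K / S

def trust_discount(b, u, t):
    """Discount an opinion by a trust score t in [0, 1]: beliefs shrink by t
    and the freed mass moves into uncertainty (standard subjective-logic
    trust discounting). In the paper, t is produced per instance and per
    view; here it is simply passed in."""
    return t * b, 1.0 - t * (1.0 - u)

def combine(b1, u1, b2, u2):
    """Reduced Dempster combination of two opinions, the fusion rule used
    in evidential multi-view classification."""
    conflict = np.sum(np.outer(b1, b2)) - np.sum(b1 * b2)  # mass on disagreeing class pairs
    norm = 1.0 - conflict
    b = (b1 * b2 + b1 * u2 + b2 * u1) / norm
    u = (u1 * u2) / norm
    return b, u

# Two views over 3 classes: a confident view and a low-trust conflicting view.
b1, u1 = evidence_to_opinion(np.array([10.0, 0.0, 0.0]))
b2, u2 = evidence_to_opinion(np.array([0.0, 6.0, 0.0]))
b2d, u2d = trust_discount(b2, u2, 0.2)   # discount the unreliable view
b, u = combine(b1, u1, b2d, u2d)
print(b.argmax(), u)                     # fused prediction and residual uncertainty
```

Discounting an unreliable view before fusion keeps its conflicting beliefs from dominating the combined opinion, which is the mechanism the abstract credits for resolving view conflicts.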

Jueqing Lu, Wray Buntine, Yuanyuan Qi, Joanna Dipnall, Belinda Gabbe, Lan Du • 2024

Related benchmarks

Task                       Dataset      Accuracy   Rank
Multi-view Classification  PIE          95.59      16
Multi-view Classification  Caltech      99.11      16
Multi-view Classification  NUS          44.12      16
Multi-view Classification  MSRC V1     93.81      16
Multi-view Classification  WebKB        80.49      16
Multi-view Classification  Caltech-6V   93.97      16
Multi-view Classification  BBC          92.12      16
Multi-view Classification  UCI          96.9       16
Multi-view Classification  CUB          92.33      16
Multi-view Classification  Scene        68.21      16

Showing 10 of 12 rows.
