
Multimodal Machine Unlearning Evaluation on MLLMU-Bench (test)

[Chart: Classification Accuracy on MLLMU-Bench (test) over time, Feb 21, 2025 – Nov 23, 2025; top entry: Vanilla, 47.86]

Evaluation Results

| Method  | Date    | Classification Accuracy | Metric 2 | Metric 3 | Metric 4 |
|---------|---------|------------------------:|---------:|---------:|---------:|
| Vanilla | 2025.11 | 47.86 | 0.539 | 4.89 | 23.01 |
| —       | 2025.11 | 47.53 | 0.502 | 4.08 | 25.33 |
| —       | 2025.11 | 47.41 | 0.51  | 5.2  | 25.43 |
| —       | 2025.11 | 47.29 | 0.479 | 4.21 | 24.11 |
| —       | 2025.11 | 46.81 | 0.483 | 3.67 | 24.56 |
| —       | 2025.11 | 46.42 | 0.408 | 4.25 | 21.66 |
| —       | 2025.11 | 45.2  | 0.396 | 4.54 | 20.04 |
| —       | 2025.11 | 44.87 | 0.415 | 4.18 | 21.99 |
| —       | 2025.11 | 44.44 | 0.347 | 3.91 | 20    |
| —       | 2025.11 | 43.95 | 0.358 | 3.84 | 19.35 |
| —       | 2025.11 | 43.41 | 0.383 | 3.83 | 16.19 |
| —       | 2025.11 | 43.2  | 0.439 | 3.78 | 21.09 |
| —       | 2025.11 | 42.75 | 0.42  | 3.29 | 20.5  |
| —       | 2025.11 | 42.67 | 0.331 | 3.72 | 18.81 |
| —       | 2025.11 | 42.18 | 0.401 | 3.61 | 18.11 |
| —       | 2025.11 | 40.87 | 0.432 | 3.35 | 16.92 |
| —       | 2025.11 | 40.6  | 0.421 | 3.19 | 15.77 |
| —       | 2025.11 | 40.15 | 0.445 | 3.52 | 17.88 |
| —       | 2025.02 | 39.75 | 0.355 | 3.51 | 21.88 |
| —       | 2025.11 | 39.64 | 0.371 | 3.7  | 17.67 |
| —       | 2025.11 | 39.33 | 0.439 | 4.01 | 17.88 |
| —       | 2025.11 | 39.08 | 0.414 | 3.07 | 14.5  |
| —       | 2025.11 | 38.4  | 0.384 | 3.47 | 16.47 |
| —       | 2025.11 | 38.32 | 0.421 | 3.08 | 17.11 |
| —       | 2025.02 | 35.55 | 0.361 | 3.91 | 20.97 |
| —       | 2025.02 | 35.32 | 0.333 | 3.1  | 16.66 |
| —       | 2025.11 | 34.21 | 0.407 | 3.01 | 19.78 |
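Rows like the ones above can be handled programmatically when comparing unlearning methods. A minimal sketch, assuming each row is a tuple of (date, Classification Accuracy, then the three unlabeled metrics); the subset of rows and the ranking key are illustrative, not part of the leaderboard's own tooling:

```python
# Illustrative subset of leaderboard rows:
# (date, classification accuracy, metric 2, metric 3, metric 4).
# Column names beyond Classification Accuracy are not given on the page.
rows = [
    ("2025.11", 47.86, 0.539, 4.89, 23.01),  # Vanilla (no unlearning)
    ("2025.11", 47.53, 0.502, 4.08, 25.33),
    ("2025.02", 39.75, 0.355, 3.51, 21.88),
    ("2025.02", 35.55, 0.361, 3.91, 20.97),
]

# Rank by Classification Accuracy, descending -- the metric the chart plots.
ranked = sorted(rows, key=lambda r: r[1], reverse=True)
best = ranked[0]
print(best[1])  # 47.86
```

Sorting on index 1 reproduces the leaderboard's ordering, which is by Classification Accuracy from highest to lowest.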