| Task Name | Dataset Name | Metric | SOTA Value | Trend |
|---|---|---|---|---|
| Machine Unlearning | MUSE Books | Privacy Leakage | -76.1834 | 35 |
| Machine Unlearning | MUSE-News Llama 2 7B | Privacy Leakage | -99.8951 | 27 |
| Unlearning | MUSE-Books 1.0 (test) | Unlearn Score | 86 | 24 |
| Reasoning Segmentation | MUSE (val) | gIoU (overall) | 48 | 21 |
| Machine Unlearning | MUSE News | VerbMem (Df) | 58.42 | 18 |
| Machine Unlearning | MUSE | VerbMem on Df | 0 | 16 |
| Reasoning Segmentation | MUSE (test) | gIoU (overall) | 42.3 | 16 |
| Machine Unlearning | MUSE-Books Relearn 50% | Forgetting Score (No VerbMem) | 90.974 | 15 |
| Machine Unlearning | MUSE (forget set (Df) and retain set (Dr)) | VerbMem (Df) | 58.4 | 15 |
| Unlearning | MUSE-Books Harry Potter 100 samples (forget set) | R-Forget | 32.13 | 13 |
| Machine Unlearning | MUSE News | Rel Score | 8.3 | 9 |
| Machine Unlearning | MUSE Books | Rel | 7.55 | 9 |
| Knowledge Retention | MUSE Retain set (Dr) | KnowMem | 56 | 9 |
| Knowledge Unlearning | MUSE (forget set Df) | VerbMem Df Pre | 57.9 | 8 |
| Relearning Attack | MUSE | RAP | 43 | 8 |
| Bilingual Lexicon Induction | MUSE (test) | P@1 (en→es) | 89.9 | 7 |
| Cross-lingual Word Alignment | MUSE | Alignment Score (IT-EN) | 81.84 | 7 |
| Multi-target reasoning segmentation | MUSE (val) | Overall gIoU | 52.4 | 6 |
| Conversational Recommendation | MUSE Multimodal Fashion (test) | R@1 | 10.2 | 5 |
| Conversational Recommendation | MUSE (n=200) | Recommendation Quality (Rec.Q) | 4.16 | 3 |
| Bilingual Lexicon Induction | MUSE zh-en (test) | Precision | 96.6 | 2 |