| Task Name | Dataset Name | SOTA Metric | SOTA Value | Trend |
|---|---|---|---|---|
| Machine Unlearning | MUSE-News Llama 2 7B | Privacy Leakage | -99.8951 | 27 |
| Machine Unlearning | MUSE-Books | Privacy Leakage | -76.1834 | 25 |
| Reasoning Segmentation | MUSE (val) | gIoU (overall) | 48 | 21 |
| Machine Unlearning | MUSE | VerbMem on Df | 0 | 16 |
| Reasoning Segmentation | MUSE (test) | gIoU (overall) | 42.3 | 16 |
| Unlearning | MUSE-Books Harry Potter 100 samples (forget set) | R-Forget | 32.13 | 13 |
| Knowledge Retention | MUSE Retain set (Dr) | KnowMem | 56 | 9 |
| Machine Unlearning | MUSE-News | VerbMem (Df) | 58.42 | 8 |
| Relearning Attack | MUSE | RAP | 43 | 8 |
| Bilingual Lexicon Induction | MUSE (test) | P@1 (en→es) | 89.9 | 7 |
| Cross-lingual Word Alignment | MUSE | Alignment Score (IT-EN) | 81.84 | 7 |
| Multi-target Reasoning Segmentation | MUSE (val) | Overall gIoU | 52.4 | 6 |
| Bilingual Lexicon Induction | MUSE zh-en (test) | Precision | 96.6 | 2 |