| Task Name | Dataset Name | SOTA Result | Trend |
|---|---|---|---|
| Personalized Scholarly Refinement | LaMP-5 | ROUGE-L 40.5 | 32 |
| Tweet Paraphrasing/Generation | LaMP Tweet | ROUGE-1 42.2 | 24 |
| Scholarly Abstract Generation | LaMP Scholar | ROUGE-1 44.6 | 24 |
| News Headline Generation | LaMP News | ROUGE-1 18.8 | 24 |
| Product Rating Prediction | LaMP Rating | MAE 0.236 | 24 |
| Movie Recommendation | LaMP Movie | Accuracy 57 | 24 |
| Citation Recommendation | LaMP Citation | Accuracy 73.8 | 24 |
| Maximum Inner Product Search | LaMP Rating (test) | Top-1 Accuracy 100 | 14 |
| Maximum Inner Product Search | LaMP Movie (test) | Top-1 Accuracy 100 | 14 |
| Personalized Question Answering | LaMP-QA | Accuracy (Arts & Entertainment) 53.27 | 10 |
| Text Generation | LaMP-4 (test) | ROUGE-1 21.1 | 9 |
| Ordinal Classification | LaMP-3 (test) | MAE 0.242 | 9 |
| Categorical Classification | LaMP-2 (test) | Accuracy 55.9 | 9 |
| Binary Classification | LaMP-1 (test) | Accuracy 70 | 9 |
| LaMP-5 Personalization | LaMP-5 (val) | ROUGE-1 0.487 | 9 |
| LaMP-4 Personalization | LaMP-4 (val) | ROUGE-1 0.216 | 9 |
| LaMP-3 Personalization | LaMP-3 (val) | MAE 0.231 | 9 |
| LaMP-1 Personalization | LaMP-1 (val) | Accuracy 68.2 | 9 |
| Language Model Personalization | LaMP standard (full-data) | LaMP-1 Score 0.735 | 8 |
| Language Model Personalization | LaMP few-shot personalization setting | LaMP-1 Accuracy 52 | 8 |
| Personalization | LaMP-4 | ROUGE-1 21.2 | 8 |
| Personalization | LaMP-2 | Accuracy 67.9 | 8 |
| Personalization | LaMP-1 | Accuracy 65.6 | 8 |
| Scholarly Title Generation | LaMP-5 1.0 (test) | ROUGE-1 0.483 | 8 |
| News Headline Generation | LaMP-4 1.0 (test) | ROUGE-1 0.188 | 8 |
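The table mixes three metric families: exact-match accuracy for the classification and recommendation tasks, MAE for ordinal rating prediction (lower is better), and unigram/longest-common-subsequence overlap (ROUGE) for the generation tasks. A minimal sketch of each, assuming simple whitespace tokenization for ROUGE-1; the official leaderboard numbers use the standard ROUGE toolkit, whose tokenization and stemming differ from this simplification:

```python
from collections import Counter


def accuracy(preds, golds):
    """Fraction of exact matches, reported as a percentage as in the table."""
    return 100.0 * sum(p == g for p, g in zip(preds, golds)) / len(golds)


def mae(preds, golds):
    """Mean absolute error for ordinal tasks such as product rating prediction."""
    return sum(abs(p - g) for p, g in zip(preds, golds)) / len(golds)


def rouge_1_f1(pred, gold):
    """Unigram-overlap ROUGE-1 F1 between one prediction and one reference.

    Simplified: whitespace tokens only, no stemming (an assumption of this
    sketch, not the official scorer).
    """
    p_counts, g_counts = Counter(pred.split()), Counter(gold.split())
    overlap = sum((p_counts & g_counts).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(p_counts.values())
    recall = overlap / sum(g_counts.values())
    return 2 * precision * recall / (precision + recall)
```

Note that some rows report ROUGE on a 0–1 scale (e.g. 0.487) and others as a percentage (e.g. 42.2); the function above returns the 0–1 form.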