Lightweight Fairness for LLM-Based Recommendations via Kernelized Projection and Gated Adapters

About

Large Language Models (LLMs) have introduced new capabilities to recommender systems, enabling dynamic, context-aware, and conversational recommendations. However, LLM-based recommender systems inherit and may amplify social biases embedded in their pre-training data, especially when demographic cues are present. Existing fairness solutions either require fine-tuning additional parameters or suffer from optimization instability. We propose a lightweight and scalable bias mitigation method that combines kernelized Iterative Null-space Projection (INLP) with a gated Mixture-of-Experts (MoE) adapter. Our approach estimates a closed-form projection that removes single or multiple sensitive attributes from LLM representations with no additional trainable parameters. To preserve task utility, we introduce a two-level MoE adapter that selectively restores useful signals without reintroducing bias. Experiments on two public datasets show that our method reduces attribute leakage across multiple protected variables while maintaining competitive recommendation accuracy.
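For intuition about the projection step, the sketch below shows a plain, non-kernelized iterative null-space projection in Python: a linear probe is fit to predict the sensitive attribute from the representations, and the representations are then projected onto the null space of the probe's weights, repeated for a few rounds. The function name `inlp_projection`, the use of scikit-learn's LogisticRegression as the probe, and the iteration count are illustrative assumptions; the paper's kernelized, closed-form variant and the gated MoE restoration step are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def inlp_projection(X, z, n_iters=5):
    """Estimate a d x d matrix P such that X @ P suppresses linearly
    decodable information about the sensitive attribute z.

    Illustrative, non-kernelized INLP sketch: each round fits a linear
    probe for z, then projects the representations onto the null space
    of the probe's weight directions.
    """
    d = X.shape[1]
    P = np.eye(d)
    for _ in range(n_iters):
        probe = LogisticRegression(max_iter=1000).fit(X, z)
        W = probe.coef_  # (n_classes_or_1, d) probe weight rows
        W = W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)
        # Projector onto the probe's weight directions, then its complement.
        null_proj = np.eye(d) - W.T @ np.linalg.pinv(W.T)
        X = X @ null_proj  # remove these directions before re-probing
        P = P @ null_proj  # compose with the earlier projections
    return P

# Toy usage: a hypothetical binary attribute leaked along one direction.
rng = np.random.default_rng(0)
X = rng.normal(size=(512, 64))
z = (X[:, 0] > 0).astype(int)
X_debiased = X @ inlp_projection(X, z)
```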

Nan Cui, Wendy Hui Wang, Yue Ning • 2026

Related benchmarks

Task                      | Dataset             | Result            | Rank
Sequential Recommendation | MovieLens 1M (test) | Hit@10 = 81.67    | 42
Direct Recommendation     | MovieLens           | Hit Rate@1 = 43.2 | 8
Direct Recommendation     | Insurance           | Hit@1 = 57.37     | 8
