FairLLaVA: Fairness-Aware Parameter-Efficient Fine-Tuning for Large Vision-Language Assistants
About
While powerful in image-conditioned generation, multimodal large language models (MLLMs) can display uneven performance across demographic groups, highlighting fairness risks. In safety-critical clinical settings, such disparities risk producing unequal diagnostic narratives and eroding trust in AI-assisted decision-making. Although fairness has been studied extensively in vision-only and language-only models, it remains largely underexplored in MLLMs. To address these biases, we introduce FairLLaVA, a parameter-efficient fine-tuning method that mitigates group disparities in visual instruction tuning without compromising overall performance. By minimizing the mutual information between the model's representations and demographic attributes, FairLLaVA regularizes those representations to be demographic-invariant. The method can be incorporated as a lightweight plug-in, maintaining efficiency with low-rank adapter fine-tuning, and provides an architecture-agnostic approach to fair visual instruction following. Extensive experiments on large-scale chest radiology report generation and dermoscopy visual question answering benchmarks show that FairLLaVA consistently reduces inter-group disparities while improving both equity-scaled clinical performance and natural language generation quality across diverse medical imaging modalities. Code can be accessed at https://github.com/bhosalems/FairLLaVA.
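The sketch below illustrates one way such a mutual-information penalty could be attached to a LoRA fine-tuning loss. It is not the released implementation: it assumes a CLUB-style variational upper bound on the mutual information between pooled multimodal features and a discrete demographic attribute, and names such as `DemographicMIPenalty` and the penalty weight are hypothetical placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DemographicMIPenalty(nn.Module):
    """CLUB-style variational upper bound on I(z; a) between pooled
    representations z and a discrete demographic attribute a.
    Minimizing the bound pushes z toward demographic invariance."""

    def __init__(self, hidden_dim: int, num_groups: int):
        super().__init__()
        # Variational approximation q(a | z), trained alongside the model.
        self.q_net = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim // 2),
            nn.ReLU(),
            nn.Linear(hidden_dim // 2, num_groups),
        )

    def log_q(self, z: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
        # log q(a_i | z_i) for each sample in the batch
        log_probs = F.log_softmax(self.q_net(z), dim=-1)
        return log_probs.gather(1, a.unsqueeze(1)).squeeze(1)

    def q_loss(self, z: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
        # Fit q(a | z) by maximum likelihood; detach z so only q_net updates here.
        return -self.log_q(z.detach(), a).mean()

    def mi_upper_bound(self, z: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
        # Positive pairs (z_i, a_i) minus negative pairs (z_i, a_j) from an in-batch shuffle.
        pos = self.log_q(z, a)
        perm = torch.randperm(a.size(0), device=a.device)
        neg = self.log_q(z, a[perm])
        return (pos - neg).mean()


# Hypothetical training step: language-modeling loss from the LoRA-tuned MLLM
# plus the weighted MI penalty on pooled visual-text features.
batch_size, hidden_dim, num_groups = 8, 4096, 3
penalty = DemographicMIPenalty(hidden_dim, num_groups)
z = torch.randn(batch_size, hidden_dim)            # placeholder pooled features
a = torch.randint(0, num_groups, (batch_size,))    # demographic group labels
task_loss = torch.tensor(1.0)                      # stand-in for the task loss
total_loss = task_loss + 0.1 * penalty.mi_upper_bound(z, a)
```

In this setup the variational classifier would be updated with `q_loss` on each step while the adapter parameters are updated with `total_loss`, so only the lightweight penalty head and the low-rank adapters are trained.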
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Radiology Report Generation | MIMIC-CXR Race | ES-BLEU-1 | 13.36 | 11 |
| Radiology Report Generation | MIMIC-CXR Age Group | ES-BLEU-1 | 21.89 | 11 |
| Radiology Report Generation | MIMIC-CXR Gender | ES-BLEU-1 | 24.89 | 11 |
| Radiology Report Generation | PadChest | BLEU-1 Score (Age Group) | 2.53 | 10 |
| Dermoscopy Visual Question Answering | HAM10000 | Gender Accuracy (ES) | 19.56 | 4 |
| Radiology Report Evaluation | MIMIC-CXR | Race Distribution (%) | 69.21 | 2 |