
MMA: Multimodal Memory Agent

About

Long-horizon multimodal agents depend on external memory; however, similarity-based retrieval often surfaces stale, low-credibility, or conflicting items, which can trigger overconfident errors. We propose Multimodal Memory Agent (MMA), which assigns each retrieved memory item a dynamic reliability score by combining source credibility, temporal decay, and conflict-aware network consensus, and uses this signal to reweight evidence and abstain when support is insufficient. We also introduce MMA-Bench, a programmatically generated benchmark for belief dynamics with controlled speaker reliability and structured text-vision contradictions. Using this framework, we uncover the "Visual Placebo Effect", revealing how RAG-based agents inherit latent visual biases from foundation models. On FEVER, MMA matches baseline accuracy while reducing variance by 35.2% and improving selective utility; on LoCoMo, a safety-oriented configuration improves actionable accuracy and reduces wrong answers; on MMA-Bench, MMA reaches 41.18% Type-B accuracy in Vision mode, while the baseline collapses to 0.0% under the same protocol. Code: https://github.com/AIGeeksGroup/MMA.
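The abstract describes reweighting retrieved memory by a dynamic reliability score built from source credibility, temporal decay, and conflict-aware consensus, with abstention when support is weak. A minimal sketch of that idea in Python is below; the specific formula, the exponential decay, the `decay_rate` and `threshold` values, and the helper names are all illustrative assumptions, not the paper's actual method.

```python
import math

# Hypothetical reliability score: the abstract names the three factors
# (credibility, temporal decay, consensus) but not how they are combined.
def reliability(credibility: float, age_days: float, consensus: float,
                decay_rate: float = 0.05) -> float:
    """Combine source credibility, temporal decay, and network consensus."""
    temporal = math.exp(-decay_rate * age_days)  # older memories count less
    return credibility * temporal * consensus

def answer_or_abstain(items, threshold: float = 0.5):
    """Reweight retrieved memory items; abstain when total support is weak.

    items: list of (claim, credibility, age_days, consensus) tuples.
    Returns the best-supported claim, or None to abstain.
    """
    scored = [(claim, reliability(c, a, k)) for claim, c, a, k in items]
    if sum(score for _, score in scored) < threshold:
        return None  # abstain: insufficient reliable support
    # aggregate reliability per claim and return the strongest one
    support = {}
    for claim, score in scored:
        support[claim] = support.get(claim, 0.0) + score
    return max(support, key=support.get)

memories = [
    ("yes", 0.9, 1.0, 0.8),   # fresh, credible, well-supported
    ("no", 0.2, 300.0, 0.3),  # stale, low-credibility, conflicting
]
print(answer_or_abstain(memories))  # → yes
```

Under this toy scoring, the stale low-credibility item is decayed to near zero, so the fresh credible claim dominates; with only weak items, the agent abstains instead of guessing.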

Yihao Lu, Wanru Cheng, Zeyu Zhang, Hao Tang • 2026

Related benchmarks

Task                        Dataset         Result                 Rank
Long-term dialogue memory   LoCoMo (test)   Accuracy 75.94         15
Multimodal Fact-checking    MMA-Bench       Core Accuracy 13.55    4
Fact Verification           FEVER           Accuracy 59.93         2

Other info

GitHub: https://github.com/AIGeeksGroup/MMA
