
Memory-Integrated Reconfigurable Adapters: A Unified Framework for Settings with Multiple Tasks

About

Organisms constantly pivot between tasks such as evading predators, foraging, traversing rugged terrain, and socializing, often within milliseconds. Remarkably, they preserve knowledge of once-learned environments without catastrophic forgetting, a phenomenon neuroscientists hypothesize is due to a single neural circuit dynamically overlaid by neuromodulatory agents such as dopamine and acetylcholine. In parallel, deep learning research addresses analogous challenges via domain generalization (DG) and continual learning (CL), yet these methods remain siloed despite the brain's ability to perform them seamlessly. In particular, prior work has not explored architectures involving associative memories (AMs), an integral part of biological systems, to jointly address these tasks. We propose Memory-Integrated Reconfigurable Adapters (MIRA), a unified framework that integrates Hopfield-style associative memory modules atop a shared backbone. Associative memory keys are learned post hoc to index and retrieve an affine combination of stored adapter updates for any given task or domain on a per-sample basis. By varying only the task-specific objectives, we demonstrate that MIRA seamlessly accommodates domain shifts and sequential task exposures under one roof. Empirical evaluations on standard benchmarks confirm that our AM-augmented architecture significantly enhances adaptability and retention: in DG, MIRA achieves state-of-the-art out-of-distribution accuracy, and in incremental-learning settings it outperforms architectures explicitly designed to handle catastrophic forgetting using generic CL algorithms. By unifying adapter-based modulation with biologically inspired associative memory, MIRA delivers rapid task switching and enduring knowledge retention in a single extensible architecture, charting a path toward more versatile, memory-augmented AI systems.
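The retrieval mechanism described above can be illustrated with a minimal sketch: learned keys score each input sample against a bank of stored low-rank adapter updates, and a softmax over those scores yields an affine combination of the updates applied on top of a frozen backbone layer. All names, shapes, and hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryIndexedAdapter(nn.Module):
    """Hypothetical sketch of MIRA-style per-sample adapter retrieval.

    Learned keys index a bank of stored low-rank (LoRA-style) adapter
    updates; a Hopfield-style softmax over key similarities produces an
    affine combination of the stored updates for each sample, which is
    applied on top of a frozen base linear layer.
    """

    def __init__(self, d_in, d_out, n_adapters=4, rank=8, beta=4.0):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)          # shared frozen backbone layer
        for p in self.base.parameters():
            p.requires_grad = False
        # Keys are learned post hoc to index the stored adapter updates.
        self.keys = nn.Parameter(torch.randn(n_adapters, d_in))
        # Low-rank factors of each stored update: delta_W_i = B_i @ A_i.
        self.A = nn.Parameter(torch.randn(n_adapters, rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_adapters, d_out, rank))
        self.beta = beta                             # inverse temperature of retrieval

    def forward(self, x):                            # x: (batch, d_in)
        # Hopfield-style retrieval: similarity of each sample to each key.
        sim = x @ self.keys.t()                      # (batch, n_adapters)
        w = F.softmax(self.beta * sim, dim=-1)       # affine combination weights
        # Materialize each stored update and mix them per sample.
        delta = torch.einsum("nor,nri->noi", self.B, self.A)  # (n, d_out, d_in)
        mixed = torch.einsum("bn,noi->boi", w, delta)         # per-sample delta_W
        return self.base(x) + torch.einsum("boi,bi->bo", mixed, x)
```

Because `B` is initialized to zero, the module starts out identical to the frozen backbone layer, so the stored updates only modulate behavior as they are trained, consistent with the adapter-on-frozen-backbone pattern the abstract describes.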

Susmit Agrawal, Krishn Vishwas Kher, Saksham Mittal, Swarnim Maheshwari, Vineeth N. Balasubramanian • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Domain generalization | PACS, VLCS, OfficeHome, and DomainNet (test) | PACS accuracy | 97.01 | 28 |
| Class-incremental learning | ImageNet-R (5 tasks) | – | – | 27 |
| Domain-incremental learning | CORe50 | Avg accuracy (A) | 93.89 | 22 |
| Class-incremental learning | CORe50 | Avg accuracy | 88.64 | 21 |
| Class-incremental learning | ImageNet-R (10 tasks) | Accuracy (10 tasks) | 73.08 | 18 |
| Class-incremental learning | iDigits | Avg accuracy | 0.83 | 10 |
| Class-incremental learning | DomainNet | Avg accuracy | 67.29 | 10 |
| Domain-incremental learning | iDigits | Avg accuracy | 82.46 | 10 |
| Domain-incremental learning | DN4IL (test) | Last accuracy | 78.4 | 7 |
| Domain-incremental learning | CDDB | Average accuracy | 77.37 | 7 |
