
Visualizing Adapted Knowledge in Domain Transfer

About

A source model trained on source data and a target model learned through unsupervised domain adaptation (UDA) usually encode different knowledge. To understand the adaptation process, we portray their knowledge difference with image translation. Specifically, we feed a translated image and its original version to the two models respectively, formulating two branches. Through updating the translated image, we force similar outputs from the two branches. When such requirements are met, differences between the two images can compensate for and hence represent the knowledge difference between models. To enforce similar outputs from the two branches and depict the adapted knowledge, we propose a source-free image translation method that generates source-style images using only target images and the two models. We visualize the adapted knowledge on several datasets with different UDA methods and find that generated images successfully capture the style difference between the two domains. For application, we show that generated images enable further tuning of the target model without accessing source data. Code available at https://github.com/hou-yz/DA_visualization.
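The two-branch setup above can be sketched with a toy example. This is a minimal illustration, not the released implementation: simple linear maps stand in for the deep source and target networks, and a squared-error loss stands in for whatever output-matching objective the method actually uses. Only the translated image is updated; both models stay frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two frozen networks (hypothetical; the real
# method uses deep classifiers trained on source / adapted to target).
D = 8                                    # "image" dimensionality
source_model = rng.normal(size=(4, D))   # model trained on source data
target_model = rng.normal(size=(4, D))   # model adapted via UDA

target_image = rng.normal(size=D)        # a target-domain image
translated = target_image.copy()         # init the source-style image

# Branch 1: source_model(translated). Branch 2: target_model(target_image).
# Gradient-descend on the translated image until the branch outputs agree;
# the image change then reflects the knowledge difference between models.
fixed_out = target_model @ target_image  # branch 2 is constant
lr = 0.01
for _ in range(2000):
    diff = source_model @ translated - fixed_out
    grad = 2 * source_model.T @ diff     # gradient of the squared error
    translated -= lr * grad

final_loss = float(np.sum((source_model @ translated - fixed_out) ** 2))
```

After the loop, `translated - target_image` is the pixel-space change needed to make the source model reproduce the target model's output, which is the quantity the paper visualizes.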

Yunzhong Hou, Liang Zheng • 2021

Related benchmarks

Task | Dataset | Result | Rank
Facial Expression Recognition | Aff-Wild2 (10 target subjects) | Accuracy (Subject 1): 58.42 | 18
Facial Expression Recognition | BioVid (10 target subjects) | Accuracy (Sub-1): 76.85 | 9
Facial Expression Recognition | BAH (214 source subjects, 10 target subjects) | Accuracy (Sub-1): 56.83 | 9
Pain Recognition | BioVid (77 source subjects, 10 target subjects) | Subject 1 Accuracy: 80.92 | 9
Ambivalence/Hesitancy Recognition | BAH (214 source subjects, 10 target subjects) | Subject 1 Performance: 60.15 | 9
Stress Recognition | StressID | Sub-1 Score: 70.41 | 9
