
Learning unbiased features

About

A key element in transfer learning is representation learning: if representations can be developed that expose the relevant factors underlying the data, then new tasks and domains can be learned readily based on mappings of these salient factors. We propose that an important aim for these representations is to be unbiased. Different forms of representation learning can be derived from alternative definitions of unwanted bias, e.g., bias toward particular tasks, domains, or irrelevant underlying data dimensions. One useful approach to estimating the amount of bias in a representation comes from maximum mean discrepancy (MMD) [5], a measure of distance between probability distributions. We are not the first to suggest that MMD can be a useful criterion in developing representations that apply across multiple domains or tasks [1]. However, in this paper we describe a number of novel applications of this criterion that we have devised, all based on the idea of developing unbiased representations. These formulations include: a standard domain adaptation framework; a method of learning invariant representations; an approach based on noise-insensitive autoencoders; and a novel form of generative model.
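To make the MMD criterion concrete, the following is a minimal NumPy sketch of the (biased) squared-MMD estimator with a Gaussian kernel. The function names and the fixed bandwidth are illustrative assumptions, not taken from the paper; in practice the bandwidth is often set by a heuristic such as the median pairwise distance.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for all pairs."""
    d = x[:, None, :] - y[None, :, :]
    return np.exp(-np.sum(d ** 2, axis=-1) / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased (V-statistic) estimate of squared MMD between samples x and y."""
    k_xx = gaussian_kernel(x, x, sigma).mean()
    k_yy = gaussian_kernel(y, y, sigma).mean()
    k_xy = gaussian_kernel(x, y, sigma).mean()
    return k_xx + k_yy - 2.0 * k_xy

rng = np.random.default_rng(0)
# Samples from the same distribution: squared MMD should be near zero.
same = mmd2(rng.normal(0, 1, (200, 5)), rng.normal(0, 1, (200, 5)))
# Samples from shifted distributions: squared MMD should be clearly larger.
diff = mmd2(rng.normal(0, 1, (200, 5)), rng.normal(3, 1, (200, 5)))
```

Used as a differentiable penalty on a network's hidden layer, a term like this can push the representations of two domains (or two sensitive groups) toward the same distribution, which is the sense in which the representation becomes "unbiased".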

Yujia Li, Kevin Swersky, Richard Zemel • 2014

Related benchmarks

| Task                           | Dataset               | Metric                 | Result | Rank |
|--------------------------------|-----------------------|------------------------|--------|------|
| Person Identity Classification | Extended Yale B (test)| Accuracy               | 82     | 23   |
| Medical Image Classification   | ADNI (test)           | Equivariance Gap (∆Eq) | 3.1    | 6    |
| Medical Image Classification   | ADCP (test)           | Equivariance Gap (∆Eq) | 3.6    | 6    |
| Classification                 | Adult (test)          | Equivariance Gap (∆Eq) | 3.4    | 6    |
