
Non-Negative Bregman Divergence Minimization for Deep Direct Density Ratio Estimation

About

Density ratio estimation (DRE) is at the core of various machine learning tasks such as anomaly detection and domain adaptation. Existing studies on DRE have extensively explored methods based on Bregman divergence (BD) minimization. However, when applied with highly flexible models such as deep neural networks, BD minimization tends to suffer from what we call train-loss hacking: a source of overfitting caused by a typical characteristic of empirical BD estimators, whose empirical loss can be driven arbitrarily low without improving the true risk. In this paper, to mitigate train-loss hacking, we propose a non-negative correction for empirical BD estimators. Theoretically, we confirm the soundness of the proposed method through a generalization error bound. In our experiments, the proposed method shows favorable performance in inlier-based outlier detection.
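To illustrate the idea, here is a minimal NumPy sketch using the squared-loss (LSIF-type) instance of the empirical BD risk. The `max(·, -C)` clipping device shown below is an nnPU-style non-negative correction written for illustration; the function names, the constant `C`, and the exact form of the correction are assumptions for this sketch, not the paper's precise estimator.

```python
import numpy as np

def lsif_bd_risk(r_nu, r_de):
    """Empirical BD risk for the squared-loss (LSIF) case.

    r_nu: model outputs r(x) on samples from the numerator density.
    r_de: model outputs r(x) on samples from the denominator density.

    The term -mean(r_nu) is unbounded below, so a flexible model can
    drive this empirical risk to -infinity ("train-loss hacking").
    """
    return 0.5 * np.mean(r_de ** 2) - np.mean(r_nu)

def nonnegative_lsif_bd_risk(r_nu, r_de, C=0.0):
    """Illustrative non-negative correction (assumed form).

    Clips the unbounded part of the empirical risk at -C, so the
    total loss can no longer be hacked to -infinity.
    """
    unbounded_part = -np.mean(r_nu)
    return 0.5 * np.mean(r_de ** 2) + max(unbounded_part, -C)

# A model that outputs huge ratios on numerator samples hacks the
# vanilla loss, but the corrected loss stays bounded below.
r_nu = np.full(100, 1000.0)   # hypothetical inflated outputs
r_de = np.full(100, 1.0)
vanilla = lsif_bd_risk(r_nu, r_de)            # strongly negative
corrected = nonnegative_lsif_bd_risk(r_nu, r_de)  # bounded below
```

In practice the correction is applied during training so that gradient steps cannot exploit the unbounded term; when the clipped term is active, the paper's approach would keep the true risk estimate sound, which the generalization error bound formalizes.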

Masahiro Kato, Takeshi Teshima • 2020

Related benchmarks

Task                 Dataset          Metric             Result   Rank
Dataset Comparison   isolet (test)    Average Test AUC   93.86    42
Dataset Comparison   Mnist-r (test)   Average Test AUC   82.96    42
Outlier Detection    IoT (test)       AUC                85.2     17
