
Towards Understanding the Invertibility of Convolutional Neural Networks

About

Several recent works have empirically observed that Convolutional Neural Nets (CNNs) are (approximately) invertible. To understand this approximate invertibility phenomenon and how to leverage it more effectively, we develop a theoretical explanation: a mathematical model of sparse signal recovery that is consistent with CNNs with random weights. We establish an exact connection between a particular model of model-based compressive sensing (and its recovery algorithms) and random-weight CNNs. We show empirically that several learned networks are consistent with our mathematical analysis, and then demonstrate that with such a simple theoretical framework we can obtain reasonable reconstruction results on real images. We also discuss the gaps between our model assumptions and CNNs trained for classification in practical scenarios.

Anna C. Gilbert, Yi Zhang, Kibok Lee, Yuting Zhang, Honglak Lee • 2017
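The compressive-sensing connection can be illustrated with a toy experiment. The sketch below is not the paper's exact construction: a random Gaussian matrix W stands in for a random-weight convolutional layer, and iterative hard thresholding (IHT), a standard model-based recovery algorithm, reconstructs a sparse input from its measurements. The dimensions, sparsity level, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 100, 5                    # signal length, measurements, sparsity

# k-sparse ground-truth signal with nonzero magnitudes in [1, 2]
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=k) * (1.0 + rng.random(k))

# Random Gaussian measurement matrix: a stand-in for random CNN weights
W = rng.standard_normal((m, n)) / np.sqrt(m)
z = W @ x                                # "forward pass": linear measurements

# Iterative Hard Thresholding: x <- H_k(x + mu * W^T (z - W x))
mu = 1.0 / np.linalg.norm(W, 2) ** 2     # step size from the spectral norm
x_hat = np.zeros(n)
for _ in range(500):
    x_hat = x_hat + mu * (W.T @ (z - W @ x_hat))
    keep = np.argsort(np.abs(x_hat))[-k:]  # keep the k largest entries
    pruned = np.zeros(n)
    pruned[keep] = x_hat[keep]
    x_hat = pruned

err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"relative reconstruction error: {err:.2e}")
```

Recovery here relies only on the randomness and dimensions of W, which echoes the paper's point: random-weight networks already admit approximate inversion, before any training.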

Related benchmarks

Task            | Dataset                                 | Metric                 | Result | Rank
CATE estimation | ACIC 2016, 77 datasets (out-of-sample)  | % Best                 | 15.67  | 9
CATE estimation | ACIC 2018 (in-sample)                   | Percent Best           | 23.64  | 9
CATE estimation | ACIC 2018, 24 datasets (out-of-sample)  | Best Performance Ratio | 22.13  | 9
CATE estimation | ACIC 2016, 77 datasets (in-sample)      | Percentage Best        | 14.54  | 9
