
Pre-training without Natural Images

About

Is it possible to use convolutional neural networks pre-trained without any natural images to assist natural image understanding? The paper proposes a novel concept, Formula-driven Supervised Learning: image patterns and their category labels are generated automatically by rendering fractals, which are grounded in natural laws found in the real world. In principle, replacing natural images with automatically generated ones in the pre-training phase allows an infinitely scalable dataset of labeled images. Although models pre-trained on the proposed Fractal DataBase (FractalDB), a database containing no natural images, do not outperform models pre-trained on human-annotated datasets in every setting, they partially surpass the accuracy of ImageNet/Places pre-trained models. The image representations learned from FractalDB also show distinctive characteristics in visualizations of convolutional layers and attention.
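The formula-driven generation described above can be sketched with an iterated function system (IFS) rendered by the "chaos game": a category is defined by a set of 2-D affine maps, and an image is produced by repeatedly applying randomly chosen maps and plotting the visited points. The `render_fractal` helper and the Sierpinski-triangle parameters below are an illustrative sketch, not the paper's exact FractalDB recipe:

```python
import numpy as np

def render_fractal(maps, size=128, n_points=50_000, seed=0):
    """Render a binary fractal image via the chaos game on an IFS.

    maps: list of (A, b) pairs, where A is a 2x2 matrix and b a
    2-vector; each affine map sends a point x to A @ x + b.
    """
    rng = np.random.default_rng(seed)
    img = np.zeros((size, size), dtype=np.uint8)
    x = np.zeros(2)
    for _ in range(n_points):
        # Pick one affine map uniformly at random and apply it.
        A, b = maps[rng.integers(len(maps))]
        x = A @ x + b
        # Map the point (assumed to lie roughly in [0, 1]^2) to a pixel.
        px, py = (np.clip(x, 0.0, 1.0) * (size - 1)).astype(int)
        img[py, px] = 255
    return img

# Illustrative IFS: the Sierpinski triangle as three contraction maps.
half = 0.5 * np.eye(2)
sierpinski = [
    (half, np.array([0.0, 0.0])),
    (half, np.array([0.5, 0.0])),
    (half, np.array([0.25, 0.5])),
]
img = render_fractal(sierpinski)
```

In FractalDB each category corresponds to one sampled IFS, and images within a category are rendered with perturbed parameters, so both the images and their labels come from the generating formula rather than from human annotation.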

Hirokatsu Kataoka, Kazushige Okayasu, Asato Matsumoto, Eisuke Yamagata, Ryosuke Yamada, Nakamasa Inoue, Akio Nakamura, Yutaka Satoh · 2021

Related benchmarks

Task                  Dataset              Result                 Rank
Image Classification  CIFAR-100            Accuracy: 81.6         691
Image Classification  Stanford Cars        Accuracy: 86           635
Image Classification  ImageNet-1k (val)    --                     543
Image Classification  Food-101             Accuracy: 90.13        542
Image Classification  Tiny-ImageNet        Accuracy: 88.42        266
Image Classification  CIFAR-10             Accuracy: 96.8         246
Image Classification  Oxford Flowers 102   Accuracy: 98.3         234
Image Classification  STL-10               Top-1 Accuracy: 98.46  146
Image Classification  CIFAR-100            Accuracy: 88.35        117
Image Classification  ImageNet-100         Accuracy: 88.3         87

Showing 10 of 15 rows
