
Do We Need Zero Training Loss After Achieving Zero Training Error?

About

Overparameterized deep networks have the capacity to memorize training data with zero training error. Even after memorization, the training loss continues to approach zero, making the model overconfident and degrading test performance. Since existing regularizers do not directly aim to avoid zero training loss, it is hard to tune their hyperparameters so as to maintain a fixed, preset level of training loss. We propose a direct solution called flooding that intentionally prevents further reduction of the training loss once it reaches a reasonably small value, which we call the flood level. Our approach makes the loss float around the flood level by performing mini-batched gradient descent as usual, but gradient ascent whenever the training loss falls below the flood level. This can be implemented with one line of code and is compatible with any stochastic optimizer and other regularizers. With flooding, the model continues to "random walk" with the same non-zero training loss, and we expect it to drift into an area with a flat loss landscape that leads to better generalization. We experimentally show that flooding improves performance and, as a byproduct, induces a double descent curve of the test loss.
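The one-line implementation mentioned in the abstract can be sketched as follows. This is a minimal PyTorch illustration assuming a cross-entropy objective; the function name flooded_loss and the flood level of 0.03 are illustrative choices, not values prescribed by the paper (the flood level is a hyperparameter to be tuned).

```python
import torch
import torch.nn.functional as F

def flooded_loss(logits: torch.Tensor, targets: torch.Tensor,
                 flood_level: float = 0.03) -> torch.Tensor:
    """Cross-entropy loss with flooding.

    For loss >= b the returned value equals the original loss, so gradient
    descent proceeds as usual; for loss < b the gradient sign flips, i.e.
    the optimizer performs gradient ascent until the loss is back above b.
    """
    loss = F.cross_entropy(logits, targets)
    # The flooding trick in one line: |loss - b| + b.
    return (loss - flood_level).abs() + flood_level

# Hypothetical usage inside an ordinary training step
# (any stochastic optimizer and additional regularizers can be combined):
#   optimizer.zero_grad()
#   loss = flooded_loss(model(x), y, flood_level=0.03)
#   loss.backward()
#   optimizer.step()
```

Because the transformed loss equals the original loss whenever it is above the flood level, training is unchanged early on; only once the loss drops below the flood level does the gradient reverse, keeping the training loss floating around that level.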

Takashi Ishida, Ikko Yamane, Tomoya Sakai, Gang Niu, Masashi Sugiyama • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| AI-generated text detection | Cross-genre (test) | OA: 87.5 | 32 |
| AIGT detection | HC3 PWWS attack, AI to Human (in-domain) | Overall Accuracy: 100 | 28 |
| AI-generated text detection | Mixed-source AI -> Human (GPT-2, GPT-Neo, GPT-J, LLaMa, GPT-3) | Overall Accuracy: 96 | 26 |
| AI-generated text detection | HC3 (test) | F1 (Overall): 99.89 | 18 |
| AI-generated text detection | Cross-genre AIGT Overall (test) | OA: 85 | 14 |
| AIGT detection | HC3 Deep-Word-Bug attack Overall (in-domain) | OA: 100 | 14 |
| AIGT detection | HC3 Pruthi attack Overall (in-domain) | Overall Accuracy: 100 | 14 |
| AIGT detection | HC3 Deep-Word-Bug attack AI to Human (in-domain) | Overall Accuracy: 100 | 14 |
| AIGT detection | HC3 Pruthi attack AI to Human (in-domain) | Overall Accuracy: 100 | 14 |
| AIGT detection | Cross-domain AIGT detection, AI -> Human | Overall Accuracy: 90 | 14 |

Showing 10 of 19 rows.
