
End-to-End Weak Supervision

About

Aggregating multiple sources of weak supervision (WS) can ease the data-labeling bottleneck prevalent in many machine learning applications by replacing the tedious manual collection of ground-truth labels. Current state-of-the-art approaches that use no labeled training data, however, require two separate modeling steps: learning a probabilistic latent variable model based on the WS sources (making assumptions that rarely hold in practice), followed by downstream model training. Importantly, the first modeling step does not consider the performance of the downstream model. To address these caveats, we propose an end-to-end approach that directly learns the downstream model by maximizing its agreement with probabilistic labels, which are generated by reparameterizing previous probabilistic posteriors with a neural network. Our results show improved end-model performance on downstream test sets over prior work, as well as improved robustness to dependencies among weak supervision sources.
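The end-to-end idea above can be sketched in a few lines. The following is a simplified NumPy illustration, not the authors' implementation: the per-labeling-function (LF) reliability logits `theta` stand in for the paper's neural reparameterization of the posterior, and a symmetric cross-entropy is used as one plausible agreement objective between the probabilistic labels and the downstream model's predictions.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def soft_labels(votes, theta):
    """Map LF votes to probabilistic (soft) labels.

    votes: (n, m) array; entry in {0..C-1} is a class vote, -1 means abstain.
    theta: (m,) per-LF reliability logits. In the paper this role is played
           by a neural network conditioned on the input; a flat logit vector
           is used here purely for illustration.
    Returns an (n, C) array of soft labels, rows summing to 1.
    """
    n, m = votes.shape
    C = votes.max() + 1
    w = np.exp(theta)  # positive per-LF weights
    scores = np.zeros((n, C))
    for j in range(m):
        mask = votes[:, j] >= 0  # ignore abstains
        scores[mask, votes[mask, j]] += w[j]
    return softmax(scores, axis=1)

def agreement_loss(p_model, p_labels, eps=1e-9):
    """Symmetric cross-entropy between downstream predictions and soft labels.

    Training would minimize this jointly over the downstream model's
    parameters and theta, which is what makes the approach end-to-end.
    """
    ce = lambda t, p: -(t * np.log(p + eps)).sum(axis=1).mean()
    return ce(p_labels, p_model) + ce(p_model, p_labels)

# Toy usage: 2 examples, 3 LFs, binary task.
votes = np.array([[0, 1, 0],
                  [1, 1, -1]])          # -1 = abstain
q = soft_labels(votes, np.zeros(3))     # probabilistic labels from the LFs
p = np.array([[0.9, 0.1], [0.2, 0.8]])  # downstream model predictions
loss = agreement_loss(p, q)
```

Minimizing `agreement_loss` with respect to both the downstream model and `theta` couples the two previously separate steps, so the label model is shaped by downstream performance rather than fit in isolation.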

Salva Rühling Cachay, Benedikt Boecking, Artur Dubrawski • 2021

Related benchmarks

Task | Dataset | Result | Rank
Image Classification | LabelMe (test) | Accuracy: 86.36 | 19
Binary Classification | IMDB 12 LFs (test) | F1 Score: 77.22 | 16
Weakly Supervised Text Classification | ProfTeacher 99 LFs (test) | F1 Score: 86.98 | 8
Weakly Supervised Text Classification | Amazon 175 LFs (test) | F1 Score: 86.6 | 8
Weakly Supervised Text Classification | IMDB 136 LFs (test) | F1 Score: 82.1 | 8
Binary Classification | Spouse 9 LFs (test) | F1 Score: 51.98 | 7
Weakly Supervised Text Classification | Spouse 9 LFs (test) | F1 Score: 51.98 | 7
Binary Classification | ProfTeacher 99 LFs (test) | F1 Score: 86.98 | 7
Binary Classification | Amazon 175 LFs (test) | F1 Score: 86.6 | 7
Binary Classification | IMDB 136 LFs (test) | F1 Score: 82.1 | 7
