Pseudo-Labelling on the MNIST dataset
There are three Jupyter notebooks. The first (Pseudo-Labelling-MNIST-1st) fails to achieve any meaningful improvement, the second (Pseudo-Labelling-MNIST-2nd) improves test accuracy by about 7.8 percentage points, and the third (Pseudo-Labelling-MNIST-3rd), which relies on data augmentation, improves it by about 14.7 percentage points. A minimal sketch of the pseudo-labelling loop follows the results table below.
| Notebook | Accuracy after pre-training on labelled images | Accuracy after training with pseudo-labelled images |
|---|---|---|
| 1st | 67.7 % | 67.85 % (+0.15) |
| 2nd | 73.95 % | 81.75 % (+7.8) |
| 3rd | 73.95 % | 88.65 % (+14.7) |
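For context, here is a minimal sketch of the basic pseudo-labelling loop described in reference 1: pre-train on the labelled subset, predict labels for the unlabelled images, keep only confident predictions as pseudo-labels, and retrain on the combined set. The PyTorch model, the 1,000/10,000 labelled/unlabelled split, and the 0.95 confidence threshold are illustrative assumptions, not details taken from the notebooks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
from torchvision import datasets, transforms

class Net(nn.Module):
    """Toy fully-connected classifier; the notebooks' actual model may differ."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 256), nn.ReLU(),
            nn.Linear(256, 10),
        )

    def forward(self, x):
        return self.layers(x)

def train(model, loader, epochs=5, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()

mnist = datasets.MNIST(".", train=True, download=True, transform=transforms.ToTensor())

# Treat a small slice as "labelled" and a larger slice as "unlabelled"
# (the split sizes are illustrative, not taken from the notebooks).
x_lab = torch.stack([mnist[i][0] for i in range(1000)])
y_lab = torch.tensor([mnist[i][1] for i in range(1000)])
x_unlab = torch.stack([mnist[i][0] for i in range(1000, 11000)])

model = Net()

# Step 1: pre-train on the labelled subset only.
train(model, DataLoader(TensorDataset(x_lab, y_lab), batch_size=64, shuffle=True))

# Step 2: pseudo-label the unlabelled images, keeping only confident predictions.
model.eval()
with torch.no_grad():
    probs = F.softmax(model(x_unlab), dim=1)
confidence, pseudo_labels = probs.max(dim=1)
keep = confidence > 0.95  # the 0.95 confidence threshold is an assumption
x_pseudo, y_pseudo = x_unlab[keep], pseudo_labels[keep]

# Step 3: retrain on real labels plus pseudo-labels.
x_all = torch.cat([x_lab, x_pseudo])
y_all = torch.cat([y_lab, y_pseudo])
train(model, DataLoader(TensorDataset(x_all, y_all), batch_size=64, shuffle=True))
```

Keeping only predictions above the confidence threshold limits how many mistakes of the weakly pre-trained model get reinforced during retraining; the third notebook additionally relies on data augmentation, in the spirit of reference 3.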
References:
1. Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks, Dong-Hyun Lee. http://deeplearning.net/wp-content/uploads/2013/03/pseudo_label_final.pdf
2. Naive semi-supervised deep learning using pseudo-label, Zhun Li, ByungSoo Ko & Ho-Jin Choi. https://link.springer.com/article/10.1007/s12083-018-0702-9
3. The Illustrated FixMatch for Semi-Supervised Learning. https://amitness.com/2020/03/fixmatch-semi-supervised/