Who are you? I'm a PhD student doing DL research, mostly on weak/self-supervision, and sometimes fully unsupervised things as well.
What happens here? I write short reviews of the papers I read.
Why the hell? Because it lets me practice writing and understand the papers I read more deeply.
So what? I'll be happy if it turns out to be interesting to someone else. Anyway, here's my archive: https://www.notion.so/Self-Supervised-Boy-papers-reading-751aa85ffca948d28feacc45dc3cb0c0.
Channel in Telegram.
Self-training über alles. Another paper on self-training from Quoc Le's group.
They compare self-training with supervised and self-supervised pre-training on different tasks. Self-training appears to work better, while pre-training can even hurt final quality when enough labeled data is available or strong augmentation is applied.
The main practical takeaway: self-training improves quality even on top of pre-training, so it could be worthwhile to self-train your baseline models for a better start.
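For concreteness, here is a minimal sketch of one self-training step as I understand it (my own illustration, not the paper's exact recipe: `teacher`, `student`, the batches and the confidence threshold are all hypothetical, and the paper actually works with detection/segmentation plus strong augmentation rather than plain classification):

```python
import torch
import torch.nn.functional as F

def self_train_step(student, teacher, labeled_batch, unlabeled_batch, optimizer,
                    threshold=0.9):
    x_l, y_l = labeled_batch     # human-labeled images and labels
    x_u = unlabeled_batch        # unlabeled images

    # 1. The teacher (already trained on the labeled data) produces pseudo-labels.
    with torch.no_grad():
        probs = F.softmax(teacher(x_u), dim=1)
        conf, pseudo_y = probs.max(dim=1)
        mask = (conf > threshold).float()   # keep only confident pseudo-labels

    # 2. The student is trained jointly on human labels and pseudo-labels.
    loss_l = F.cross_entropy(student(x_l), y_l)
    loss_u = (F.cross_entropy(student(x_u), pseudo_y, reduction="none") * mask).sum() \
             / mask.sum().clamp(min=1.0)

    loss = loss_l + loss_u
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```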
More detailed with tables here: https://www.notion.so/Rethinking-Pre-training-and-Self-training-e00596e346fa4261af68db7409fbbde6
Source here: https://arxiv.org/pdf/2006.06882.pdf
Unsupervised segmentation with autoregressive models. The authors propose to scan the image in different scanning orders and require that nearby pixels produce similar embeddings, independently of the scanning order.
SoTA across unsupervised segmentation benchmarks.
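A rough sketch of the ordering-consistency part as I read it (my paraphrase, not the authors' code: `ar_encoder` is a hypothetical masked-convolution, PixelCNN-style encoder producing per-pixel cluster logits, a second scanning order is simulated by flipping the image, and the symmetric KL is just a simple stand-in for the paper's actual objective — see the linked note for the real losses):

```python
import torch
import torch.nn.functional as F

def ordering_consistency_loss(ar_encoder, images):
    # View 1: default raster order (left-to-right, top-to-bottom).
    logits_a = ar_encoder(images)                              # (B, K, H, W)

    # View 2: a different ordering, simulated by flipping the image
    # before encoding and flipping the output back to align the pixels.
    flipped = torch.flip(images, dims=[2, 3])
    logits_b = torch.flip(ar_encoder(flipped), dims=[2, 3])

    # Per-pixel soft cluster assignments under each ordering.
    p_a = F.softmax(logits_a, dim=1)
    p_b = F.softmax(logits_b, dim=1)

    # Ask the two orderings to agree on every pixel (symmetric KL).
    kl_ab = F.kl_div(p_b.clamp_min(1e-8).log(), p_a, reduction="batchmean")
    kl_ba = F.kl_div(p_a.clamp_min(1e-8).log(), p_b, reduction="batchmean")
    return 0.5 * (kl_ab + kl_ba)
```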
More detailed with images and losses here: https://www.notion.so/Autoregressive-Unsupervised-Image-Segmentation-211c6e8ec6174fe9929e53e5140e1024
Source here: https://arxiv.org/pdf/2007.08247.pdf