Self-supervised pre-training for brain cortex segmentation: a paper from MICCAI 2018.
A quite old paper (for this boiling-hot area), but with an interesting take. The authors set up metric-learning pre-training, but instead of a 3D metric they estimate the geodesic distance along the brain surface between cuts taken orthogonal to it. Why? Because the cortex is a relatively thin structure following the curved brain surface, so areas are separated not as patches of 3D space but as patches on this surface. The authors demonstrate how the predicted distance between adjacent sections then aligns with the ground-truth borders of the areas.
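To make the setup concrete, here is a minimal sketch of such a pre-training objective: a shared (siamese) encoder embeds two cortical patches and a small head regresses the geodesic distance between them. The distance itself is assumed to be precomputed from the surface, and all names here are hypothetical, not from the paper's code.

```python
import torch
import torch.nn as nn

class GeodesicSiamese(nn.Module):
    """Siamese encoder pre-trained to regress the geodesic distance between two cortical patches."""
    def __init__(self, encoder: nn.Module, emb_dim: int = 128):
        super().__init__()
        self.encoder = encoder                      # shared weights for both patches
        self.head = nn.Sequential(                  # regression head on concatenated embeddings
            nn.Linear(2 * emb_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, patch_a, patch_b):
        za, zb = self.encoder(patch_a), self.encoder(patch_b)
        return self.head(torch.cat([za, zb], dim=1)).squeeze(1)

# one training step: d_geo is the precomputed geodesic distance along the cortical surface
def train_step(model, optimizer, patch_a, patch_b, d_geo):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(patch_a, patch_b), d_geo)
    loss.backward()
    optimizer.step()
    return loss.item()
```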
Although the presented result is better than the naïve baseline, I wouldn't be astonished if other pre-training techniques that have emerged since then provided good results as well.
A few more words and one formula here.
Original here.
Instance Localization for Self-supervised Detection Pretraining: a paper on the importance of task-specific pre-training.
The authors hypothesise about the problems of popular self-supervised pre-training frameworks w.r.t. the localisation task. Their point is that these frameworks have no loss terms enforcing localisation of object representations, so they propose a new loss. To make two contrastive representations of one image, they crop two random parts of it and paste those parts onto random images from the same dataset. Then they embed those composite images with the network, but contrast only the region of the feature map corresponding to the pasted crop, instead of contrasting the whole-image embedding as is usually done.
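A minimal sketch of the region-level contrast as I understand it (using torchvision's RoIAlign and an InfoNCE-style loss; names, shapes and the pooling size are illustrative, not the paper's actual code):

```python
import torch
import torch.nn.functional as F
from torchvision.ops import roi_align

def region_contrastive_loss(feat_q, feat_k, boxes_q, boxes_k, tau=0.2):
    """feat_*: (B, C, H, W) feature maps of the two composite images;
    boxes_*: (B, 4) float xyxy boxes of the pasted crop, already scaled to feature-map coordinates."""
    ids = torch.arange(feat_q.size(0), device=feat_q.device, dtype=feat_q.dtype).unsqueeze(1)
    rois_q = torch.cat([ids, boxes_q], dim=1)          # (B, 5): batch index + box
    rois_k = torch.cat([ids, boxes_k], dim=1)

    # pool only the pasted-crop region instead of the whole image
    q = roi_align(feat_q, rois_q, output_size=(7, 7)).flatten(1)
    k = roi_align(feat_k, rois_k, output_size=(7, 7)).flatten(1)
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)

    logits = q @ k.t() / tau                           # other images in the batch act as negatives
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)
```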
Interestingly, the proposed loss not only provides SotA pre-training for the localisation task, but also degrades classification quality. This is a practically important finding: while general representations keep getting better and better, it could be more important to have task-specific pre-training than a SotA one tailored for another task.
More details here.
Source here.
SelfReg: a paper on contrastive learning towards domain generalisation.
Domain generalisation methods are focused on training models that do not need any transfer step to work on new domains. The authors propose to adapt the popular contrastive learning framework to this task.
To build a positive pair, they sample two examples of the same class from different domains. Compared to classical contrastive learning it is, roughly, different domains instead of different augmentations, and different classes instead of different samples.
To avoid the burden of good negative-sample mining, the authors adapted the BYOL idea and employed a projection network to prevent representation collapse.
Suppose we have f as the neural network under training, g as a trainable linear layer that projects the representation, and x_ck as a random sample of class c and domain k. As the loss itself the authors use two squared L2 distances:
1. |f(x_cj) - g(f(x_ck))|²
2. |f(x_cj) - (λ*g(f(x_cj)) + (1-λ)*g(f(x_ck)))|², where λ ~ Beta.
NB! In the second loss, the right part is a linear mixture of sample representations from different domains.
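A rough sketch of these two terms under the assumptions above (variable names are mine; I pair each sample with another sample of the same class from a different domain via a simple in-batch shuffle):

```python
import torch
import torch.nn.functional as F

def selfreg_losses(z_same_class, g, alpha=0.5):
    """z_same_class: (N, D) representations f(x) of samples that all share the same class
    but come from different domains; g: the trainable projection layer."""
    perm = torch.randperm(z_same_class.size(0))        # pair each sample with another domain's sample
    z_j, z_k = z_same_class, z_same_class[perm]

    # 1) individual term: pull f(x_cj) towards the projection of another domain's sample
    l_ind = F.mse_loss(z_j, g(z_k))

    # 2) heterogeneous term: the target is a Beta-mixed projection of the two domains
    lam = torch.distributions.Beta(alpha, alpha).sample((z_j.size(0), 1)).to(z_j.device)
    mixed = lam * g(z_j) + (1 - lam) * g(z_k)
    l_hdr = F.mse_loss(z_j, mixed)

    return l_ind, l_hdr
```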
By minimising the presented loss alongside the classification loss itself, the authors achieved a nicely separated latent-space representation and got close to the SotA without additional tricks.
Source could be found here.
Exploring Visual Engagement Signals for Representation Learning: a recent arXiv paper from Facebook with an interesting source of supervision for training.
In this paper the authors propose to use comments and reactions (Facebook-ish likes) as the source of supervision for pre-training. The proposed method is simple and therefore scales well. For each image the authors collect two pseudo-labels (a rough sketch follows the list):
1. All reactions are counted and normalised to sum to 1. This distribution is used as a soft label for a cross-entropy loss.
2. Each comment is converted to a bag of words, this embedding is weighted via TF-IDF, and a cluster id is assigned via kNN (where the "fitting" of the clustering is done on a random subset of comments from the same dataset). The cluster ids of all comments to an image are used together as the target for a multi-label classification loss.
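A minimal sketch of how such pseudo-labels could be built (sklearn-based and purely illustrative; I fit a KMeans clustering on a comment subset and assign cluster ids by nearest centroid rather than the kNN assignment mentioned above, and the hyper-parameters are made up):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# 1) reaction pseudo-label: normalised reaction counts as a soft target
def reaction_label(counts):                     # counts: dict like {"like": 120, "wow": 3, ...}
    v = np.array(list(counts.values()), dtype=float)
    return v / v.sum()

# 2) comment pseudo-label: TF-IDF bag of words -> cluster ids -> multi-hot target
vectorizer = TfidfVectorizer(max_features=20000)
kmeans = KMeans(n_clusters=512, random_state=0)

def fit_comment_clusters(random_comment_subset):
    X = vectorizer.fit_transform(random_comment_subset)
    kmeans.fit(X)

def comment_label(image_comments, n_clusters=512):
    X = vectorizer.transform(image_comments)
    ids = kmeans.predict(X)                     # nearest centroid for each comment
    target = np.zeros(n_clusters)
    target[np.unique(ids)] = 1.0                # multi-hot over all comments of this image
    return target
```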
This approach shows a slight-to-medium increase on tasks closely related to multi-modal learning, e.g. meme intent classification or the political leaning of an image.
While the method description is somewhat messy and the method itself requires enormous training time (10 days on 32 V100s; chances are additional markup could be cheaper), this paper once again shows an interesting idea for getting supervision for pre-training.
Variance-Invariance-Covariance Regularisation (VICReg): a fresh paper on self-supervised training. Kind of a follow-up to the idea raised by Barlow Twins.
In this paper the authors propose a three-fold loss function which (see the sketch after the list):
1. Prevents representation collapse by enforcing high variance across different embedding vectors.
2. Decreases representation redundancy by decorrelating the dimensions of the embedding space.
3. Enforces invariance of the embedded vectors to different augmentations by pulling different embeddings of the same image together.
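A minimal sketch of these three terms (standard formulation written from memory; the coefficients and eps are illustrative defaults):

```python
import torch
import torch.nn.functional as F

def vicreg_loss(z1, z2, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    """z1, z2: (N, D) embeddings of two augmented views of the same batch."""
    n, d = z1.shape

    # invariance: pull the two views of each image together
    sim = F.mse_loss(z1, z2)

    # variance: hinge loss keeping the std of every dimension above 1
    std1 = torch.sqrt(z1.var(dim=0) + eps)
    std2 = torch.sqrt(z2.var(dim=0) + eps)
    var = torch.relu(1 - std1).mean() + torch.relu(1 - std2).mean()

    # covariance: push off-diagonal entries of the covariance matrix to zero
    z1c, z2c = z1 - z1.mean(dim=0), z2 - z2.mean(dim=0)
    cov1 = (z1c.T @ z1c) / (n - 1)
    cov2 = (z2c.T @ z2c) / (n - 1)
    off_diag = lambda m: m - torch.diag(torch.diag(m))
    cov = off_diag(cov1).pow(2).sum() / d + off_diag(cov2).pow(2).sum() / d

    return sim_w * sim + var_w * var + cov_w * cov
```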
This loss helps to avoid both the burden of negative-sample mining and the crafty tricks employed by other methods. The authors demonstrate that their method is roughly on par with the SoTA while avoiding all that, and it has some additional benefits, e.g. no explicit normalisations and a much weaker batch-size dependency.
A bit longer overview here.
Source here.
Not much more to say about it. Building transformers for computer vision tasks is an emerging topic, and this is a more or less technical paper showing the progress of transformers towards replacing ResNet-like architectures wherever they are still used, this time (not the first attempt, though) in self-supervision.
The loss itself is a superposition of the MoCo v2 and BYOL approaches. It simultaneously has the queue of negative examples and the contrastive loss from MoCo v2, and the model-level asymmetry from BYOL, where one branch is momentum-updated.
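A compressed sketch of the two ingredients being combined: a momentum-updated key branch (BYOL-style) plus a queue of negative keys feeding an InfoNCE loss (MoCo v2-style). This is schematic only; the actual training recipe has more moving parts.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(online, target, m=0.99):
    # model-level asymmetry: the key branch follows the online branch via EMA
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.data = m * p_t.data + (1 - m) * p_o.data

def queue_infonce(q, k, queue, tau=0.2):
    """q: (N, D) online-branch embeddings, k: (N, D) momentum-branch embeddings,
    queue: (K, D) embeddings of past keys acting as negatives."""
    q, k, queue = F.normalize(q, dim=1), F.normalize(k, dim=1), F.normalize(queue, dim=1)
    l_pos = (q * k).sum(dim=1, keepdim=True)          # (N, 1) positive similarities
    l_neg = q @ queue.t()                             # (N, K) similarities to queued negatives
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```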
Results are on par with SoTA under the linear evaluation scheme. But the method (1) needs less complex tricks and (2) is applicable to transfer learning towards detection and segmentation.
Forwarded from Just links
Self-Supervised Learning with Swin Transformers https://arxiv.org/abs/2105.04553
Contrastive Conditional Transport for Representation Learning.
This paper tries to make a step similar to the GAN → WGAN step, but in terms of representation learning. At first the authors propose, instead of training with the more-or-less classical SimCLR loss, to simply minimize (C+) - (C-), where (C+) is the mean distance between the anchor and positive samples (different views of the anchor) and (C-) is the mean distance between the anchor and negative samples.
When this alone does not work out, the authors add a more involved weighting procedure: positive samples are weighted with respect to their distance to the anchor (more distance, larger weight), and vice versa for the negative samples (less distance, larger weight).
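A rough sketch of that weighted objective as I read it (softmax-style weights over distances; the paper's exact weighting may differ):

```python
import torch

def cct_like_loss(anchor, positives, negatives, tau=0.5):
    """anchor: (D,), positives: (P, D), negatives: (M, D); all L2-normalised embeddings."""
    d_pos = (anchor - positives).pow(2).sum(dim=1)     # distances to positive views
    d_neg = (anchor - negatives).pow(2).sum(dim=1)     # distances to negatives

    w_pos = torch.softmax(d_pos / tau, dim=0)          # farther positives get larger weight
    w_neg = torch.softmax(-d_neg / tau, dim=0)         # closer negatives get larger weight

    c_pos = (w_pos * d_pos).sum()                      # weighted mean distance to positives
    c_neg = (w_neg * d_neg).sum()                      # weighted mean distance to negatives
    return c_pos - c_neg
```

Note that, because the positives enter as a weighted set, the loss naturally handles several positive views per anchor in one minibatch.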
Although the description of the idea is somewhat chaotic, the reported results look good. Also, one more positive side effect: this loss easily works with multiple positive samples drawn in one minibatch.
Source here.
It was quite a vacation, huh. Now back to the matter.
Object-aware Contrastive Learning for Debiased Scene Representation, from current NeurIPS.
The authors proposed to alter the Class Activation Map (CAM) method a bit to make it ready for contrastive learning. They named the thing ContraCAM. It's just the usual CAM with:
1. loss replaced with contrastive loss
2. negative gradients dropped
3. iterative accumulation of the masks.
And this by itself yields unsupervised object localization with SoTA IoU.
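A schematic Grad-CAM-style sketch of how such a map could be computed with a contrastive score in place of a class logit (a heavy simplification: the negative-gradient dropping is shown, the iterative mask accumulation is only hinted at in a comment):

```python
import torch

def contra_cam_step(feats, contrastive_score):
    """feats: (C, H, W) feature maps with requires_grad enabled;
    contrastive_score: scalar similarity of this image against a batch of negatives."""
    grads, = torch.autograd.grad(contrastive_score, feats, retain_graph=True)
    grads = grads.clamp(min=0)                       # drop negative gradients
    weights = grads.mean(dim=(1, 2))                 # spatial average -> channel weights
    cam = torch.relu((weights[:, None, None] * feats).sum(dim=0))
    cam = cam / (cam.max() + 1e-8)                   # normalise to [0, 1]
    return cam                                       # accumulate over iterations with, e.g., torch.maximum
```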
Based on this localization, the authors proposed two augmentations to reduce negative biases in contrastive learning:
1. guided random crop, to avoid having multiple objects on one image; this avoids over-reliance on co-occurring objects.
2. replacing background (using a soft mask of the localization); this helps to avoid over-reliance on the typical background for the sample.
Since the localization is obtained without additional information, this is still a self-supervised approach and can therefore be directly compared with other self-supervised methods.
The authors evaluate these augmentations with both the self-supervised localization and ground-truth masks. They found that either way can produce a notable boost over the MoCo v2 and BYOL results.
More and with images here.
Source here.
PiCIE: Unsupervised Semantic Segmentation using Invariance and Equivariance in Clustering
Paper from CVPR'21
There is a more or less classical approach to deep unsupervised segmentation: cluster your embeddings and use the clusters as pseudo-labels; add tricks, repeat multiple times. In this paper the authors go one step further and unite it with self-supervision. They design a loss function that enforces invariance of these clustered representations to color augmentations and equivariance to spatial augmentations.
The algorithm of the loss calculation is:
1. Get two representations of the same image, both disturbed with different color augmentations but the same spatial augmentation: in the first case the image is disturbed before going through the network, in the second the output of the network is disturbed. Ideally both outputs should be identical, which would show invariance to the color augmentations and equivariance to the spatial ones. I will name these representations z1 and z2.
2. For each of those outputs, run KMeans clustering of the embeddings. I will name the obtained centroids µ1 and µ2.
3. The next step finally mixes those two spaces. Let's say that L(z, µ) is a loss that, for each vector in z, brings it closer to the nearest vector of µ (prototype learning waves). Then:
3.1. We enforce clustering within each representation with L(z1, µ1) + L(z2, µ2).
3.2. We enforce that this clustering holds across the representations with L(z1, µ2) + L(z2, µ1).
And that's it. Training with this approach achieves SoTA on unsupervised segmentation and shows qualitatively good object masks. The most improved part is "thing" (foreground object) segmentation, which is systematically problematic for unsupervised learning because of the huge imbalance in class sizes.
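A minimal sketch of a prototype loss L and the two terms above (the cross-entropy-over-centroid-distances form of L is my assumption; PiCIE's actual implementation differs in details):

```python
import torch

def proto_loss(z, mu, tau=1.0):
    """z: (N, D) pixel embeddings, mu: (K, D) centroids; cross-entropy against the
    nearest-centroid assignment, so each embedding is pulled towards its prototype."""
    dist = torch.cdist(z, mu)                      # (N, K) euclidean distances
    labels = dist.argmin(dim=1)                    # hard assignment to the nearest centroid
    return torch.nn.functional.cross_entropy(-dist / tau, labels)

def picie_like_loss(z1, z2, mu1, mu2):
    within = proto_loss(z1, mu1) + proto_loss(z2, mu2)   # 3.1: clustering within each view
    cross  = proto_loss(z1, mu2) + proto_loss(z2, mu1)   # 3.2: assignments must agree across views
    return within + cross
```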
More here.
Source here.
Lilian Weng's overview of contrastive learning is a good way to get a quick-dive intro.
MERLIN: another example of how important it is to know your data.
Imaging with SAR (Synthetic Aperture Radar) introduces a very specific type of noise called speckle. The problem with training a denoising model for this case is obtaining very specific data: pairs that carry the same information but have decorrelated noise, which can cost a lot in areas where SAR is applied, e.g. satellite imagery.
The authors proposed to exploit the structure of the image produced by SAR. These images are obtained as a pair of values per pixel: amplitude and phase. Typically the phase is considered unimportant for imaging, as the amplitude is what is of interest.
The authors demonstrate that, using the statistical model of the speckle noise, they can extract two noisy images from this data that carry the same information but have independent noise realisations. This way they can apply the Noise2Noise framework and train a NN to predict the true, noise-free amplitude.
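For reference, the Noise2Noise training step being reused here is essentially the following (a generic sketch with an L2 loss; MERLIN itself derives its loss from the statistical model of the speckle, which is not reproduced here):

```python
import torch.nn.functional as F

def noise2noise_step(model, optimizer, noisy_a, noisy_b):
    """noisy_a, noisy_b: two observations of the same scene with independent noise
    realisations; the network learns to predict one from the other, which in
    expectation converges to the clean signal."""
    optimizer.zero_grad()
    pred = model(noisy_a)
    loss = F.mse_loss(pred, noisy_b)
    loss.backward()
    optimizer.step()
    return loss.item()
```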
This allows training a neural network for each detector specifically, without the need to obtain expensive training data or to construct artificial data.
Source: here
How Useful is Self-Supervised Pretraining for Visual Tasks?
A relatively old paper (CVPR 2020) by our fast-paced standards. Nevertheless, it has a couple of practical takeaways.
The authors created a synthetic dataset with several degrees of freedom to vary the difficulty, from almost monochrome objects to randomized textures and positioning on the image.
The goal was to compare how much different self-supervised approaches help when tuning for different downstream tasks, from classification to depth estimation.
The two practical takeaways are:
1. The utility of a self-supervised method depends wildly on the task, the amount of markup, and even the data complexity.
2. A linear evaluation score, so popular in papers, has almost no correlation with actual fine-tuning results.
The authors found that there is no improvement from self-supervised pre-training when lots of labeled data is present (which has become rather well known since then). Based on this, they hypothesise that the improvement from SSL pre-training is a kind of regularization rather than optimization; that is, SSL pre-training helps to find a wider optimum, not a better one. Though, to claim this, some kind of loss-surface investigation would be more convincing.
Source: here
Improving Self-supervised Learning with Automated Unsupervised Outlier Arbitration
from NeurIPS 2021.
It has already been noted that the quality of contrastive learning may suffer from overly intense augmentation. In this paper the authors go one step further and try to understand the source of this.
The main hypothesis is that if augmentations are too intense, the assumption that the image information is invariant to augmentation simply breaks. That is, we augment images so hard that it isn't meaningful to ask a model to predict close embeddings for such different inputs.
To mitigate this, the authors propose to model the distribution of the embeddings of views (positive samples, i.e. different augmentations of the same image) as a normal distribution with a shared covariance matrix (experiments show the shared covariance matrix is, somehow, very effective), and then weight each component of the loss with a normalized distance between the two views being pulled together in that component. The distance here is the Mahalanobis distance defined by the fitted distribution.
To put it simpler: if two positive samples are too far away from each other, maybe they are not so positive after all?
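A rough sketch of the distance computation (fitting a shared covariance on the view differences within a batch; how exactly the resulting weights enter the loss is my assumption and not reproduced from the paper):

```python
import torch

def view_mahalanobis_weights(z1, z2, eps=1e-5):
    """z1, z2: (N, D) embeddings of the two views of each image in the batch.
    Returns normalized per-pair Mahalanobis distances under a shared covariance
    fitted on the view differences; pairs with a large distance are the likely
    'outlier' views whose loss components get re-weighted."""
    diff = z1 - z2
    cov = diff.t() @ diff / diff.size(0)               # shared covariance across the batch
    cov_inv = torch.linalg.inv(cov + eps * torch.eye(cov.size(0), device=cov.device))
    d2 = (diff @ cov_inv * diff).sum(dim=1)            # squared Mahalanobis distance per pair
    return d2 / d2.sum()                               # normalized, ready to be used as weights
```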
This makes contrastive methods not over-rely on the assumption of invariance to augmentation, and also makes them more aware of what happens in the embedding space itself.
Authors demonstrate consistent improvement for different contrastive losses.
Source: here
Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere
from ICML 2020.
It was previously noted that if one swaps the contrastive loss for a tighter bound on MI, the downstream quality decreases. The authors therefore propose to move from the InfoMax intuition to two rather simple concepts: alignment and uniformity. The former enforces that positive pairs stay as close as possible, the latter that all samples stay as evenly distributed over the hypersphere as possible.
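The two losses are simple enough to state directly (written from memory of the standard formulation; alpha and t are the usual defaults):

```python
import torch

def align_loss(x, y, alpha=2):
    """x, y: (N, D) L2-normalised embeddings of positive pairs."""
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniform_loss(x, t=2):
    """x: (N, D) L2-normalised embeddings; pushes them towards a uniform
    distribution on the hypersphere via the log of the average Gaussian potential."""
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()
```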
These components are empirically important for downstream performance. Furthermore, their direct optimization may outperform the classical contrastive loss training.
With images and a bit longer: here
Source: here