Self Supervised Boy
Posting links to papers I read. Right now I'm mostly interested in things around LLMs, AI agents, and ML4Code. That is subject to change.

@martolod
MERLIN: another example of how important it is to know your data.

Imaging with SAR (Synthetic Aperture Radar) introduces a very specific type of noise, called speckle. The problem with training a denoising model for this case is obtaining very specific data: pairs of images that carry the same information but have decorrelated noise, which can be very expensive in areas where SAR is applied, e.g. satellite imagery.

The authors propose to exploit the structure of the image produced by SAR. It comes as a pair of values per pixel: amplitude and phase. Typically, the phase is considered unimportant for imaging, since the amplitude is what is of interest.
The authors demonstrate that, using the statistical model of speckle noise, they can extract two noisy images from this data that carry the same information but have different, independent noise. This lets them apply the Noise2Noise framework and train a network to predict the true, noise-free amplitude.

This makes it possible to train a neural network for each sensor specifically, without the need to collect expensive training data or to construct synthetic data.
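For intuition, here is a minimal PyTorch sketch of a Noise2Noise-style training step on two decorrelated views of the same scene (in MERLIN, the two noisy images extracted from the complex SAR data play this role). The `Denoiser` architecture and the plain L2 loss are illustrative placeholders, not the authors' setup, which derives a likelihood-based loss from the speckle model.

```python
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Placeholder denoising network, not the paper's architecture."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def noise2noise_step(model, opt, view_a, view_b):
    """Predict the clean signal from one noisy view, score against the other.
    With independent noise in the two views, the noise itself cannot be
    predicted, so the optimum of the regression is the clean signal."""
    opt.zero_grad()
    pred = model(view_a)
    loss = ((pred - view_b) ** 2).mean()  # simplified L2 stand-in for the likelihood loss
    loss.backward()
    opt.step()
    return loss.item()

model = Denoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
a = torch.randn(8, 1, 64, 64)  # stand-ins for the two decorrelated views
b = torch.randn(8, 1, 64, 64)
noise2noise_step(model, opt, a, b)
```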

Source: here
πŸ‘1
How Useful is Self-Supervised Pretraining for Visual Tasks?

A relatively old paper (CVPR 2020) by the fast-paced standards of the field. Nevertheless, it offers a couple of practical takeaways.

The authors create a synthetic dataset with several degrees of freedom to vary difficulty, ranging from almost monochrome objects to randomized textures and object placement in the image.

The goal is to compare how much different self-supervised approaches help when tuning for different downstream tasks, from classification to depth estimation.

The two practical takeaways:
1. The utility of a self-supervised method depends wildly on the task, the amount of labeled data, and even the data complexity.
2. The linear evaluation score, so popular in papers, has almost no correlation with actual fine-tuning results (a sketch of the two protocols follows below).

The authors also find that self-supervised pretraining brings no improvement when plenty of labeled data is available (which has become fairly well known since then). Based on this, they hypothesize that the benefit of SSL pretraining is a form of regularization rather than optimization: it helps find a wider optimum, not a better one. Though to really support this claim, some kind of loss-landscape investigation would be needed.
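To make takeaway 2 concrete, here is what the two evaluation protocols look like side by side; a minimal PyTorch sketch with a placeholder backbone and dimensions, not the paper's setup:

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512), nn.ReLU())
head = nn.Linear(512, 10)

# Linear evaluation: freeze the pretrained backbone, train only a linear head.
for p in backbone.parameters():
    p.requires_grad = False
linear_opt = torch.optim.SGD(head.parameters(), lr=0.1)

# Fine-tuning: unfreeze everything and train backbone and head jointly.
for p in backbone.parameters():
    p.requires_grad = True
finetune_opt = torch.optim.SGD(
    list(backbone.parameters()) + list(head.parameters()), lr=0.01
)
# The paper's point: method rankings under the first protocol barely
# predict rankings under the second.
```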

Source: here
πŸ‘2
Improving Self-supervised Learning with Automated Unsupervised Outlier Arbitration
from NeurIPS 2021.

It has been noted before that the quality of contrastive learning may suffer from overly intense augmentation. In this paper, the authors go one step further and try to understand the source of this effect.

The main hypothesis: if augmentations are too intense, the assumption that the image's information is invariant to augmentation simply breaks. That is, we augment images so hard that it is no longer meaningful to ask the model to produce close embeddings for such different inputs.

To mitigate this, the authors propose to model the distribution of view embeddings (positive samples, i.e. different augmentations of the same image) as a normal distribution with a shared covariance matrix (experiments show that sharing the covariance matrix is, somehow, very effective). Each component of the loss is then weighted by a normalized distance between the two views it pulls together, where the distance is the Mahalanobis distance defined by the fitted distribution.

To put it more simply: if two positive samples are too far away from each other, maybe they are not so positive after all?
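Here is a rough PyTorch sketch of that weighting mechanism, assuming a batch of paired view embeddings `z1`, `z2`; the paper's exact covariance estimator and normalization differ, this only illustrates the idea:

```python
import torch

def mahalanobis_weights(z1, z2, eps=1e-5):
    """z1, z2: (N, D) embeddings of two views of the same N images."""
    z = torch.cat([z1, z2], dim=0)
    centered = z - z.mean(dim=0, keepdim=True)
    cov = centered.T @ centered / (z.shape[0] - 1)        # shared covariance
    cov_inv = torch.linalg.inv(cov + eps * torch.eye(z.shape[1]))
    diff = z1 - z2
    d2 = (diff @ cov_inv * diff).sum(dim=1)               # squared Mahalanobis distance
    d2 = d2 / d2.mean()                                   # normalize across the batch
    return torch.exp(-d2)                                 # far-apart "positives" get small weight

z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
w = mahalanobis_weights(z1, z2)
per_pair_loss = ((z1 - z2) ** 2).sum(dim=1)               # stand-in for any per-pair contrastive term
loss = (w * per_pair_loss).mean()
```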

This keeps contrastive methods from over-relying on the augmentation-invariance assumption, and also makes them more aware of what happens in the embedding space itself.

The authors demonstrate consistent improvements across different contrastive losses.
Source: here
πŸ‘1
Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere
from ICML 2020.

It was previously observed that replacing the contrastive loss with a tighter bound on mutual information actually decreases downstream quality. The authors therefore propose to move away from the InfoMax intuition toward two rather simple concepts: alignment and uniformity. The former enforces that positive pairs stay as close as possible; the latter enforces that all samples are spread as evenly as possible.

Both components turn out to be empirically important for downstream performance. Furthermore, optimizing them directly can outperform training with the classical contrastive loss.
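The paper gives both objectives in compact closed form, and they transcribe almost directly into PyTorch. Here `x` and `y` are L2-normalized embeddings of positive pairs, and `alpha` and `t` are the losses' hyperparameters (2 is the common setting for both):

```python
import torch
import torch.nn.functional as F

def align_loss(x, y, alpha=2):
    # Mean distance between positive pairs: lower = better aligned.
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniform_loss(x, t=2):
    # Log of the mean Gaussian potential over all pairs:
    # lower = more uniformly spread over the hypersphere.
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()

x = F.normalize(torch.randn(256, 128), dim=1)
y = F.normalize(x + 0.1 * torch.randn(256, 128), dim=1)
loss = align_loss(x, y) + uniform_loss(x)
```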

A longer version with images: here
Source: here
Well, it has been more than three years since the last post here, and in these three years a lot has changed. I finished my PhD at Heidelberg University and moved on to JetBrains to lead a team working on AI agents. With all this on my hands, I will have even less time to write the kind of reviews I'd like to read. But on the other hand, I'd still like to share the papers I read.

So instead, I will post links to the papers I read. You can view this experiment as copycatting @j_links, but with a bias towards LLMs and, probably, agents.
πŸ”₯10πŸ‘5