Data Science by ODS.ai 🦜
First Telegram Data Science channel. Covering all technical and popular stuff about anything related to Data Science: AI, Big Data, Machine Learning, Statistics, general Math and the applications of the former. To reach the editors, contact: @haarrp
Do Better ImageNet Models Transfer Better?

Finding: architectures that do better on ImageNet tend to do better on other datasets too. Surprise: ImageNet pretraining itself sometimes doesn't help very much.
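
To make the comparison concrete, here is a minimal PyTorch/torchvision sketch of the two regimes the paper studies at scale: fine-tuning from ImageNet weights versus training the same architecture from scratch. The dataset loader, the 100-class head, and the hyperparameters are placeholders, and `pretrained=` is the older torchvision flag.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_model(num_classes: int, pretrained: bool) -> nn.Module:
    # Same architecture in both regimes; only the initialization differs.
    model = models.resnet50(pretrained=pretrained)
    # Swap the 1000-way ImageNet head for one sized to the target dataset.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# Transfer setting: initialize from ImageNet weights and fine-tune everything.
finetuned = build_model(num_classes=100, pretrained=True)
# Baseline the paper compares against: the same network, randomly initialized.
from_scratch = build_model(num_classes=100, pretrained=False)

def fine_tune(model, loader, epochs=5, lr=1e-3):
    # `loader` stands in for a DataLoader over the target dataset (hypothetical).
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```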

ArXiv: https://arxiv.org/abs/1805.08974

#ImageNet #finetuning #transferlearning
Understanding Transfer Learning for Medical Imaging

ArXiv: https://arxiv.org/abs/1902.07208

#biolearning #dl #transferlearning
VirTex: Learning Visual Representations from Textual Annotations

The authors offer an alternative approach to pre-training backbones for CV tasks – using semantically dense captions to learn visual representations.

Recent methods have explored unsupervised pretraining to scale to vast quantities of unlabeled images. In contrast, the authors aim to learn high-quality visual representations from fewer images. They revisit supervised pretraining and seek data-efficient alternatives to classification-based pretraining.

VirTex (CNN + Transformer) is pre-trained on COCO captions. On downstream tasks it can reach performance similar to pre-training on ImageNet, but with 10x fewer images!
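
Not the authors' code, but a rough PyTorch sketch of the recipe, with hypothetical sizes (ResNet-50 backbone, 4 decoder layers, `vocab_size` from whatever tokenizer is used): image features feed a Transformer decoder trained to predict caption tokens, and only the visual backbone is kept for downstream tasks.

```python
import torch
import torch.nn as nn
from torchvision import models

class CaptioningPretrainer(nn.Module):
    """Rough VirTex-style setup: a CNN backbone trained through a
    Transformer decoder that predicts caption tokens."""

    def __init__(self, vocab_size: int, d_model: int = 512):
        super().__init__()
        resnet = models.resnet50(pretrained=False)
        # Keep spatial features (drop avgpool/fc) so the decoder can attend to them.
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])
        self.project = nn.Conv2d(2048, d_model, kernel_size=1)
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, images, caption_tokens):
        feats = self.project(self.backbone(images))   # (B, d, H/32, W/32)
        memory = feats.flatten(2).transpose(1, 2)      # (B, HW, d)
        tgt = self.embed(caption_tokens)                # (B, T, d)
        T = tgt.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        hidden = self.decoder(tgt, memory, tgt_mask=causal)
        return self.lm_head(hidden)                     # next-token logits

# After captioning pretraining on the caption data, only `model.backbone`
# is carried over to classification / detection heads.
```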


Paper: https://arxiv.org/abs/2006.06666
Code: https://github.com/kdexd/virtex
Site: https://kdexd.github.io/virtex/

#imagecaptioning #cv #visual #annotation #transformer #pretraining #transferlearning #deeplearning #paper
ReXNet: Diminishing Representational Bottleneck on Convolutional Neural Network

The authors propose a set of design principles that significantly improve model performance, based on an analysis of representational bottlenecks.

The authors argue that commonly used architectures suffer from a representational bottleneck and address it by expanding channel sizes, using more expand layers, and using better activation functions. These changes improve performance on ImageNet and yield good transfer-learning results on classification and object detection.
The authors hope that their design ideas can be used by NAS to create even better models.
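
A toy PyTorch sketch of the kind of block these principles point at (not the ReXNet reference implementation; the channel widths and expansion ratio below are made up): a wide expand layer, a smooth SiLU/Swish activation in place of plain ReLU, and channel counts that keep growing across stages.

```python
import torch.nn as nn

class ExpandBlock(nn.Module):
    """Toy inverted-bottleneck block: a generous expansion ratio and a
    smooth activation to widen the representational bottleneck."""

    def __init__(self, in_ch: int, out_ch: int, expand: int = 6, stride: int = 1):
        super().__init__()
        mid = in_ch * expand
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1, bias=False),   # expand layer
            nn.BatchNorm2d(mid),
            nn.SiLU(inplace=True),
            nn.Conv2d(mid, mid, 3, stride=stride, padding=1, groups=mid, bias=False),  # depthwise
            nn.BatchNorm2d(mid),
            nn.SiLU(inplace=True),
            nn.Conv2d(mid, out_ch, 1, bias=False),   # project (no activation: linear bottleneck)
            nn.BatchNorm2d(out_ch),
        )
        self.use_skip = stride == 1 and in_ch == out_ch

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_skip else out

# Channel widths that keep growing through the stages, rather than staying
# flat, illustrate the "diminish the bottleneck" idea (widths are arbitrary here).
stages = nn.Sequential(
    ExpandBlock(32, 48),
    ExpandBlock(48, 64, stride=2),
    ExpandBlock(64, 96, stride=2),
)
```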


Paper: https://arxiv.org/abs/2007.00992
Code: https://github.com/clovaai/rexnet

#deeplearning #pretraining #transferlearning #computervision #pytorch