Data Science by ODS.ai 🦜
First Telegram Data Science channel. Covering all technical and popular stuff about anything related to Data Science: AI, Big Data, Machine Learning, Statistics, general Math, and the applications of the former. To reach the editors, contact: @haarrp
DensePose: Dense Human Pose Estimation In The Wild

The Facebook AI Research group presented a paper on dense human pose estimation, which will help Facebook better understand the videos it processes.

NEW: DensePose-COCO, a large-scale ground-truth dataset with image-to-surface correspondences manually annotated on 50K COCO images.

Project website: http://densepose.org/
Arxiv: https://arxiv.org/abs/1802.00434

#facebook #fair #cvpr #cv #CNN #dataset
Faster R-CNN and Mask R-CNN in #PyTorch 1.0

Another release from #Facebook.

Mask R-CNN Benchmark: a fast and modular implementation of Faster R-CNN and Mask R-CNN written entirely in @PyTorch 1.0. It trains up to 30% faster than mmdetection.

A webcam demo and an example notebook are available.
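
For reference, a minimal inference sketch in the spirit of the repo's demo (the config path and the COCODemo helper follow the README at release time and may have changed since):

```python
import cv2

from maskrcnn_benchmark.config import cfg
from predictor import COCODemo  # lives in the repo's demo/ folder

# Load a Mask R-CNN config and run on CPU (use "cuda" for GPU inference)
cfg.merge_from_file("configs/caffe2/e2e_mask_rcnn_R_50_FPN_1x_caffe2.yaml")
cfg.merge_from_list(["MODEL.DEVICE", "cpu"])

coco_demo = COCODemo(cfg, min_image_size=800, confidence_threshold=0.7)

image = cv2.imread("input.jpg")  # BGR image, as OpenCV loads it
result = coco_demo.run_on_opencv_image(image)  # image with detections overlaid
```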

Github: https://github.com/facebookresearch/maskrcnn-benchmark

#CNN #CV #segmentation #detection
How Many Samples are Needed to Learn a Convolutional Neural Network

A paper questioning the common claim that CNNs use a more compact representation than fully-connected neural networks (FNNs) and therefore require fewer training samples to accurately estimate their parameters.
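
To see why the compactness intuition is plausible in the first place, here is a toy parameter count (illustrative numbers only, not from the paper):

```python
# A convolutional layer shares one small filter across all positions,
# while a fully-connected layer has a separate weight per input-output pair.
d = 32 * 32           # flattened input dimension, e.g. a 32x32 image
k = 5 * 5             # weights in a single 5x5 convolutional filter

fnn_params = d * d    # one dense layer mapping d inputs to d outputs
cnn_params = k        # one shared filter, reused at every position

print(fnn_params, cnn_params)  # 1048576 vs 25
```

The paper asks whether this gap in parameter count actually translates into a gap in sample complexity.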

ArXiV: https://arxiv.org/abs/1805.07883

#CNN #nn
Object detection SOTA evolution over time.

#SOTA #CNN #objectdetection
"Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet"

A "bag of words" of nets on tiny 17x17 patches suffice to reach AlexNet-level performance on ImageNet. A lot of the information is very local.

Paper: https://openreview.net/forum?id=SkfMWhAqYQ

#fun #CNN #CV #ImageNet
DGC-Net: Dense Geometric Correspondence network

The paper addresses the challenge of dense pixel correspondence estimation between two images. In practice, this means matching different views of the same object or scene, which is important for making #CV systems more robust.
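
A common building block for this kind of network is a global correlation volume that scores every pixel pair across the two views; a minimal sketch (illustrative, not the authors' implementation):

```python
import torch
import torch.nn.functional as F

def correlation_volume(feat_a, feat_b):
    """Cosine similarity between every location of two (B, C, H, W) feature maps."""
    b, c, h, w = feat_a.shape
    fa = F.normalize(feat_a, dim=1).view(b, c, h * w)  # (B, C, HW)
    fb = F.normalize(feat_b, dim=1).view(b, c, h * w)
    corr = torch.bmm(fa.transpose(1, 2), fb)           # (B, HW, HW)
    return corr.view(b, h, w, h, w)                    # score for each pixel pair

corr = correlation_volume(torch.randn(2, 128, 16, 16),
                          torch.randn(2, 128, 16, 16))
```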

ArXiV: https://arxiv.org/abs/1810.08393
Github: https://github.com/AaltoVision/DGC-Net
YouTube: https://www.youtube.com/watch?v=xnQMEr4FbHE
Project page: https://aaltovision.github.io/dgc-net-site/

#CNN #DL
Using ‘radioactive data’ to detect if a data set was used for training

The authors developed a technique to mark the images in a data set so that researchers can determine whether a particular machine learning model has been trained on those images. This helps researchers and engineers keep track of which data set was used to train a model, so they can better understand how various data sets affect the performance of different neural networks.

The key points (a toy sketch follows the list):
- the marks are harmless and have no impact on the classification accuracy of models, but are detectable with high confidence in a neural network;
- the image features are moved in a particular direction (the carrier) that has been sampled randomly and independently of the data;
- after a model is trained on such data, its classifier aligns with the direction of the carrier;
- the method works in such a way that it is difficult both to detect whether a data set is radioactive and to remove the marks from the trained model.
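
A toy sketch of the marking-and-detection idea, assuming a linear classifier on fixed features (illustrative only, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 512

# The carrier: a random unit direction, sampled independently of the data
carrier = rng.normal(size=dim)
carrier /= np.linalg.norm(carrier)

def mark(features, strength=0.1):
    # Shift each feature vector slightly along the carrier direction
    return features + strength * carrier

def carrier_alignment(classifier_weights):
    # Cosine similarity between class weights and the carrier; for an
    # unrelated model this concentrates near 0, so a large value is
    # statistical evidence that the marked data was used for training
    w = classifier_weights / np.linalg.norm(classifier_weights)
    return float(w @ carrier)
```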

blogpost: https://ai.facebook.com/blog/using-radioactive-data-to-detect-if-a-data-set-was-used-for-training/
paper: https://arxiv.org/abs/2002.00937

#cv #cnn #datavalidation #image #data
Are Pre-trained Convolutions Better than Pre-trained Transformers?

In this paper, the authors from Google Research investigate whether CNN architectures can be competitive with Transformers on NLP problems. It turns out that pre-trained CNN models outperform pre-trained Transformers on some tasks; they also train faster and scale better to longer sequences.
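
For intuition, a minimal sketch of a convolutional sequence-encoder block of the kind compared against Transformers (illustrative; the paper studies lightweight, dynamic, and dilated convolution variants):

```python
import torch
import torch.nn as nn

class ConvSeqBlock(nn.Module):
    """Depthwise-separable convolution over the time axis, wrapped in a
    Transformer-style residual connection and layer norm."""
    def __init__(self, dim=256, kernel=7):
        super().__init__()
        self.depthwise = nn.Conv1d(dim, dim, kernel, padding=kernel // 2, groups=dim)
        self.pointwise = nn.Conv1d(dim, dim, 1)  # mix information across channels
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                      # x: (B, T, D) token embeddings
        y = self.depthwise(x.transpose(1, 2))  # convolve along the T axis
        y = self.pointwise(torch.relu(y)).transpose(1, 2)
        return self.norm(x + y)                # residual + norm

block = ConvSeqBlock()
out = block(torch.randn(2, 128, 256))          # -> (2, 128, 256)
```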

Overall, the findings outlined in this paper suggest that conflating pre-training and architectural advances is misguided and that both advances should be considered independently. The authors believe their research paves the way for a healthy amount of optimism in alternative architectures.

Paper: https://arxiv.org/abs/2105.03322

A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-cnnbettertransformers

#nlp #deeplearning #cnn #transformer #pretraining