Data Science by ODS.ai 🦜
First Telegram Data Science channel. Covering all technical and popular stuff related to Data Science: AI, Big Data, Machine Learning, Statistics, general Math, and applications of the former. To reach the editors, contact: @haarrp
Neural Networks seem to follow a puzzlingly simple strategy to classify images

Interesting article on how #NN actually see images and what helps them distinguish between different classes.

Link: https://medium.com/bethgelab/neural-networks-seem-to-follow-a-puzzlingly-simple-strategy-to-classify-images-f4229317261f

#BagNet #ResNet #Dl #CV
Deep Neural Networks Improve Radiologists' Performance in Breast Cancer Screening

A deep convolutional neural network for breast cancer screening exam classification, trained and evaluated on over 200,000 exams (over 1,000,000 images). The #nn achieves an #AUC of 0.895 in predicting whether cancer is present in the breast when tested on the screening population.

Link: https://arxiv.org/abs/1903.08297

#cv #dl #cancer #objectdetection
A Recipe for Training Neural Networks by Andrej Karpathy

New article written by Andrej Karpathy distilling a bunch of useful heuristics for training neural nets. The post is full of real-world knowledge and how-to details that are not taught in books and often take endless hours to learn the hard way.

Link: https://karpathy.github.io/2019/04/25/recipe/

#tipsandtricks #karpathy #tutorial #nn #ml #dl
The lottery ticket hypothesis: finding sparse, trainable neural networks

Best paper award at #ICLR2019. Main idea: dense, randomly-initialized networks contain sparse subnetworks that, when trained in isolation, reach test accuracy comparable to the original network. This allows compressing the original network to as little as 10% of its original size.

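A minimal sketch of one round of the procedure (train, prune the smallest-magnitude weights, rewind the survivors to their initial values). This is not the authors' code, and `train` is a hypothetical stand-in for your own training loop:

```python
# Sketch of iterative magnitude pruning, one round. Assumes `train(model)`
# is your own training loop (a hypothetical stand-in, not the paper's code).
import torch

def find_lottery_ticket(model, train, prune_frac=0.2):
    # Remember the original random initialization.
    init_state = {n: p.detach().clone() for n, p in model.named_parameters()}
    train(model)  # 1. train the dense network to completion
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() < 2:  # conventionally, biases are not pruned
            continue
        k = max(1, int(prune_frac * p.numel()))
        threshold = p.abs().flatten().kthvalue(k).values
        masks[name] = (p.abs() > threshold).float()  # 2. drop smallest |w|
    with torch.no_grad():
        for name, p in model.named_parameters():
            p.copy_(init_state[name])  # 3. rewind weights to initialization
            if name in masks:
                p.mul_(masks[name])  # keep only the winning-ticket weights
    return model, masks
```
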
Paper: https://arxiv.org/pdf/1803.03635.pdf

#nn #research
Yet another good intro to the differences between artificial and biological neural networks.

If you're getting started in Data Science, you need to start with the basic building block of Neural Networks: the Perceptron. The link below is a good place to start.

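For reference, a perceptron fits in a few lines of numpy: a weighted sum followed by a step function, trained with the classic error-driven update rule (a toy sketch, not taken from the article):

```python
import numpy as np

class Perceptron:
    def __init__(self, n_inputs, lr=0.1):
        self.w = np.zeros(n_inputs)  # weights
        self.b = 0.0                 # bias
        self.lr = lr                 # learning rate

    def predict(self, x):
        # Step activation over the weighted sum.
        return 1 if x @ self.w + self.b > 0 else 0

    def fit(self, X, y, epochs=10):
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                err = yi - self.predict(xi)  # 0 if correct, +/-1 if wrong
                self.w += self.lr * err * xi
                self.b += self.lr * err

# It can learn any linearly separable function, e.g. logical AND:
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
p = Perceptron(2)
p.fit(X, y)
print([p.predict(xi) for xi in X])  # -> [0, 0, 0, 1]
```
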
Link: https://towardsdatascience.com/the-differences-between-artificial-and-biological-neural-networks-a8b46db828b7

#nn #entrylevel #beginner
🥇 Parameter optimization in neural networks.

Play with three interactive visualizations and develop your intuition for optimizing model parameters.

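If you prefer code to sliders: the update rules behind those visualizations are tiny. Here is a toy comparison of plain gradient descent and gradient descent with momentum on f(x) = x^2, with made-up hyperparameters:

```python
# Toy comparison: vanilla gradient descent vs. momentum on f(x) = x**2.
def grad(x):
    return 2 * x  # derivative of x**2

x_gd, x_mom, v = 5.0, 5.0, 0.0
lr, beta = 0.1, 0.9  # learning rate and momentum coefficient (assumed values)
for _ in range(50):
    x_gd -= lr * grad(x_gd)     # plain update: step against the gradient
    v = beta * v + grad(x_mom)  # momentum: accumulate past gradients
    x_mom -= lr * v
print(x_gd, x_mom)  # both end up near the minimum at x = 0
```
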
Link: https://www.deeplearning.ai/ai-notes/optimization/

#interactive #demo #optimization #parameteroptimization #novice #entrylevel #beginner #goldcontent #nn #neuralnetwork
And the Bit Goes Down: Revisiting the Quantization of Neural Networks

Researchers at Facebook AI Research found a way to compress neural networks with minimal sacrifice in accuracy.

For now, it works only on fully connected and convolutional layers.

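The paper uses product quantization with codebooks learned to preserve the layer's outputs. As a much simpler illustration of codebook-based weight compression (not the paper's method), one can cluster weight sub-vectors with k-means:

```python
# Toy codebook quantization: split weights into sub-vectors, learn 256
# centroids with k-means, store 1-byte codes instead of float sub-vectors.
import numpy as np
from sklearn.cluster import KMeans

def quantize(W, block=4, k=256):
    rows, cols = W.shape
    assert cols % block == 0
    subvecs = W.reshape(-1, block)                # sub-vectors of length `block`
    km = KMeans(n_clusters=k, n_init=4).fit(subvecs)
    codes = km.predict(subvecs).astype(np.uint8)  # 1 byte per sub-vector
    W_hat = km.cluster_centers_[codes].reshape(rows, cols)
    return W_hat, codes, km.cluster_centers_

W = np.random.randn(128, 64).astype(np.float32)
W_hat, codes, codebook = quantize(W)
print("reconstruction MSE:", float(np.mean((W - W_hat) ** 2)))
```
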
Link: https://arxiv.org/abs/1907.05686

#nn #DL #minimization #compression
27,600 V100 GPUs, 0.5 PB data, and a neural net with 220,000,000 weights

In case you were wondering, it was all used to address a scientific inverse problem in materials imaging.

ArXiV: https://arxiv.org/pdf/1909.11150.pdf

#ItIsNotAboutSize #nn #dl
Applying deep learning to Airbnb search

The story of how the #Airbnb research team moved from #GBDT (gradient boosting) to #NN (neural networks) for search, with all the metrics and hypotheses.

Link: https://blog.acolyer.org/2019/10/09/applying-deep-learning-to-airbnb-search/
Two papers showing that random architecture search is a competitive (and in some cases superior) baseline for NAS methods.

Both papers demonstrate that Neural Architecture Search can be reduced to stochastic (random) sampling of architectures.

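A minimal sketch of such a random-search baseline (the search space and `train_and_eval` below are hypothetical stand-ins, not taken from either paper):

```python
import random

# Hypothetical toy search space, not the one used in the papers.
SEARCH_SPACE = {
    "depth": [2, 4, 8],
    "width": [64, 128, 256],
    "op": ["conv3x3", "conv5x5", "sep_conv", "skip"],
}

def random_search(train_and_eval, budget=20):
    """Sample architectures uniformly, train each briefly, keep the best.
    `train_and_eval(arch)` is assumed to return validation accuracy."""
    best_arch, best_acc = None, -1.0
    for _ in range(budget):
        arch = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        acc = train_and_eval(arch)
        if acc > best_acc:
            best_arch, best_acc = arch, acc
    return best_arch, best_acc
```
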
Paper 1: https://arxiv.org/abs/1902.08142
Paper 2: https://arxiv.org/abs/1902.07638

#NAS #nn #DL
Free eBook from Stanford: Introduction to Applied Linear Algebra – Vectors, Matrices, and Least Squares

The foundational material you need to understand how neural networks and other #ML algorithms work.

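As a taste of the book's material, its central least-squares problem min ||Ax - b||^2 is one line of numpy (a toy example with made-up numbers):

```python
import numpy as np

# Fit a line to three points by least squares: columns are [intercept, slope].
A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x)  # best-fit intercept and slope
```
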
Link: https://web.stanford.edu/~boyd/vmls/

#Stanford #MOOC #WhereToStart #free #ebook #algebra #linalg #NN
Characterising Bias in Compressed Models

Popular compression techniques turned out to amplify bias in deep neural networks.

ArXiV: https://arxiv.org/abs/2010.03058

#NN #DL #bias
Towards Causal Representation Learning

Work on how neural networks derive causal variables from low-level observations.

Link: https://arxiv.org/abs/2102.11107

#causallearning #bengio #nn #DL
Deep Neural Nets: 33 years ago and 33 years from now

Great post by Andrej Karpathy on the progress #CV made in 33 years.

The author's ideas on what a time traveler from 2055 would think about the performance of current networks:

* 2055 neural nets are basically the same as 2022 neural nets on the macro level, except bigger.
* Our datasets and models today look like a joke. Both are somewhere around 10,000,000X larger.
* One can train 2022 state of the art models in ~1 minute by training naively on their personal computing device as a weekend fun project.
* Today’s models are not optimally formulated; just by changing some details of the model, loss function, augmentation or optimizer, one could roughly halve the error.
* Our datasets are too small, and modest gains would come from scaling up the dataset alone.
* Further gains are actually not possible without expanding the computing infrastructure and investing in R&D on effectively training models at that scale.


Website: https://karpathy.github.io/2022/03/14/lecun1989/
OG Paper link: http://yann.lecun.com/exdb/publis/pdf/lecun-89e.pdf

#karpathy #archeology #cv #nn