A subscriber sent a really decent scientific ranking of CS universities

http://csrankings.org/#/index?all&worldpu

Useful if you want to apply for a CS/ML-based Ph.D. there

#deep_learning
If someone needs a dataset, Kaggle launched an ImageNet object localization challenge
- https://www.kaggle.com/c/imagenet-object-localization-challenge#description

There is also the Open Images dataset, which I guess is bigger, though

#deep_learning
2018 DS/ML digest 13

Blog posts / articles:
(0) Google notes on CNN generalization - https://goo.gl/XS4KAw
(1) Google teaches robots in a virtual environment and then transfers the models to reality - https://goo.gl/aAYCqE
(2) Google's object tracking via image colorization - https://goo.gl/xchvBQ
(3) Interesting articles about VAEs (see the loss sketch after this list):
- A small intro into VAEs (in Russian) https://habr.com/company/otus/blog/358946/
- A small intuitive intro (super cool)
https://towardsdatascience.com/intuitively-understanding-variational-autoencoders-1bfe67eb5daf
- KL divergence explained
https://www.countbayesie.com/blog/2017/5/9/kullback-leibler-divergence-explained
- A more formal write-up http://arxiv.org/abs/1606.05908
- Converting a FC layer into a conv layer (see the sketch after this list) http://cs231n.github.io/convolutional-networks/#convert
- A post by Fchollet https://blog.keras.io/building-autoencoders-in-keras.html
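
To tie the VAE links together: the training loss is a reconstruction term plus the KL divergence between the approximate posterior and the unit-Gaussian prior, and for diagonal Gaussians the KL has a closed form. A minimal PyTorch sketch (the MSE reconstruction term is my assumption; BCE is equally common):

```python
import torch
import torch.nn.functional as F

def vae_loss(recon_x, x, mu, logvar):
    # Reconstruction term (MSE here; BCE is also a common choice)
    recon = F.mse_loss(recon_x, x, reduction='sum')
    # Analytic KL between N(mu, sigma^2) and the unit Gaussian prior:
    # KL = -0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld

def reparameterize(mu, logvar):
    # Sample z = mu + sigma * eps so gradients flow through mu / logvar
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)
```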
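
And a quick illustration of the FC-to-conv conversion from the cs231n link above: a fully-connected layer applied to a C x H x W feature map is the same linear operation as a convolution whose kernel covers the whole map, so you only need to reshape the weights:

```python
import torch
import torch.nn as nn

# FC layer over a 512x7x7 feature map vs. an equivalent 7x7 conv
fc = nn.Linear(512 * 7 * 7, 10)
conv = nn.Conv2d(512, 10, kernel_size=7)

# Same weights, reshaped from (10, 512*7*7) to (10, 512, 7, 7)
conv.weight.data = fc.weight.data.view(10, 512, 7, 7)
conv.bias.data = fc.bias.data

x = torch.randn(1, 512, 7, 7)
print(torch.allclose(fc(x.view(1, -1)),
                     conv(x).view(1, -1), atol=1e-4))  # True
```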

A good in-depth write-up on object detection:
- http://machinethink.net/blog/object-detection/
- finally a decent explanation of YOLO parametrization http://machinethink.net/images/object-detection/grid@2x.png
- best comparison of YOLO and SSD ever - http://machinethink.net/images/object-detection/architectures@2x.png
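
To make the grid parametrization concrete, a rough sketch of how YOLO-style raw outputs are decoded into boxes (grid size, image size and the anchor dimensions below are made-up placeholders):

```python
import torch

S, img_size = 13, 416                      # 13x13 grid over a 416px image
pred = torch.randn(S, S, 5)                # dummy (tx, ty, tw, th, obj) maps
anchor_w, anchor_h = 100.0, 100.0          # hypothetical anchor box size

xs = torch.arange(S).float().repeat(S, 1)             # cell column indices
ys = torch.arange(S).float().view(S, 1).repeat(1, S)  # cell row indices
cell = img_size / S

# Sigmoid keeps each predicted center inside its own cell
cx = (xs + torch.sigmoid(pred[..., 0])) * cell
cy = (ys + torch.sigmoid(pred[..., 1])) * cell
# Exponential scales the anchor to the predicted box size
w = anchor_w * torch.exp(pred[..., 2])
h = anchor_h * torch.exp(pred[..., 3])
obj = torch.sigmoid(pred[..., 4])          # objectness score per cell
```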


Papers with interesting abstracts (just good to know such things exist)
- Low-bit CNNs - https://ai.intel.com/nervana/wp-content/uploads/sites/53/2018/06/ELQ_CameraReady_CVPR2018.pdf
- Automated Meta ML - https://arxiv.org/abs/1806.06927
- Idea - use ResNet blocks for boosting - https://arxiv.org/abs/1706.04964
- 2D-discrete-Fourier transform (2D-DFT) to encode rotational invariance in neural networks - https://arxiv.org/abs/1805.12301
- Smallify the CNNs - https://arxiv.org/abs/1806.03723
- A review of BLEU as a metric - conclusion: on average it is a good measure of MT performance - https://www.mitpressjournals.org/doi/abs/10.1162/COLI_a_00322


"New" ideas in SemSeg:
- UNET + conditional VAE http://arxiv.org/abs/1806.05034
- Dilated convolutions for large satellite images http://arxiv.org/abs/1709.00179 - it looks like this works only if you have high-resolution images with small objects

#digest
#deep_learning
DL Framework choice - 2018

If you are still new to DL / DS / ML and have not yet chosen your framework, consider reading this before proceeding

- https://deepsense.ai/keras-or-pytorch/

#deep_learning
Playing with PyTorch 0.4

It was released some time ago.
If you are not aware, this is the best summary:
https://pytorch.org/2018/04/22/0_4_0-migration-guide.html

My first-hand experiences
- Multi-GPU support works strangely
- If you just launch your 0.3 code, it will work on 0.4 with warnings - not really a breaking change
- All the new features are really cool, useful and make using PyTorch even more delightful
- I especially liked how they added context managers and cleaned up the device mess (see the snippet below)
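
For reference, the two things I liked, in code (all of this is from the migration guide above):

```python
import torch

# One torch.device object instead of scattered .cuda() calls
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = torch.nn.Linear(10, 2).to(device)
x = torch.randn(4, 10, device=device)

# Context manager replaces the old volatile=True flag for inference
with torch.no_grad():
    y = model(x)
```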

#deep_learning
Measuring feature importance properly

http://explained.ai/rf-importance/index.html

Once again stumbled upon an amazing article about measuring feature importance for any ML algorithm:
(0) Permutation importance - if re-training your model is costly, you can just shuffle a column and measure the drop in the performance metric (sketch below)
(1) Drop-column importance - drop a column, re-train the model, compare the performance metrics
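
A minimal sketch of permutation importance, assuming a fitted regressor and held-out X_val / y_val numpy arrays (all names are placeholders):

```python
import numpy as np
from sklearn.metrics import r2_score

def permutation_importance(model, X_val, y_val):
    # Importance = drop in the validation metric after shuffling a column
    baseline = r2_score(y_val, model.predict(X_val))
    importances = []
    for col in range(X_val.shape[1]):
        X_perm = X_val.copy()
        np.random.shuffle(X_perm[:, col])   # break the feature-target link
        importances.append(baseline - r2_score(y_val, model.predict(X_perm)))
    return np.array(importances)
```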

Why it is useful / caveats
(0) If you really care about understanding your domain, feature importances are a must-have
(1) All of this works only for powerful models
(2) Landmines include correlated or duplicate variables and data normalization

Correlated variables
(0) For RF - correlated variables share permutation importance roughly proportionally to their correlation
(1) Drop column importance can behave unpredictably

I personally like engineering different kinds of features and doing ablation tests (see the sketch after this list):
(0) Among feature sets that share a similar purpose
(1) Within feature sets
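
A sketch of what such an ablation test can look like, with hypothetical feature groups and a generic CV score (all names are placeholders):

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def ablation_test(df, target, groups):
    # groups: e.g. {'geo': ['lat', 'lon'], 'time': ['hour', 'dow']}
    all_cols = [c for cols in groups.values() for c in cols]
    base = cross_val_score(RandomForestRegressor(), df[all_cols], target).mean()
    for name, cols in groups.items():
        kept = [c for c in all_cols if c not in cols]
        score = cross_val_score(RandomForestRegressor(), df[kept], target).mean()
        print(name, base - score)   # CV score lost without this group
```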

#data_science
2018 DS/ML digest 14

Amazing article - why you do not need ML
- https://cyberomin.github.io/startup/2018/07/01/sql-ml-ai.html
- I personally love plain-vanilla SQL and in 90% of cases people under-use it
- I even wrote 90% of my JSON API on our blog in pure PostgreSQL xD

Practice / papers
(0) Interesting papers from CVPR https://towardsdatascience.com/the-10-coolest-papers-from-cvpr-2018-11cb48585a49
(1) Some down-to-earth obstacles to deploying ML (in Russian) https://habr.com/company/hh/blog/415437/
(2) Using synthetic data for CNNs (by Nvidia) - https://arxiv.org/pdf/1804.06516.pdf
(3) This puzzles me - so much effort and engineering spent on something ... strange and useless - http://taskonomy.stanford.edu/index.html
On paper they do a cool thing - investigate transfer learning between different domains - but in practice it is done in TF and there is no clear conclusion of any kind
(4) VAE + real datasets http://siavashk.github.io/2016/02/22/autoencoder-imagenet/ - only small Imagenet (64x64)
(5) Understanding the speed of models deployed on mobile - http://machinethink.net/blog/how-fast-is-my-model/ (see the back-of-envelope sketch after this list)
(6) A brief overview of multi-modal methods https://medium.com/mlreview/multi-modal-methods-image-captioning-from-translation-to-attention-895b6444256e
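
A back-of-envelope calculation in the spirit of link (5): the dominant cost of a conv layer is one multiply-accumulate per kernel weight per output position:

```python
# MACs for one conv layer: K * K * C_in * C_out * H_out * W_out
k, c_in, c_out, h_out, w_out = 3, 64, 128, 56, 56
macs = k * k * c_in * c_out * h_out * w_out
print(f'{macs / 1e9:.2f} GMACs')   # ~0.23 GMACs for this single layer
```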

Visualizations / explanations
(0) Amazing website with ML explanations http://explained.ai/
(1) PCA and linear autoencoders are close https://pvirie.wordpress.com/2016/03/29/linear-autoencoders-do-pca/
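
The claim in (1) is easy to check numerically: a linear autoencoder with a k-dim bottleneck, trained to optimality, reaches the same reconstruction error as rank-k PCA. A toy sketch:

```python
import numpy as np
import torch

rng = np.random.RandomState(0)
X = rng.randn(512, 10) @ rng.randn(10, 10)   # correlated toy features
X -= X.mean(axis=0)                          # center, as PCA does
Xt = torch.tensor(X, dtype=torch.float32)

k = 3
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pca_err = ((X - X @ Vt[:k].T @ Vt[:k]) ** 2).mean()

enc = torch.nn.Linear(10, k, bias=False)
dec = torch.nn.Linear(k, 10, bias=False)
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = ((dec(enc(Xt)) - Xt) ** 2).mean()
    loss.backward()
    opt.step()

print(pca_err, loss.item())   # the two errors should nearly coincide
```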

#deep_learning
#digest
#data_science
Open Images Object detection on Kaggle

- https://www.kaggle.com/c/google-ai-open-images-object-detection-track#Description

- Key ideas
-- 1.2M images, high-res, 500 classes
-- decent prizes, but short time-span (2 months)
-- object detection

#deep_learning
2018 DS/ML digest 15

What I filtered through this time

Market / news
(0) Letters by big company employees against using ML for weapons
- Microsoft
- Amazon
(1) Facebook open-sources DensePose (essentially this is Mask R-CNN)
- https://research.fb.com/facebook-open-sources-densepose/

Papers / posts / NLP
(0) One more blog post about text / sentence embeddings https://goo.gl/Zm8C2c
- key idea: different weighting

(1) One more sentence embedding calculation method (rough sketch below)
- https://openreview.net/pdf?id=SyK00v5xx
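
If I remember the paper right, the recipe is a frequency-weighted average of word vectors with the common component removed; a rough sketch with hypothetical inputs:

```python
import numpy as np

# Assumed inputs: word_vecs (vocab x dim), word_probs (unigram
# frequencies, vocab-sized array); sentences are lists of word ids
def sentence_embeddings(sentences, word_vecs, word_probs, a=1e-3):
    embs = []
    for sent in sentences:
        weights = a / (a + word_probs[sent])          # rare words weigh more
        embs.append((weights[:, None] * word_vecs[sent]).mean(axis=0))
    embs = np.stack(embs)
    # Subtract the projection onto the first singular vector
    u = np.linalg.svd(embs, full_matrices=False)[2][0]
    return embs - np.outer(embs @ u, u)
```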

(2) Posts explaining NLP embeddings
- http://www.offconvex.org/2015/12/12/word-embeddings-1/ - some basics - SVD / Word2Vec / GloVe (toy SVD sketch after this list)
-- SVD improves embedding quality (compared to one-hot encodings)?
-- use log-weighting, use TF-IDF weighting (the above weighting)
- http://www.offconvex.org/2016/02/14/word-embeddings-2/ - word embedding properties
-- dimensions vs. embedding quality http://www.cs.princeton.edu/~arora/pubs/LSAgraph.jpg
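
A toy version of the SVD approach from the first post, on a synthetic co-occurrence matrix (a real pipeline would build cooc from corpus counts):

```python
import numpy as np

vocab, dim = 1000, 100
rng = np.random.RandomState(0)
cooc = rng.poisson(1.0, (vocab, vocab)).astype(float)  # fake co-occurrence counts

M = np.log1p(cooc)                    # log weighting tames heavy-tailed counts
U, s, _ = np.linalg.svd(M, full_matrices=False)
embeddings = U[:, :dim] * s[:dim]     # one dense vector per word
```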

(3) Spacy + Cython = 100x speed boost - https://goo.gl/9TwVqu - good to know about this as a last resort
- described use case:
-- you are pre-processing a large training set for a deep learning framework like PyTorch/TensorFlow
-- or you have heavy processing logic in your deep learning batch loader that slows down your training

(4) Once again stumbled upon this - https://blog.openai.com/language-unsupervised/

(5) Papers
- Simple NLP embedding baseline https://goo.gl/nGujzS
- NLP decathlon for question answering https://goo.gl/6HHi7q
- Debiasing embeddings https://arxiv.org/abs/1806.06301
- Once again, transfer learning in NLP by OpenAI - https://goo.gl/82VR4U

#deep_learning
#digest
#data_science