Spark in me
Lost like tears in rain. DS, ML, a bit of philosophy and math. No bs or ads.
Serialization of large objects in Python

So far I have found no sane way to do this with ~1M chunks / 10GB+ object sizes.

Of course, chunking / plain text works.

Feather / parquet fail at 2+ GB sizes.
Pickle works, but it is kind of slow (a chunked sketch below).

=(
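
One workaround that keeps pickle but avoids huge single buffers - a minimal sketch, assuming the object can be sliced into chunks (the names here are hypothetical):

import pickle

CHUNK = 1_000_000  # items per chunk

def dump_chunked(big_list, path):
    # append several smaller pickles into one file instead of one 10GB+ blob
    with open(path, 'wb') as f:
        for i in range(0, len(big_list), CHUNK):
            pickle.dump(big_list[i:i + CHUNK], f, protocol=pickle.HIGHEST_PROTOCOL)

def load_chunked(path):
    items = []
    with open(path, 'rb') as f:
        while True:
            try:
                items.extend(pickle.load(f))
            except EOFError:
                break
    return items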

#data_science
Jupyter widgets + pandas

https://towardsdatascience.com/interactive-controls-for-jupyter-notebooks-f5c94829aee6

With the @interact decorator, the IPywidgets library automatically gives us a text box and a slider for choosing a column and number! It looks at the inputs

Amazing.
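
A minimal sketch of that pattern (the DataFrame and ranges here are made up):

import pandas as pd
from ipywidgets import interact

df = pd.DataFrame({'price': range(100), 'volume': range(100, 200)})

# a list becomes a dropdown, an (int, int) tuple becomes a slider
@interact(column=list(df.columns), rows=(1, 20))
def show_head(column, rows):
    return df[column].head(rows)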

#data_science
Forwarded from Anna
Checked out sentence embeddings in LASER:
- the installation guide is a bit messy
- works with the FAISS lib; performance is pretty fast (<1 minute to encode 250k sentences on a 1080 Ti)
- better generalization compared to the fastText baseline. The difference is clear even for short sentences: the embeddings of 'добрый день!' ('good day!') and 'здравствуйте!' ('hello!') are much closer in LASER's space than in fastText's (quick check below)
- looks like LASER embeddings are more about similarity, not only substitutability, and are better at synonym recognition
- seems to work better on short sentences
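
A quick way to check the "much closer in LASER's space" claim - a sketch using the third-party laserembeddings wrapper (an assumption on my side, the original repo ships shell scripts; models have to be downloaded first via `python -m laserembeddings download-models`):

import numpy as np
from laserembeddings import Laser  # third-party wrapper around LASER

laser = Laser()
vecs = laser.embed_sentences(['добрый день!', 'здравствуйте!'], lang='ru')  # (2, 1024)

cos = float(vecs[0] @ vecs[1] / (np.linalg.norm(vecs[0]) * np.linalg.norm(vecs[1])))
print(f'cosine similarity: {cos:.3f}')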
Old news ... but Attention works

Funnily enough, in the past my models:
- either did not need attention;
- had attention implemented by @thinline72;
- or the domain was so complicated (NMT) that I had to resort to boilerplate with key-value attention;

It was the first time I / we tried manually building a model with plain self-attention from scratch.

And you know - it really adds 5-10% to all of the tracked metrics.

The best plain attention layer in PyTorch - simple, well documented ... and it works in real-life applications:
https://gist.github.com/cbaziotis/94e53bdd6e4852756e0395560ff38aa4
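
Not the gist verbatim - just a minimal sketch of the same idea (score every timestep, softmax over time, take the weighted sum):

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentionPooling(nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, 1, bias=False)

    def forward(self, x, mask=None):
        # x: (batch, seq_len, hidden), mask: (batch, seq_len), 1 for real tokens
        scores = self.scorer(x).squeeze(-1)                      # (batch, seq_len)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, -1e9)
        weights = F.softmax(scores, dim=-1)                      # attention over timesteps
        pooled = torch.bmm(weights.unsqueeze(1), x).squeeze(1)   # (batch, hidden)
        return pooled, weights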

#nlp
#deep_learning
PyTorch NLP best practices

Very simple ideas, actually.

(1) Multi GPU parallelization and FP16 training

Do not bother reinventing the wheel.
Just use nvidia's apex, DistributedDataParallel, DataParallel.
Best examples [here](https://github.com/huggingface/pytorch-pretrained-BERT).
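
A minimal sketch of wiring these together on a single node (assumes apex is installed and CUDA is available; the tiny model is just a placeholder):

import torch
from torch import nn
from apex import amp  # https://github.com/NVIDIA/apex

model = nn.Linear(512, 2).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# patch model / optimizer for mixed precision, then wrap for multi-GPU
model, optimizer = amp.initialize(model, optimizer, opt_level='O1')
model = nn.DataParallel(model)  # use DistributedDataParallel for multi-node setups

x = torch.randn(8, 512).cuda()
loss = model(x).sum()
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
optimizer.step()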

(2) Put as much as possible INSIDE of the model

Implement as much of your logic as possible inside nn.Module.
Why?
So that you can seamlessly use all the abstractions from (1).
Also, models become more abstract and reusable in general.

(3) Why have a separate train/val loop?

PyTorch 0.4 introduced context managers.

You can simplify your train / val / test loops, and merge them into one simple function.

context = torch.no_grad() if loop_type == 'Val' else torch.enable_grad()

if loop_type == 'Train':
    model.train()
elif loop_type == 'Val':
    model.eval()

with context:
    for i, (some_tensor) in enumerate(tqdm(train_loader)):
        # do your stuff here
        pass

(4) EmbeddingBag

Use EmbeddingBag layer for morphologically rich languages. Seriously!
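
A minimal sketch - summing (hypothetical) subword-ngram embeddings into one vector per token:

import torch
from torch import nn

bag = nn.EmbeddingBag(num_embeddings=10_000, embedding_dim=300, mode='sum')

# a flat list of ngram ids plus offsets marking where each token's ngrams start
ngram_ids = torch.tensor([1, 2, 5, 7, 3, 9], dtype=torch.long)
offsets = torch.tensor([0, 3], dtype=torch.long)  # token 0 -> ids[0:3], token 1 -> ids[3:]

token_vectors = bag(ngram_ids, offsets)  # (2, 300), no intermediate per-ngram tensor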

(5) Writing trainers / training abstractions

This is a waste of time imho if you follow (1), (2) and (3).

(6) Nice bonus

If you follow most of these, you can train on as many GPUs and machines as you want, for any language)

(7) Using tensorboard for logging

This goes without saying.

#nlp
#deep_learning
PyTorch DataLoader, GIL thrashing and CNNs

Well all of this seems a bit like magic to me, but hear me out.

I abused my GPU box for weeks running CNNs on 2-4 GPUs.
Nothing broke.
And then my GPU box started shutting down for no apparent reason.

No, this was not:
- CPU overheating (I have a massive cooler, and I checked - it works);
- the PSU;
- overclocking.
It also added to the confusion that AMD has weird temperature readings.

To cut a long story short: if you have a very fast Dataset class and use PyTorch's DataLoader with num_workers > 0, it can lead to system instability instead of a speed-up.

It is obvious in retrospect, but it is not when you face this issue.
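
For reference, the knob in question (MyFastDataset is a hypothetical stand-in for a Dataset whose __getitem__ is nearly free):

import torch
from torch.utils.data import DataLoader, Dataset

class MyFastDataset(Dataset):
    def __len__(self):
        return 100_000

    def __getitem__(self, idx):
        # nearly free __getitem__ - the workers have almost nothing to do
        return torch.zeros(3, 224, 224)

# with a dataset this cheap, extra worker processes mostly add IPC / scheduling overhead
loader = DataLoader(MyFastDataset(), batch_size=256, num_workers=0)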

#deep_learning
#pytorch
*
(2) is valid for models with a complex forward pass and for models with large embedding layers
Pinned post

What is this channel about?
(0)
This channel is a practitioner's channel on the following topics: Internet, Data Science, Deep Learning, Python, NLP

(1)
Don't get in a twist if your opinion differs.
You are welcome to contact me via telegram @snakers41 and email - aveysov@gmail.com

(2)
No BS and ads - I already rejected 3-4 crappy ad deals

(3)
DS / ML digests - via RSS or URLs like this:
https://spark-in.me/post/2019_ds_ml_digest_01

Donations
(0)
Buy me a coffee 🤟 https://buymeacoff.ee/8oneCIN

Give us a rating:
(0)
https://telegram.me/tchannelsbot?start=snakers4

Our chat
(0)
https://t.me/joinchat/Bv9tjkH9JHYvOr92hi5LxQ

More links
(0)
Our website http://spark-in.me

(1)
Our chat https://t.me/joinchat/Bv9tjkH9JHYvOr92hi5LxQ

(2)
DS courses review (RU) - very old
http://goo.gl/5VGU5A
https://spark-in.me/post/learn-data-science

(3)
2017 - 2018 SpaceNet Challenge
https://spark-in.me/post/spacenet-three-challenge

(4)
DS Bowl 2018
https://spark-in.me/post/playing-with-dwt-and-ds-bowl-2018

(5)
Data Science tag on the website
https://spark-in.me/tag/data-science

(6)
Profi.ru project
http://towardsdatascience.com/building-client-routing-semantic-search-in-the-wild-14db04687c7e

(7)
CFT 2018 competition
https://spark-in.me/post/cft-spelling-2018

(8)
2018 retrospective
https://spark-in.me/post/2018

More amazing NLP-related articles incoming!
Maybe we will finally make podcasts?
A bit of lazy Sunday admin stuff

Monitoring your CPU temperature with email notifications

- Change CPU temp to any metric you like
- Rolling log
- Sends an email only once, when the metric becomes critical (you can add an email for when it becomes non-critical again)

https://gist.github.com/snakers4/cf0ffd57c3ef7f4e2e25f6b3347dcdec
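
Not the gist verbatim - a minimal sketch of the same idea, assuming a Linux box with psutil and a local SMTP relay (the addresses below are hypothetical):

import time
import smtplib
import psutil
from email.message import EmailMessage

THRESHOLD_C = 85
SMTP_HOST, FROM, TO = 'localhost', 'monitor@example.com', 'you@example.com'

def cpu_temp():
    temps = psutil.sensors_temperatures()
    # take the hottest reading across all sensors; adjust the key for your hardware
    return max(t.current for readings in temps.values() for t in readings)

def send_alert(temp):
    msg = EmailMessage()
    msg['Subject'] = 'CPU temperature critical: {:.1f} C'.format(temp)
    msg['From'], msg['To'] = FROM, TO
    msg.set_content('Check the box.')
    with smtplib.SMTP(SMTP_HOST) as server:
        server.send_message(msg)

alerted = False
while True:
    temp = cpu_temp()
    print(time.strftime('%F %T'), temp)  # poor man's rolling log
    if temp > THRESHOLD_C and not alerted:
        send_alert(temp)
        alerted = True  # send the email only once while the metric stays critical
    elif temp <= THRESHOLD_C:
        alerted = False
    time.sleep(60)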

Setting up a GPU box on Ubuntu 18.04 from scratch

https://github.com/snakers4/gpu-box-setup/


#deep_learning
#linux
4th 2019 DS / ML digest

Highlights of the week
- OpenAI controversy;
- BERT pre-training;
- Using transformers for conversational challenges;

https://spark-in.me/post/2019_ds_ml_digest_04

#digest
#data_science
#deep_learning
New variation of Adam?

- [Website](https://www.luolc.com/publications/adabound/);
- [Code](https://github.com/Luolc/AdaBound) (usage sketch below);
- Eliminates the generalization gap between adaptive methods and SGD;
- TL;DR: a faster and better optimizer with highly robust performance;
- Dynamic bounds on learning rates, inspired by gradient clipping;
- Not very sensitive to hyperparameters, especially compared with SGD(M);
- Tested on MNIST, CIFAR, Penn Treebank - no serious datasets;
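
Usage, per the repo README (assumes the adabound pip package):

import torch
from torch import nn
import adabound

model = nn.Linear(10, 2)
# final_lr is the SGD-like rate the dynamic bounds converge to
optimizer = adabound.AdaBound(model.parameters(), lr=1e-3, final_lr=0.1)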

#deep_learning
We tried it

... yeah, we tried it on a real task.
Plain Adam is a bit better.