Spark in me
Lost like tears in rain. DS, ML, a bit of philosophy and math. No bs or ads.
2019 DS / ML digest 16

Link

Highlights of the week(s):

- Finally a 10x smaller Transformer - but it starts to look like an RNN-inspired model;
- Deep fake detection dataset;
- Paraphrase dataset;
- Deconstructing the convolution - in essence, you just need a shift operator + a 1x1 mix convolution. Such things are not mainstream yet;

#digest
#deep_learning
PyTorch 1.2 update

So, I updated my DS / ML environment to use PyTorch 1.2 =)

(0) Basic DS / ML layer - FROM aveysov/ml_images:layer-0-pt12 / dockerfile;
(1) DS / ML libraries - FROM aveysov/ml_images:layer-1 / dockerfile;

Your final dockerfile may look something like this, just pulling from any of those layers.
Note that when building this, you will need to pass your UID as a variable, e.g.:

docker build --build-arg NB_UID=1000 -t av_final_layer -f Layer_final.dockerfile .
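For reference, here is a minimal sketch of such a final layer - not the exact file from the repo; the package list and the user name are purely illustrative:

# Layer_final.dockerfile - a minimal sketch, not the actual file
FROM aveysov/ml_images:layer-1

# passed via --build-arg NB_UID=1000 at build time
ARG NB_UID=1000

# recreate a non-root user with your host UID so that mounted volumes
# keep sane permissions (the user name here is hypothetical)
RUN useradd --create-home --uid $NB_UID ds-user

# project-specific extras go on top of the shared layers
RUN pip install --no-cache-dir tensorboardX

USER ds-user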
So AllenNLP ... does not use its own library when doing such PR stunts?
Oh ... I wonder why

But PyTorch + TPU seems to be one love, if it works =)
Forwarded from DL in NLP (nlpcontroller_bot)
PyTorch XLA is slowly coming to life. Soon you will be able to train language models in a few hours on 🔥+TPU


At last, language model pretraining with PyTorch+TPUs https://github.com/allenai/tpu_pretrain

Our code trains PyTorch BERT/RoBERTa on TPUs, which is faster and cheaper than GPUs.

Also check the repo for a more detailed comparison between TPUs/GPUs on PyTorch/Tensorflow.


https://twitter.com/i_beltagy/status/1181320500783415296
Spark in me
PyTorch 1.2 update So, I updated my DS / ML environment to use PyTorch 1.2 =) (0) Basic DS / ML layer - FROM aveysov/ml_images:layer-0-pt12 / dockerfile; (1) DS / ML libraries - FROM aveysov/ml_images:layer-1 / dockerfile; Your final dockerfile may look…
Also this is amazing for teams of 5-10 people tops.
If you work on the same hardware ... you just inherit from the same base image ... and conserve traffic / space / build time =)

No sudo / venv required =)
SSH hopping from Windows?

Yeah, finally I found a recipe.
I was always one flag away from it.
Rsync at your pleasure and have your key only on your laptop!

You just need to:

(0) Use PuTTY / PuTTY-gen to create your SSH key (note that the PuTTY format and the OpenSSH format are different!)
(1) (or just import your OpenSSH key into PuTTY-gen if you already have one)
(2) Add your private key to pageant (the PuTTY authentication agent)
(3) Do not forget to check the Allow agent forwarding flag in PuTTY under Connection => SSH => Auth
(4) SSH into your server
(5) Open /etc/ssh/ssh_config
(6) Uncomment the ForwardAgent line and set it to yes

Now you can rsync as much as you want.
Also inside of tmux.
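
A typical session then looks like this (host names and paths are made up):

# pageant holds the key on your laptop; PuTTY forwards the agent to server-1
# from server-1 you can then hop / rsync to server-2 without copying keys
rsync -avP ./experiments/ user@server-2:/home/user/experiments/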

#linux
Assembling a NAS for less than US$50

So ... you want a NAS for emergency backups that only you know about.

You have spent money on GPUs, drives, devboxes and you would like to get your NAS for free.
Ofc, if you are a clever boi, you will have RAID arrays on your devbox, offsite backups, etc etc

If you feel particularly S&M, you might even use AWS Glacier or smth similar.
Or you may buy a NAS (decent devices start from US$500-1000 w/o drives! rip-off!)


But you see, all of the above variants cost money.
Or you cannot easily throw such a backup out of the window / encryption creates overhead.

So you can create a NAS on the cheap in style:
- Buy any raspberry pi (US$5 - US$20, you can find one used even cheaper);
- Buy a USB HDD enclosure (US$5 - US$40);
- Find some garbage drives for free;
- Copy your files, put the HDD under your pillow;
- Profit;

Added bonuses:
- If you live in a police state - you can use RAID 0 (just hide the second drive) => in essence this is like having a perfect one-time pad encryption;
- Easily use RAID 1 or RAID 10 with 4 drives;
- Very high portability, if you use 2.5'' drives;
- Mdadm arrays are easily transferable between boxes (see the sketch below);
- Cyberpunk vibe;
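
A minimal mdadm sketch (device names are illustrative - check yours with lsblk first):

# create a RAID 1 mirror from two drives and put a filesystem on it
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/nas

# on another box the array is simply re-assembled from the same drives
sudo mdadm --assemble --scan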

#hardware
Also, people recommend Backblaze for ultra-cheap "fast" backup storage.
It also supports rsync.
Current state of TF vs PyTorch


This review is kind of nothing new, but if you are new to the field, here is my TLDR:

- In research PyTorch >> TF, except for obscure cases;
- For small teams PyTorch >> TF;
- For fast product delivery and iteration PyTorch >> TF;
- For corporations TF > PyTorch;
- For edge computing / mobile, for now TF > PyTorch;
- For production in general, soon PyTorch ~ TF;

- The research community will not likely switch from PyTorch to TF 2.0;
- The remaining question now is whether the large corporations / captive audiences will switch from TF 1.0 to TF 2.0 or to PyTorch;

#deep_learning
Playing with name NER

Premise

So, I needed to pick out street names that are actually a name + surname. Do not ask me why.
Yeah, I know that maybe 70% of streets are more or less human names.
So you need 99% precision and at least 30-40% recall.
Or you can imagine a creepy Soviet-era name like Трактор ("Tractor").

So, today making a NER parser is easy - take your favourite framework of choice (plain PyTorch, ofc).
Even use FastText or something even simpler. Add data and boom, you have it.

The pain

But not so fast. Turns out there is a reason why cutting out proper names is a pain.
For Russian there is the natasha library, but since it works on YARGY, it makes some assumptions about the data structure.
I.e. names should be capitalized, come in pairs (name - surname), etc etc - I did not look at their rules under the hood, but this is how I would write them.

So this would probably be parsed as a name - Иван Иванов
But this probably would not - ванечка иванофф

Is it bad?
Ofc no, it just assumes some stuff that may not hold for your dataset.
And yeah it works for streets just fine.
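
For illustration, a sketch with the old (pre-1.0) natasha API - treat the exact imports and attributes as assumptions and check the docs for your version:

# a sketch assuming the pre-1.0 rule-based natasha API
from natasha import NamesExtractor

extractor = NamesExtractor()

for text in ['Иван Иванов', 'ванечка иванофф']:
    matches = extractor(text)
    # the capitalized name + surname pair matches, the lowercase variant usually does not
    print(text, [match.fact for match in matches])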

Also recognizing a proper name without context does not really work. And good luck finding (or generating) corpora for that.

Why deep learning may not work

So I downloaded some free databases with names (VK.com respects your security lol - the 100M leaked database is available, but useless, too much noise) and surnames.
Got 700k surnames of different origins and around 100-200k male and female names. Used just random words from CC + wiki + taiga for hard negative mining.
Got 92% accuracy on 4 classes (just word, female name, male name, surname) with some naive models (see the sketch below).

... and it works ... kind of. If you give it 10M unique word forms, it can distinguish name-like stuff in 90% of cases.
But for addresses it is more or less useless, and the heuristics from natasha work much better.
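
For the curious, a "naive model" here can be as simple as a char n-gram classifier; a minimal sketch (the features, the model, and the toy data below are illustrative, not our exact setup):

# 4-class word classifier on char n-grams: word / female name / male name / surname
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

words  = ['иван', 'мария', 'иванов', 'трактор']   # your mined vocab goes here
labels = ['male', 'female', 'surname', 'word']    # and its labels

clf = make_pipeline(
    TfidfVectorizer(analyzer='char_wb', ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(words, labels)
print(clf.predict(['петров']))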

The moral

- A tool that works in one case may be 90% useless in another;
- Heuristics have very high precision and low recall, and they are fragile;
- Neural networks are superior, but you should match your artificially created dataset to the real data (it may take a month to pull off properly);
- In any case, properly cracking either approach takes time, though both heuristics and NNs are very fast to prototype. Sometimes 3 plain rules give you 100% precision with 10% recall, and sometimes generating a fake dataset that matches your domain is a no-brainer. It depends.

#data_science
#nlp
#deep_learning
Tensorboard logging in PyTorch

Looked at this module some time ago. Looks like it has matured now.
The coolest current feature - param logging.

Just compare these two docs:
- TensorboardX
- torch.utils

Looks like PyTorch just imported the most popular library, copying its docs and APIs.
Nice!
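
A quick taste of the API (a minimal sketch; the values are made up, and add_hparams - the param logging mentioned above - may require a recent PyTorch version):

# minimal torch.utils.tensorboard usage
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter('runs/demo')
for step in range(100):
    writer.add_scalar('train/loss', 1.0 / (step + 1), step)

# the hyperparameter logging mentioned above
writer.add_hparams({'lr': 1e-3, 'bs': 32}, {'hparam/accuracy': 0.92})
writer.close()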

#deep_learning
2019 DS / ML digest 17

Link

Highlights of the week(s):

- BERT miniaturization?
- PyTorch domination?
- MobileNet from Facebook - FBNet

#digest
#deep_learning
The current state of "DIY" ML hardware

(i.e. that you can actually assemble and maintain and use in a small team)

Wanted to write a large post, but decided to just write a TLDR.
In case you need a super-computer / cluster / devbox with 4 - 16 GPUs.

The bad
- Nvidia DGX and similar - 3-5x overpriced (sic!)
- Cloud providers (Amazon) - 2-3x overpriced

The ugly
- Supermicro GPU server solutions. This server hardware is a bit overpriced, but its biggest problem is old processor sockets
- Custom shop-built machines (with water cooling) - very nice, but (except for the water) you just pay US$5-10-15k for work you can do yourself in one day
- Dual-CPU professional-grade motherboards - very cool, but powerful Intel Xeons are also very overpriced

The good
- A powerful AMD processor with 12-32 cores + a top-tier motherboard. This will support 4 GPUs at x8 speed and has a 10 Gb/s Ethernet port
- Just add more servers with a 10 Gb/s connection and probably later connect them into a ring ... cheap / powerful / easy to maintain

More democratization soon?

Probably the following technologies will untie our hands

- Single-slot GPUs - Zotac clearly thought about it; maybe it will become mainstream in the professional market
- PCIE 4.0 => enough speed for ML even on cheaper motherboards
- New motherboards for AMD processors => maybe more PCIE slots will become the norm
- Intel Optane persistent memory => slow and expensive now; maybe RAM / SSD will merge (imagine having 2 TB of cheap RAM in your box)

There was a good chat in ODS on the same topic.

#hardware
Open STT v1.0 release

Finally we released open STT v1.0 =)

Highlights

- 20,000 hours of annotated data
- 2 new large and diverse domains
- 12k speakers (to be released soon)
- Overall quality improvement
- See below posts and releases for more details

+---------------+------+--------+------+
| Domain        | Utts | Hours  | GB   |
+---------------+------+--------+------+
| Radio         | 8.3M | 11,996 | 1367 |
+---------------+------+--------+------+
| Public Speech | 1.7M | 2,709  | 301  |
+---------------+------+--------+------+
| Youtube       | 2.6M | 2,117  | 346  |
+---------------+------+--------+------+
| Books         | 1.3M | 1,632  | 180  |
+---------------+------+--------+------+
| Calls         | 695K | 819    | 91   |
+---------------+------+--------+------+
| Other         | 1.9M | 835    | 95   |
+---------------+------+--------+------+
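
If you just want to peek into the data, the release ships plain CSV manifests; a sketch, assuming a headerless (wav_path, text_path, duration) layout - the file name and columns here are assumptions, check the repo README:

# a sketch - the manifest file name and column layout are assumptions
import pandas as pd

df = pd.read_csv('manifest.csv', header=None,
                 names=['wav_path', 'text_path', 'duration'])
print(df['duration'].sum() / 3600, 'hours')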


How can I help?
- Share our dataset
- Share / publish your dataset - the more domains the better
- Upvote on habr
- Upvote on TDS (when released)
- We have an Open Collective page for donations

Links
- Open STT https://github.com/snakers4/open_stt
- Release https://github.com/snakers4/open_stt/releases
- Open TTS https://github.com/snakers4/open_tts
- Habr https://habr.com/ru/post/474462/
- Towards Data Science (coming soon)
- Blog https://spark-in.me/post/open-stt-release-v10
- Open Collective https://opencollective.com/open_stt
Forwarded from Just links
I reimplemented this code in pure PyTorch, and it reproduces their results. It also gives decent results on ImageNet in only 5 epochs.
https://github.com/Randl/Ranger_Mish_reimplementation
Speech processing internship

We are looking for passionate people, most likely 2nd- or 3rd-year students, who would like to grow in speech processing and in ML in general.

We can start working as early as yesterday - there are no restrictions whatsoever.
We plan to meet in person 1-2 times a week.

You are not really expected to already know how to do things; rather, we hope to find people:

- With a working knowledge of English (reading papers, writing papers, keeping up correspondence and logs; speaking is not required)
- Smart, driven, with ideas of their own
- With a minimal mathematical background
- We will teach you everything you need. Or you will teach us something

A plus:

- Python + PyTorch
- Any other DL frameworks are nice, but we will not be using them
- You have skimmed the seminal papers in some area (CV, NLP, ASR) and have an opinion of your own (other than "stack more Transformers")
- You have built a project in any field where it is clear that you were the one carrying it
- You want to learn to solve, or already know how to solve, real-world problems
- You have done, or want to do, something deliberate in ML
- You know your way around the Linux ecosystem and are not afraid of the console

What we do not need

- Doing something just for the sake of doing it
- "Working at our company is a great honor" (tm)
- Coding at a whiteboard, inverting binary trees, multiplying large numbers in your head - insert anything of the kind

Why you would want this

- If you have ideas in this field, we can give you a platform to implement them properly
- When one more full-time position opens up, guess who will be on the short list
- We actually push ML forward / solve applied problems, rather than just churning out papers / milking money / chasing hype
- Publications, real-world problems, a very fast accumulation of experience
- We will be ready to shell out some candy wrappers (read: money) for the brightest candidates

Contacts

- Send your achievements in any format; the only request is to be concise
- Message me directly on Telegram - @snakers41

Links to our work and publications

- https://github.com/snakers4/open_stt
- https://medium.com/@aveysov
- https://spark-in.me/