Spark in me
Lost like tears in rain. DS, ML, a bit of philosophy and math. No bs or ads.
New in our Open STT dataset

https://github.com/snakers4/open_stt#updates

- An mp3 version of the dataset;
- A torrent for mp3 dataset;
- A torrent for the original wav dataset;
- Benchmarks on the public dataset / files with "poor" annotation marked;

#deep_learning
#data_science
#dataset
New version of our open STT dataset - 0.5, now in beta

Please share and repost!

https://github.com/snakers4/open_stt/releases/tag/v0.5-beta

What is new?
- A new domain - radio (1000+ new hours);
- A larger YouTube dataset with 1000+ additional hours;
- A small (300 hours) YouTube dataset downloaded in maximum quality;
- Manually annotated ground truth validation sets for YouTube / books / public calls;
- Now we will start to focus on actually cleaning and distilling the dataset. We have published a second list of "bad" data;

I'm back from vacation)

#deep_learning
#data_science
#dataset
Trying to migrate to JupyterLab from Jupyter Notebook?

Some time ago I noticed that the Jupyter notebook extensions project was more or less frozen; JupyterLab is obviously trying to shift community attention towards npm / nodejs plugins.

So, as I did 6-12 months ago, I tried to migrate once again.

This time Lab is more mature:
- Now at version >1;
- Now they have a built-in extension manager;
- They have some of the most necessary extensions (e.g. git, toc, Google Drive, etc.);
- The UI got polished a bit, but window-in-a-window still produces a bit of mental friction. Only the most popular file formats are supported. The text editor inherited the best features, but it is still a bit rudimentary;
- Full screen width by default;
- Some useful things (like code folding) can now be turned on in the settings JSON file;
- Using extensions is a bit of a chore in edge cases (e.g. occasional user permission problems / you have to re-build the app each time you add an extension);

But I could not switch mostly for one reason - this one
- https://github.com/jupyterlab/jupyterlab/issues/2275#issuecomment-498323475

If you have a Docker-based Jupyter environment, it is very easy to switch. For me, the relevant Dockerfile lines used to be:
# 5.6 because otherwise I have a bug with installing extensions
RUN conda install notebook=5.6

RUN pip install git+https://github.com/ipython-contrib/jupyter_contrib_nbextensions && \
jupyter contrib nbextension install --user

CMD jupyter notebook --port=8888 --ip=0.0.0.0 --no-browser

And it just became:
RUN conda install -c conda-forge jupyterlab

CMD jupyter lab --port=8888 --ip=0.0.0.0 --no-browser


#data_science
Full IDE in a browser?

Almost)

You all know all the pros and cons of:
- IDEs (PyCharm);
- Advanced text editors (Atom, Sublime Text);
- Interactive environments (notebook / lab, Atom + Hydrogen);

I personally dislike local IDEs - not because connecting to a remote machine / kernel / interpreter is hard to set up (it is easy), but because constantly keeping track of what is synced and what is not is just pain. Also, when your daily driver machine runs Windows, using the Linux subsystem all the time with Windows paths is pain too. (I also dislike bulky interfaces, but that is just a habit and it depends.)

But what if I told you there is a third option? =)
If you work as a team on a remote machine / set of machines?

TLDR - you can run a modern web "IDE" (something between Atom and a real IDE - less bulky, but with fewer features) in a browser.
Now you can just run it with one command.

Pros:
- It is open source (though shipped as a part of some enterprise packages like Eclipse Che);
- Pre-built images available;
- It is extensible - new modules get released; you can build it yourself or just find a ready build;
- It has extensive linting and a Python language server (just a standard one, though);
- It has full text search ... kind of;
- Go-to-definition works in your code;
- Docstrings and auto-complete work for your own modules and the standard library (but not for your installed packages);

Looks cool af!
If they ship a build with a remote python kernel, then it will be a perfect option for teams!

I hope it will not follow the path taken by another crowd-favourite web editor of the same kind (which was purchased by Amazon).

Links
- Website;
- Pre-built apps for python;
- Language server they are using;

#data_science
An ideal remote IDE?

Joking?
No - it looks like VSCode recently got its remote development extensions (only available in the Insiders build a couple of months ago) working just right.

I tried the Remote-SSH extension and it looks quite polished. No more syncing your large data folders and loading all Python dependencies locally for hours.

The problem? It took me an hour just to open an ssh session properly under Windows (permissions and Linux folder path substitution are hell on Windows). Once I got it open, it worked like a charm.

So for now (this is personal preference) the best tools are, in my opinion:
- Notebooks - for exploration and testing;
- VSCode - for the codebase;
- Atom - for local scripts;

#data_science
Managing your DS / ML environment neatly and in style

If you have a sophisticated environment that you need for DS / ML / DL, then using a set of Docker images may be a good idea.
You can also tap into a vast community of amazing and well-maintained Dockerhub repositories (e.g. nvidia, pytorch).

But what if you have to do this for several people? And use it with a proper IDE via ssh?
Well-known features of Docker include copy-on-write and user "forwarding". If you approach this naively, each user will store their own images, which take up quite some space.
You also have to make your ssh daemon work inside the container as a second service.

So I solved these "challenges" and created 2 public layers so far:
- Basic DS / ML layer - FROM aveysov/ml_images:layer-0 - from dockerfile;
- DS / ML libraries - FROM aveysov/ml_images:layer-0 - from dockerfile;

Your final dockerfile may look something like this just pulling from any of those layers.
Note that when building this, you will need to pass your UID as a variable, e.g.:

docker build --build-arg NB_UID=1000 -t av_final_layer -f Layer_final.dockerfile .


When launched, this starts a notebook with extensions. You can just exec into the container itself to run scripts, or use an ssh daemon inside (do not forget to add your ssh key and run service ssh start).


#deep_learning
#data_science
Extreme NLP network miniaturization

Tried some plain RNNs on a custom in-the-wild NER task.
The dataset is huge - literally infinite, but manually generated to mimic in-the-wild data.

I used EmbeddingBag + 1M n-grams (an optimal cut-off). Yeah, on NER / classification it is a handy trick that makes your pipeline totally misprint / error / OOV agnostic. FAIR themselves arrived at the same idea recently. Very cool! Just add PyTorch and you are golden.
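
To make the trick concrete, below is a rough sketch of what such an embedding front-end can look like. It is just an illustration - the class / function names are made up, and I hash character n-grams into a fixed number of buckets here; the actual pipeline may instead build its 1M n-gram vocabulary with a frequency cut-off.

import zlib
import torch
import torch.nn as nn

N_BUCKETS = 1_000_000  # ~1M n-grams, as in the post
EMB_DIM = 50           # 300 / 100 / 50 / even 5 all work

def token_to_ngram_ids(token, n=3):
    # hash character n-grams into buckets; misprints / OOV words
    # simply land in slightly different buckets instead of breaking the pipeline
    padded = '<' + token + '>'
    grams = [padded[i:i + n] for i in range(len(padded) - n + 1)]
    return [zlib.crc32(g.encode()) % N_BUCKETS for g in grams]

class NgramBagEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # EmbeddingBag averages all n-gram vectors of a token into one token embedding
        self.bag = nn.EmbeddingBag(N_BUCKETS, EMB_DIM, mode='mean')

    def forward(self, ngram_ids, offsets):
        return self.bag(ngram_ids, offsets)

# usage: flatten the per-token n-gram ids and mark token boundaries with offsets
tokens = ['privet', 'mir']
ids = [token_to_ngram_ids(t) for t in tokens]
offsets = torch.tensor([0] + [len(x) for x in ids[:-1]]).cumsum(0)
flat = torch.tensor([i for x in ids for i in x])
vectors = NgramBagEncoder()(flat, offsets)  # (2, EMB_DIM), goes into the RNN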

What is interesting:
- The model works with embedding sizes of 300, 100, 50 and even 5! 5 is dangerously close to OHE, but doing OHE on 1M n-grams kind of does not make sense;
- The model works with various hidden sizes;
- Naturally, all of the models run very fast on CPU, and the smallest model is also very light in terms of its weights;
- The only real difference is convergence time. It kind of scales with the log of the model size, i.e. the model with embedding size 5 takes 5-7x more time to converge than the one with 50. I wonder what happens if I use an embedding size of 1?;

As an added bonus, you can just store such a miniature model in git without LFS.
What was that about training transformers on US$250k worth of compute credits, you say?)

#nlp
#data_science
#deep_learning
My foray into the STT Dark Forest

My tongue-in-cheek article on ML in general, and on how to make your STT model train 3-4x faster with 4-5x fewer weights at the same quality:

https://spark-in.me/post/stt-dark-forest

#data_science
#deep_learning
#stt
Poor man's ensembling techniques

So you want to improve your model's performance a bit.
Ensembling helps. But as is ... it is useful only in Kaggle competitions, where people stack over9000 networks trained on 100 MB of data.

But for real-life usage / production there exist ensembling techniques that do not require a significant increase in computation cost (!).
None of this is mainstream yet, but it may work on your dataset!
Especially if your task is easy and the dataset is small.

- SWA (proven to work, usually applied as a last stage when training a model - see the sketch right after this list);
- Lookahead optimizer (kind of new, not thoroughly tested);
- Multi-Sample Dropout (seems like a cheap ensemble, should work for classification - a sketch at the end of the post);
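
A minimal sketch of how SWA looks in practice - in recent PyTorch versions the utilities ship as torch.optim.swa_utils (at the time of this post the same thing lived in the torchcontrib package); the toy model / loader below are only there to make the example self-contained:

import torch
import torch.nn as nn
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

model = nn.Linear(10, 2)   # stand-in for your real network
loader = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(20)]
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
criterion = nn.CrossEntropyLoss()

swa_model = AveragedModel(model)        # keeps a running average of the weights
swa_scheduler = SWALR(optimizer, swa_lr=5e-3)
swa_start = 15                          # start averaging towards the end of training

for epoch in range(20):
    for x, y in loader:
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()
    if epoch >= swa_start:
        swa_model.update_parameters(model)
        swa_scheduler.step()

update_bn(loader, swa_model)            # recompute BatchNorm stats for the averaged weights
# evaluate / deploy swa_model instead of model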

Applicability will vary with your task.
Plain vanilla classification can use all of these, s2s networks probably only partially.
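
And Multi-Sample Dropout is literally a few lines - run several dropout masks over the same features and average the resulting logits. A sketch below; the head class and its sizes are made up for illustration:

import torch
import torch.nn as nn

class MultiSampleDropoutHead(nn.Module):
    # a "cheap ensemble": the encoder runs once, only the dropout + linear head is sampled K times
    def __init__(self, in_features, n_classes, n_samples=5, p=0.5):
        super().__init__()
        self.dropouts = nn.ModuleList(nn.Dropout(p) for _ in range(n_samples))
        self.fc = nn.Linear(in_features, n_classes)

    def forward(self, features):
        # average the logits over the dropout samples;
        # at train time you can also average the per-sample losses instead
        return torch.stack([self.fc(d(features)) for d in self.dropouts]).mean(dim=0)

head = MultiSampleDropoutHead(in_features=256, n_classes=10)
logits = head(torch.randn(32, 256))     # (32, 10)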

#data_science
#deep_learning