Data Science by ODS.ai 🦜
First Telegram Data Science channel. Covering all technical and popular stuff about anything related to Data Science: AI, Big Data, Machine Learning, Statistics, general Math, and the applications of the former. To reach the editors, contact: @haarrp
​​Voila turns Jupyter notebooks into standalone web applications.

Unlike the usual HTML-converted notebooks, each user connecting to the Voila Tornado application gets a dedicated Jupyter kernel, which can execute callbacks in response to changes in Jupyter interactive widgets.

- By default, Voila disallows execute requests from the front-end, preventing execution of arbitrary code.
- By default, Voila runs with the strip_source option, which strips out the input cells from the rendered notebook.
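
To make the widget-callback mechanism concrete, here is a minimal sketch of a notebook cell using the ipywidgets API; saved as app.ipynb (a hypothetical name), it can be served with `voila app.ipynb`:

```python
# A minimal ipywidgets cell that Voila can serve as a standalone app.
# Each visitor gets a dedicated kernel, so the callback runs per user.
import ipywidgets as widgets
from IPython.display import display

slider = widgets.IntSlider(description="x", min=0, max=10)
label = widgets.Label()

def on_change(change):
    # executed in the dedicated kernel whenever the slider moves
    label.value = f"x squared = {change['new'] ** 2}"

slider.observe(on_change, names="value")
display(slider, label)
```

With strip_source enabled, visitors see only the rendered slider and label, not this code.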

https://github.com/voila-dashboards/voila

#python
The Reformer – Pushing the limits of language modeling
Patrick von Platen @ huggingface

The Reformer model was introduced by Kitaev, Kaiser et al. ('20) – it is one of the most memory-efficient transformer models for long sequence modeling as of today.

The goal of this blog post is to give an in-depth understanding of each of the following four Reformer features:
[0] reformer self-attention layer – how to efficiently implement self-attention without being restricted to a local context?
[1] chunked feed forward layers – how to get a better time-memory trade-off for large feed forward layers?
[2] reversible residual layers – how to drastically reduce memory consumption in training by a smart residual architecture?
[3] axial positional encodings – how to make positional encodings usable for extremely large input sequences?

This long blog post will give you a deeper understanding of how the model works, so that you can set its configuration correctly.
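
As a minimal sketch of how these four features surface in the Hugging Face implementation (parameter names follow transformers' ReformerConfig; the values here are illustrative):

```python
from transformers import ReformerConfig, ReformerModel

config = ReformerConfig(
    attn_layers=["lsh", "local", "lsh", "local"],  # [0] LSH self-attention mixed with local attention
    chunk_size_feed_forward=64,     # [1] process feed-forward layers in chunks of 64 positions
    # [2] reversible residual layers are built into the Reformer architecture itself
    axial_pos_shape=(64, 64),       # [3] factorize 64 * 64 = 4096 positions into two small embeddings
    axial_pos_embds_dim=(64, 192),  # the axial embedding dims must sum to hidden_size
    hidden_size=256,
)
model = ReformerModel(config)       # randomly initialized; use from_pretrained for trained weights
```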


blog post: https://huggingface.co/blog/reformer

#nlp #reformer #huggingface #transformers
​​Do Adversarially Robust ImageNet Models Transfer Better?

TLDR - Yes.

The authors set out to check whether adversarially trained networks perform better on transfer learning tasks, despite having worse accuracy on the dataset they were trained on (ImageNet, of course). And it turns out to be true.

They tested this idea with a frozen pre-trained feature extractor, training only a linear classifier on top, which outperformed its classically trained counterpart. They also tested a fully unfrozen, fine-tuned network, which likewise outperformed on transfer learning tasks.
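
A minimal PyTorch sketch of the fixed-feature setting (the robust checkpoint path is hypothetical; actual weights are in the authors' repo linked below):

```python
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50()
# backbone.load_state_dict(torch.load("robust_resnet50.pt"))  # hypothetical robust checkpoint

for p in backbone.parameters():
    p.requires_grad = False  # freeze the pre-trained feature extractor

backbone.fc = nn.Linear(backbone.fc.in_features, 10)  # new trainable head, e.g. 10 downstream classes

optimizer = torch.optim.SGD(backbone.fc.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()  # train only the linear head on the transfer dataset
```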

For the pre-training task they use the adversarial robustness prior, which refers to a model’s invariance to small (often imperceptible) perturbations of its inputs.

They also show that this approach gives the networks better feature representation properties.

They ran many experiments (14 pages of plots) and an ablation study.


paper: https://arxiv.org/abs/2007.08489
code: https://github.com/Microsoft/robust-models-transfer

#transfer_learning #SOTA #adversarial
​​how gpt3 works. a visual thread

a short thread with cool animations showing how gpt-3 works, by jay alammar

collected twitter thread: https://threader.app/thread/1285498971960598529


#nlp #transformers #gpt3 #jayalammar
GPT3 right now
Anonymous poll results:
Overhyped: 68%
Undervalued: 32%
#GPT3 attracted lots of attention. Let’s try a new format for discussing the matter in the comments, provided by Peerboard.

To access the comments, just click the link below ⬇️⬇️⬇️, authorize with Telegram, and follow the discussion.
Applying GPT-3 to generate neural network code

Matt Shumer used GPT-3 to generate code for a machine learning model just by describing the dataset and the required output.
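
The flow presumably looks something like this sketch against OpenAI's 2020-era completion API (the prompt text and key are illustrative, not Shumer's actual prompt):

```python
import openai

openai.api_key = "YOUR_KEY"  # placeholder

prompt = (
    "Dataset: CSV with columns sepal_length, sepal_width, petal_length, "
    "petal_width, species.\n"
    "Task: write Keras code for a model that predicts species.\n\n"
    "Code:\n"
)

response = openai.Completion.create(
    engine="davinci",   # the base GPT-3 model available at the time
    prompt=prompt,
    max_tokens=256,
    temperature=0.2,    # low temperature for more deterministic code
)
print(response["choices"][0]["text"])
```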

#GPT3 #inception #codegeneration #NLU #NLP
​​Astrologers proclaimed the week of #codegeneration. Number of articles about the subject doubled.
Deep learning to translate between programming languages

#FacebookAI released TransCoder, an entirely self-supervised neural transcompiler system that is claimed to make code migration easier and more efficient.

ArXiV: https://arxiv.org/pdf/2006.03511.pdf
Github: https://github.com/facebookresearch/TransCoder/

#NLU #codegeneration #NLP
​​Funnel Activation for Visual Recognition

The authors offer a new activation function for image recognition tasks, called Funnel activation (FReLU), which extends ReLU and PReLU to a 2D activation by adding a spatial condition with negligible overhead.
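
A minimal PyTorch sketch of the idea as described in the paper, y = max(x, T(x)), where the spatial condition T(x) is a depthwise convolution followed by batch norm:

```python
import torch
import torch.nn as nn

class FReLU(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        # depthwise conv implements the per-channel spatial condition T(x)
        self.conv = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2, groups=channels, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.max(x, self.bn(self.conv(x)))  # funnel condition replaces max(x, 0)

x = torch.randn(2, 16, 32, 32)
y = FReLU(16)(x)  # output shape matches input: (2, 16, 32, 32)
```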

Extensive experiments on COCO, ImageNet, and Cityscapes show significant improvement and robustness.


Paper: https://arxiv.org/abs/2007.11824
Code: https://github.com/megvii-model/FunnelAct

#deeplearning #activationfunction #computervision #pytorch
Our friends from @loss_function_porn released their app and climbed to the top of the App Store charts!

Let’s help them hold that position by downloading the app and giving them 5⭐️.
Forwarded from Karim Iskakov - channel (Vladimir Ivashkin)
BREAKING NEWS! (sound on)

Our iOS app Avatarify is #1 in the Russian App Store, and today we are releasing it worldwide.

Animate any photo with your face in real time: a celebrity, your boss, or even a pet. Record a video and share it to amaze your friends.

The neural network runs entirely on-device in zero-shot mode. Check it out!

📱 App Store
🌐 avatarify.ai
📉 @loss_function_porn
​​Stanford updated its Stanza tool with #NER for biomedical and clinical terms

Stanza has been extended with its first domain-specific models, covering biomedical and clinical English. Their results range from approaching to significantly improving the state of the art on syntactic analysis and NER tasks.

This means that neural networks are now capable of understanding difficult texts full of specialized terms. That in turn means better search, improved knowledge extraction, a new approach to performing meta-analyses, and even research over medical arXiv publications.
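
A short sketch of running the new models (package and processor names follow Stanza's biomedical documentation, e.g. the 'mimic' clinical package with the 'i2b2' NER model):

```python
import stanza

# download and build a clinical English pipeline with biomedical NER
stanza.download("en", package="mimic", processors={"ner": "i2b2"})
nlp = stanza.Pipeline("en", package="mimic", processors={"ner": "i2b2"})

doc = nlp("The patient was given 40 mg of aspirin daily for hypertension.")
for ent in doc.entities:
    print(ent.text, ent.type)  # e.g. drug names, dosages, problems
```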

Demo: http://stanza.run/bio
ArXiV: https://arxiv.org/abs/2007.14640

#NLProc #NLU #Stanford #biolearning #medicallearning
​​Hope that someday the DL industry will evolve enough to develop tools for recognizing Russian doctors’ handwriting.
english to regex

generating a regex just by describing it and providing an example (apparently powered by gpt-3)
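
As an illustration of the premise (my own example, not the tool's actual output): from a description like "match US phone numbers such as 415-555-1234" one would expect a regex along these lines:

```python
import re

pattern = re.compile(r"\b\d{3}-\d{3}-\d{4}\b")  # three digits, dash, three, dash, four
assert pattern.search("call 415-555-1234 now")
```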


web page: https://losslesshq.com

#regex #gpt3