Community Day @ MLSS 2019
MLSS Community Day is a free one-day event for everyone interested in Machine Learning.
Speakers from premier Machine Learning institutions such as the University of Oxford, University College London, and the Max Planck Institute, as well as renowned companies, will cover the latest advances in applications for healthcare, telecommunications, NLP, finance, and quantum computing.
When & Where: August 31, Skoltech, Moscow
Link: https://mlss2019.skoltech.ru/community-day
#MLSS #MLSS2019 #Skolkovo
The Machine Learning Summer School will take place between the 26th of August and 6th of September, 2019 at Skoltech in Moscow, Russia. Join us to learn from world-renowned machine learning specialists, network with a formidable audience, and enjoy Moscow!
DeepMind's Behaviour Suite for Reinforcement Learning
DeepMind released Behaviour Suite for Reinforcement Learning, or ‘bsuite’ – a collection of carefully-designed experiments that investigate core capabilities of RL agents.
bsuite was built to do two things:
1. Offer clear, informative, and scalable experiments that capture key issues in RL
2. Study agent behaviour through performance on shared benchmarks
GitHub: https://github.com/deepmind/bsuite
Paper: https://arxiv.org/abs/1908.03568v1
Google colab: https://colab.research.google.com/drive/1rU20zJ281sZuMD1DHbsODFr1DbASL0RH
#RL #DeepMind #Bsuite
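For a feel of the API, here is a minimal random-agent loop, assuming the dm_env-style interface the repo documents ('catch/0' is one of the published environment ids):

import numpy as np
import bsuite

# Load a bsuite environment by its id (experiment name / seed index).
env = bsuite.load_from_id('catch/0')
num_actions = env.action_spec().num_values

# Run a uniformly random agent for a few episodes (dm_env-style loop).
for episode in range(10):
    timestep = env.reset()
    episode_return = 0.0
    while not timestep.last():
        action = np.random.randint(num_actions)
        timestep = env.step(action)
        episode_return += timestep.reward
    print(f'episode {episode}: return {episode_return}')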
Neural Text d̶e̶Generation with Unlikelihood Training
The paper introduces a new objective, unlikelihood training, which forces the model to assign lower probability to unlikely generations, improving the overall quality of generated text.
Link: https://arxiv.org/pdf/1908.04319.pdf
#NLU #NLP #textgeneration
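A rough PyTorch sketch of the token-level objective (my reading of the paper, not its released code): the negative candidates at step t are the previous context tokens, and the loss pushes their probability down while keeping the usual MLE term.

import torch
import torch.nn.functional as F

def unlikelihood_loss(logits, targets, alpha=1.0, eps=1e-8):
    # logits: (T, V) next-token logits; targets: (T,) gold next tokens.
    log_probs = F.log_softmax(logits, dim=-1)
    mle = F.nll_loss(log_probs, targets)          # standard likelihood term
    probs = log_probs.exp()
    T = targets.shape[0]
    ul = logits.new_zeros(())
    for t in range(1, T):
        # Negative candidates: previously seen tokens, minus the gold token.
        cands = targets[:t].unique()
        cands = cands[cands != targets[t]]
        if cands.numel():
            ul = ul - torch.log(1.0 - probs[t, cands] + eps).sum()
    return mle + alpha * ul / T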
ODS breakfast in Paris! See you this Saturday at 10:30 at Malongo Café, 50 Rue Saint-André des Arts.
🥇Parameter optimization in neural networks.
Play with three interactive visualizations and develop your intuition for optimizing model parameters.
Link: https://www.deeplearning.ai/ai-notes/optimization/
#interactive #demo #optimization #parameteroptimization #novice #entrylevel #beginner #goldcontent #nn #neuralnetwork
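If you'd rather poke at the math directly, here is a tiny sketch of one of the update rules such visualizations typically cover, gradient descent with momentum, on a toy quadratic (the setup is mine, not the page's):

import numpy as np

A = np.diag([1.0, 10.0])            # ill-conditioned quadratic bowl
grad = lambda w: A @ w              # gradient of f(w) = 0.5 * w.T @ A @ w

def momentum_descent(w, lr=0.05, beta=0.9, steps=100):
    v = np.zeros_like(w)
    for _ in range(steps):
        v = beta * v + grad(w)      # accumulate a velocity term
        w = w - lr * v              # step along the smoothed direction
    return w

print(momentum_descent(np.array([10.0, 1.0])))   # converges near the origin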
The HSIC Bottleneck: Deep Learning without Back-Propagation
An alternative to conventional backpropagation that offers a number of distinct advantages.
Link: https://arxiv.org/abs/1908.01580
#nn #backpropagation #DL #theory
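The criterion itself is easy to compute; here is a numpy sketch of the standard biased empirical HSIC estimator with Gaussian kernels (the estimator the bottleneck builds on; the paper's training code is not reproduced here):

import numpy as np

def gaussian_kernel(X, sigma=1.0):
    # Pairwise squared distances -> RBF kernel matrix.
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    return np.exp(-sq / (2 * sigma**2))

def hsic(X, Y, sigma_x=1.0, sigma_y=1.0):
    # Biased empirical estimator: tr(K H L H) / (n - 1)^2.
    n = X.shape[0]
    K, L = gaussian_kernel(X, sigma_x), gaussian_kernel(Y, sigma_y)
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1)**2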
If you happen to be in Moscow in the next couple of weeks, we invite you to take part in Moscow Data Science Major on August 31st at the Mail.ru Group office!
It’s like OpenDataScience’s Data Fest, but a mini version (mini in duration, not in content density). It’s like the 1st of October, but on the 31st of August.
MDSM gathers researchers, engineers, and developers around Data Science and Machine Learning:
- Top speakers and talks, zero bullshit
- Lots of new insights, skills, and know-how
- Best networking with the community
Link: https://datafest.ru/major/
Registration link: https://corp.mail.ru/ru/press/events/mdsm_aug19/
GPT-2: 6-Month Follow-Up
#OpenAI released the 774 million parameter #GPT2 language model.
Link: https://openai.com/blog/gpt-2-6-month-follow-up/
#NLU #NLP
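To try the released checkpoint, one option is the Hugging Face transformers library, where (an assumption on naming) the 774M model is exposed as 'gpt2-large':

from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')   # 774M checkpoint
model = GPT2LMHeadModel.from_pretrained('gpt2-large')

input_ids = tokenizer.encode('Machine learning is', return_tensors='pt')
output = model.generate(input_ids, max_length=40, do_sample=True, top_k=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))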
Applying machine learning optimization methods to the production of a quantum gas
#DeepMind developed machine learning techniques to optimise the production of a Bose-Einstein condensate, a quantum-mechanical state of matter that can be used to test predictions of theories of many-body physics.
ArXiV: https://arxiv.org/abs/1908.08495
#Physics #DL #BEC
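This is not DeepMind's actual pipeline, but the flavour of the approach can be sketched as black-box optimization over experimental controls; here with scikit-optimize and a purely hypothetical objective standing in for the measured atom number:

from skopt import gp_minimize

def negative_atom_number(params):
    # Hypothetical placeholder: run one production cycle with these
    # evaporation-ramp settings and return minus the measured atom number.
    ramp_duration, final_power = params
    return -(ramp_duration * (1.0 - final_power))   # toy surrogate

result = gp_minimize(
    negative_atom_number,
    dimensions=[(0.5, 5.0),    # ramp duration in seconds (made-up bounds)
                (0.0, 1.0)],   # final trap power fraction (made-up bounds)
    n_calls=30,
    random_state=0,
)
print(result.x, -result.fun)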
Testing Robustness Against Unforeseen Adversaries
OpenAI developed a method to assess whether a neural network classifier can reliably defend against adversarial attacks not seen during training. The method yields a new metric, #UAR (Unforeseen Attack Robustness), which evaluates the robustness of a single model against an unanticipated attack, and highlights the need to measure performance across a more diverse range of unforeseen attacks.
Link: https://openai.com/blog/testing-robustness/
ArXiV: https://arxiv.org/abs/1908.08016
Code: https://github.com/ddkang/advex-uar
#GAN #Adversarial #OpenAI
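As I read the paper, UAR normalizes a model's average accuracy across calibrated distortion sizes by that of a model adversarially trained against the same attack; a one-liner sketch:

def uar(acc_eval, acc_adv_trained):
    # acc_eval: accuracies of the evaluated model at calibrated sizes eps_1..eps_K.
    # acc_adv_trained: accuracies of a model adversarially trained against
    # this very attack, at the same sizes (the paper's normalizer).
    return 100.0 * sum(acc_eval) / sum(acc_adv_trained)

# e.g. uar([0.62, 0.48, 0.31], [0.81, 0.70, 0.55]) ~ 68.4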
OpenGPT-2: We Replicated GPT-2 Because You Can Too
An article about the replication of the famous #GPT2. The project trained a 1.5B-parameter «OpenGPT-2» model on OpenWebTextCorpus, a 38GB dataset similar to the original, and showed results comparable to the original GPT-2 on various benchmarks.
Link: https://medium.com/@vanya_cohen/opengpt-2-we-replicated-gpt-2-because-you-can-too-45e34e6d36dc
Google colab: https://colab.research.google.com/drive/1esbpDOorf7DQJV8GXWON24c-EQrSKOit
OpenWebCorpus: https://skylion007.github.io/OpenWebTextCorpus/
#NLU #NLP
Open-sourcing hyperparameter autotuning for fastText
Facebook AI researchers are releasing a new feature for the fastText library which provides hyper-parameter autotuning for more efficient text classifiers.
Link: https://ai.facebook.com/blog/fasttext-blog-post-open-source-in-brief/
#FacebookAI #Facebook #FastText #NLU #NLP
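Usage is a couple of keyword arguments on the standard training call (per the library's documented Python API):

import fasttext

model = fasttext.train_supervised(
    input='train.txt',                    # lines labeled with __label__ prefixes
    autotuneValidationFile='valid.txt',   # held-out set the search optimizes on
    autotuneDuration=600,                 # search time budget, in seconds
)
print(model.test('valid.txt'))            # (N, precision@1, recall@1)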
The infinite gift
An interesting object where the side of the nth box is 1/√n. As n→+∞, the gift has infinite surface area and length but finite volume!
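A quick p-series check of the claim:

\begin{aligned}
V &= \sum_{n=1}^{\infty} \Big(\tfrac{1}{\sqrt{n}}\Big)^{3} = \sum_{n=1}^{\infty} n^{-3/2} = \zeta(3/2) < \infty
&& \text{(converges: } p = 3/2 > 1\text{)} \\
S &\ge \sum_{n=1}^{\infty} \Big(\tfrac{1}{\sqrt{n}}\Big)^{2} = \sum_{n=1}^{\infty} \frac{1}{n} = \infty
&& \text{(harmonic series diverges)} \\
L &= \sum_{n=1}^{\infty} \frac{1}{\sqrt{n}} = \infty
&& \text{(diverges: } p = 1/2 < 1\text{)}
\end{aligned}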
Exploring Weight Agnostic Neural Networks
An exploration of agents that can already perform well in their environment without needing to learn weight parameters.
Link: https://ai.googleblog.com
Code: https://github.com/google/brain-tokyo-workshop/tree/master/WANNRelease
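A toy illustration of the core idea (not the released code): score a fixed topology by its average performance when every connection shares a single weight value, sampled from a small set like the one the paper searches over:

import numpy as np

WEIGHT_VALUES = (-2.0, -1.0, -0.5, 0.5, 1.0, 2.0)   # candidate shared weights

def forward(x, w):
    # Hypothetical fixed topology: inputs -> tanh hidden layer -> tanh output,
    # with every connection set to the same shared weight w.
    h = np.tanh(w * x)
    return np.tanh(w * h.sum())

def score_architecture(xs, ys):
    # WANN-style objective: mean performance across shared-weight samples.
    errors = [np.mean([(forward(x, w) - y) ** 2 for x, y in zip(xs, ys)])
              for w in WEIGHT_VALUES]
    return -np.mean(errors)   # higher is better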
Neural net to enhance old or low-quality video to HD (TS -> HD).
It is surprising that no one has released a model for this before. People have lots of old video recordings that will definitely benefit from quality enhancement. And we can only hope movie pirates won’t use it to enhance stolen copies.
Link: https://news.developer.nvidia.com/researchers-at-videogorillas-use-ai-to-remaster-archived-content-to-4k-resolution-and-above/
More demos: https://videogorillas.com/bigfoot/
#SuperResolution #CV #DL
ODS breakfast in Paris! See you this Saturday at 10:30 at Malongo Café, 50 Rue Saint-André des Arts.
Forwarded from Just links
http://rescience.github.io/
Tl;dr:
Reproducibility is important. Publishing a paper whose results can't be used by any reader is more or less useless. Yet while everybody talks about reproducibility, nobody accepts papers that reproduce existing research for publication, to say nothing of the publishing of non-reproducible research (not enough details, no open dataset, etc.), which is sometimes OK, but usually is not.
Moreover, what people usually mean when they say "reproducibility" (the possibility of repeating the exact experiment described in a paper and achieving the same results) is actually "replicability" (the possibility of conducting similar experiments with similar results).
This journal aims to be an open-access, open-source platform for publishing replicated computational research (which is easier to both replicate and verify).
ReScience C: Reproducible Science is good. Replicated Science is better.
🚨😭STOP talking bad about different Data SPECIALTIES😭🚨
Data Science is EXCITING
Frequentist Statistics is RELIABLE
Software Engineering is CRUCIAL
Bayesian Statistics
Machine Learning is POWERFUL
New fastMRI challenge from #FacebookAI team
Submission deadline: September 19
Announcement link: https://ai.facebook.com/blog/fastmri-challenge/
Competition link: https://fastmri.org/
#Competition #NotOnlyKaggle #Facebook #CV #DL
A nice article on unofficial Jupyter notebook extensions.
Warning: there is a checkbox saying «disable configuration for nbextensions without explicit compatibility (they may break your notebook environment, but can be useful to show for nbextension development)». So it is better to test the extensions in a separate environment.
The correct way to install extension support is:
pip install jupyter_contrib_nbextensions && jupyter contrib nbextension install --user
Link: https://towardsdatascience.com/setting-up-a-data-science-environment-using-windows-subsystem-for-linux-wsl-c4b390803dd
#jupyter #tipsandtricks