🎥 Interpretable Machine Learning Models and Presenting Them to Business – Irina Goloshchapova
👁 3 views ⏳ 1495 sec.
Interpretable ML track – Black stage, May 11, 2019
Slides from Data Fest 6 – https://drive.google.com/open?id=1LOmOoh1WLqmhSqTKjvdOQx-YOTyBgG-i
Deep Learning with Python: Develop Deep Learning Models on Theano and TensorFlow Using Keras
#book #keras #DL
📝 5_6133943928459624650.pdf - 💾5 709 397
Deep Learning From Scratch - Seth Weidman | ODSC East 2019
🔗 Deep Learning From Scratch - Seth Weidman | ODSC East 2019
There are many good tutorials on neural networks out there. While some of them dive deep into the code and show how to implement things, and others explain what is going on via diagrams or math, very few bring all the concepts needed to understand neural networks together, showing diagrams, code, and math side by side. In this video, you will get a clear, step-by-step explanation of neural networks, implementing them from scratch in NumPy, while showing both diagrams that explain how they work and the math…
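In the spirit of the talk (this is my sketch, not Weidman's actual code), here is a one-hidden-layer network trained from scratch in NumPy, with the forward and backward passes written out by hand:

```python
# From-scratch sketch in the spirit of the talk (not the speaker's code):
# one hidden layer, ReLU, mean squared error, plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                # 64 samples, 3 features
y = X @ np.array([[1.0], [-2.0], [0.5]])    # toy linear target

W1, b1 = rng.normal(size=(3, 8)) * 0.5, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.5, np.zeros(1)
lr = 0.01

for step in range(500):
    # forward pass
    h = np.maximum(0, X @ W1 + b1)          # ReLU hidden layer
    pred = h @ W2 + b2
    loss = np.mean((pred - y) ** 2)

    # backward pass: the chain rule, written out by hand
    d_pred = 2 * (pred - y) / len(X)
    dW2, db2 = h.T @ d_pred, d_pred.sum(axis=0)
    d_h = (d_pred @ W2.T) * (h > 0)         # gradient through ReLU
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final MSE: {loss:.4f}")
```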
Sparse Networks from Scratch: Faster Training without Losing Performance
🔗 Sparse Networks from Scratch: Faster Training without Losing Performance
We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving performance levels competitive with dense networks. We accomplish this by developing sparse momentum, an algorithm which uses exponentially smoothed gradients (momentum) to identify layers and weights which reduce the error efficiently. Sparse momentum redistributes pruned weights across layers according to the mean momentum magnitude of each layer. Within a layer, sparse momentum grows weights according to the momentum magnitude of zero-valued weights. We demonstrate state-of-the-art sparse performance on MNIST, CIFAR-10, and ImageNet, decreasing the mean error by a relative 8%, 15%, and 6% compared to other sparse algorithms. Furthermore, we show that our algorithm can reliably find the equivalent of winning lottery tickets from random initialization: Our algorithm finds sparse configurations with 20% or fewer weights which perform as w…
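As I read the abstract (the authors' code may differ), one prune/regrow step of sparse momentum looks roughly like this: drop the smallest-magnitude active weights, then re-enable the zero-valued weights whose momentum is largest.

```python
# Toy single-layer sketch of the prune/regrow step described in the
# abstract (my reading, not the authors' implementation).
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(32, 32))
momentum = rng.normal(size=W.shape)      # exponentially smoothed gradients
mask = rng.random(W.shape) < 0.2         # keep the layer 20% dense
W *= mask
k = 50                                   # weights to prune, then regrow

# prune: zero out the k smallest-magnitude active weights
active = np.flatnonzero(mask)
drop = active[np.argsort(np.abs(W.ravel()[active]))[:k]]
mask.ravel()[drop] = False

# regrow: re-enable the k zeroed weights with the largest momentum
inactive = np.flatnonzero(~mask)
grow = inactive[np.argsort(-np.abs(momentum.ravel()[inactive]))[:k]]
mask.ravel()[grow] = True

W *= mask
print(f"density after the step: {mask.mean():.2%}")
```

Note that the paper also redistributes pruned weights across layers by mean momentum magnitude; the sketch covers a single layer only.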
What’s a Hall of Fame Quarterback Worth?
🔗 What’s a Hall of Fame Quarterback Worth?
Using With or Without analysis to quantify the true on-field value of a Hall of Fame caliber QB
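"With or Without" analysis boils down to comparing the team's results in games the player did and did not play; a minimal pandas sketch on made-up data (column names are mine):

```python
# With-or-Without analysis on made-up data: compare the team's win rate
# in games with and without the quarterback.
import pandas as pd

games = pd.DataFrame({
    "qb_played": [True, True, True, False, False, True, False, True],
    "won":       [True, True, False, False, False, True, True, True],
})

win_rate = games.groupby("qb_played")["won"].mean()
print(win_rate)
print(f"estimated QB impact: {win_rate.loc[True] - win_rate.loc[False]:+.2f}")
```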
Hypothesis testing for dummies - Towards Data Science
🔗 Hypothesis testing for dummies - Towards Data Science
Why even worry about hypothesis testing?
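For a concrete taste (my example, not the article's): a two-sample t-test in SciPy, rejecting the null hypothesis of equal means when p < 0.05.

```python
# Two-sample t-test: do the control and treated groups share a mean?
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=10.0, scale=2.0, size=50)
treated = rng.normal(loc=11.0, scale=2.0, size=50)

t_stat, p_value = stats.ttest_ind(control, treated)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("reject the null" if p_value < 0.05 else "fail to reject the null")
```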
Making of: Trails of Wind - Towards Data Science
🔗 Making of: Trails of Wind - Towards Data Science
How we created a map of the global architecture of airport runways, which turned out to be a wind map.
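The data trick behind the map, as I understand it (not the authors' code): a runway identifier already encodes its magnetic heading in tens of degrees, so orientation can be read straight off the name.

```python
# Runway identifiers encode magnetic heading in tens of degrees:
# "09L" points to ~90 degrees, "27" to ~270, "36" to due north.
def runway_bearing(ident: str) -> int:
    digits = "".join(ch for ch in ident if ch.isdigit())
    return int(digits) * 10

for ident in ["09L", "27", "36"]:
    print(ident, "->", runway_bearing(ident), "degrees")
```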
🎥 Getting Started with TensorFlow 2.0 | SciPy 2019 Tutorial | Josh Gordon
👁 1 view ⏳ 7799 sec.
A hands-on introduction to training neural networks with TensorFlow 2.0. In this four-hour tutorial, we will briefly introduce TensorFlow, then dive into writing code. We will complete four exercises (classifying images, classifying text, training a GAN, etc.). This tutorial is targeted at folks new to TensorFlow and/or Deep Learning. Our goal is to help you get started efficiently and effectively, so you can continue learning on your own.
Tutorial information may be found at https://www.scipy2019.scipy.
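A minimal TensorFlow 2.0 sketch in the spirit of the first exercise, image classification (the tutorial's actual code may differ):

```python
# TF 2.0 quickstart-style image classifier on MNIST.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2)
model.evaluate(x_test, y_test)
```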
🎥 Use Pre-Built Containers to Build Custom Deep Learning Models Quickly - AWS Online Tech Talks
👁 1 view ⏳ 2066 sec.
AWS Deep Learning Containers (AWS DL Containers) are Docker images pre-installed with the TensorFlow and Apache MXNet deep learning frameworks, making it easy to deploy custom machine learning environments quickly by letting you skip the complicated process of building and optimizing your environments from scratch. In this tech talk, learn how to deploy AWS DL Containers on Amazon SageMaker, Amazon Elastic Container Service for Kubernetes (Amazon EKS), self-managed Kubernetes, Amazon Elastic Container Ser…
facebookresearch/pythia
🔗 facebookresearch/pythia
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR). (The repository has since been renamed facebookresearch/mmf.)
Trending deep learning Github repositories
Our Telegram channel: tglink.me/ai_machinelearning_big_data
Here's a list of the top 100 trending deep learning GitHub repositories, sorted by the number of stars gained on a specific day.
https://github.com/mbadry1/Trending-Deep-Learning
🔗 mbadry1/Trending-Deep-Learning
Top 100 trending deep learning repositories sorted by the number of stars gained on a specific day. - mbadry1/Trending-Deep-Learning
Evaluation of Retinal Image Quality Assessment Networks in Different Color-spaces
Authors: Huazhu Fu, Boyang Wang, Jianbing Shen, Shanshan Cui, Yanwu Xu, Jiang Liu, Ling Shao
Abstract: …by its large-scale size, multi-level grading, and multi-modality. Then, we analyze the influences on RIQA of different color-spaces, and propose a simple yet efficient deep network, named Multiple Color-space Fusion Network (MCF-Net)
https://arxiv.org/abs/1907.05345
🔗 Evaluation of Retinal Image Quality Assessment Networks in Different Color-spaces
Retinal image quality assessment (RIQA) is essential for controlling the quality of retinal imaging and guaranteeing the reliability of diagnoses by ophthalmologists or automated analysis systems. Existing RIQA methods focus on the RGB color-space and are developed based on small datasets with binary quality labels (i.e., 'Accept' and 'Reject'). In this paper, we first re-annotate an Eye-Quality (EyeQ) dataset with 28,792 retinal images from the EyePACS dataset, based on a three-level quality grading system (i.e., 'Good', 'Usable' and 'Reject') for evaluating RIQA methods. Our RIQA dataset is characterized by its large-scale size, multi-level grading, and multi-modality. Then, we analyze the influences on RIQA of different color-spaces, and propose a simple yet efficient deep network, named Multiple Color-space Fusion Network (MCF-Net), which integrates the different color-space representations at both a feature-level and prediction-level to predict image quality grades. Experiments on our…
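A hedged sketch of the fusion idea as the abstract describes it (not the authors' code): represent the same image in several color spaces and fuse the per-space predictions; the `predict_quality` stand-in below is hypothetical.

```python
# Prediction-level fusion over color spaces, per the abstract's idea
# (my sketch, not MCF-Net itself).
import numpy as np
from skimage import color

def predict_quality(img: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for a per-color-space CNN: uniform scores
    # over the grades ('Good', 'Usable', 'Reject').
    return np.ones(3) / 3

img_rgb = np.random.rand(224, 224, 3)     # stand-in for a fundus image
views = [img_rgb, color.rgb2hsv(img_rgb), color.rgb2lab(img_rgb)]

scores = np.mean([predict_quality(v) for v in views], axis=0)
print(dict(zip(["Good", "Usable", "Reject"], scores.round(3))))
```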
10 of the Best Tensorflow Courses to Learn Machine Learning from Coursera and Udemy
🔗 10 of the Best Tensorflow Courses to Learn Machine Learning from Coursera and Udemy
Learn Machine learning with Tensorflow from the best online courses and certifications from Coursera, Udemy, and Pluralsight.
Massively Multilingual NMT in the wild: 100+ languages, 1B+ parameters, trained using 25B+ examples. Check out our new paper for an in-depth analysis:
https://arxiv.org/abs/1907.05019
#GoogleAI
🎥 Reusable Execution in Production Using Papermill (Google Cloud AI Huddle)
👁 1 view ⏳ 3611 sec.
In this episode of Google Cloud AI Huddle, Matthew Seal, Senior Software Engineer at Netflix, goes over the pros and cons of using Jupyter Notebook and Papermill to make notebook execution reusable in production.
Deep Learning VMs → https://goo.gle/2ZNmmrG
Google AI Huddle is an open, collaborative, developer-first AI forum driven by Google AI expertise. It's a monthly in-person event where Googlers engage with developers to speak on ML topics and deliver workshops, tutorials, and hands-on labs. AI Huddl…
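Papermill itself boils down to one call that runs a notebook with injected parameters and saves the executed copy; a minimal sketch (file names are hypothetical, and the input notebook needs a cell tagged "parameters"):

```python
# Run a notebook as a parameterized, reproducible job.
import papermill as pm

pm.execute_notebook(
    "train_model.ipynb",                 # input notebook (hypothetical)
    "runs/train_model_lr001.ipynb",      # executed copy kept as a record
    parameters={"learning_rate": 0.001, "epochs": 10},
)
```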
Playing Go using convolutional neural networks
http://arxiv.org/abs/1907.04658
🔗 Playing Go without Game Tree Search Using Convolutional Neural Networks
The game of Go has a long history in East Asian countries, but the field of Computer Go has yet to catch up to humans until the past couple of years. While the rules of Go are simple, the strategy and combinatorics of the game are immensely complex. Even within the past couple of years, new programs that rely on neural networks to evaluate board positions still explore many orders of magnitude more board positions per second than a professional can. We attempt to mimic human intuition in the game by creating a convolutional neural policy network which, without any sort of tree search, should play the game at or above the level of most humans. We introduce three structures and training methods that aim to create a strong Go player: non-rectangular convolutions, which will better learn the shapes on the board, supervised learning, training on a data set of 53,000 professional games, and reinforcement learning, training on games played between different versions of the network. Our network has already surpassed…
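For flavor, a minimal convolutional policy network over board feature planes (shapes are illustrative; this is not the paper's architecture):

```python
# Tiny Go policy network: feature planes in, a distribution over the
# 19x19 move grid out, with no tree search anywhere.
import tensorflow as tf

policy_net = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu",
                           input_shape=(19, 19, 4)),  # 4 feature planes
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, 1, padding="same"),     # per-point logit
    tf.keras.layers.Flatten(),                        # 361 move logits
    tf.keras.layers.Softmax(),
])
policy_net.summary()
```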
Learning pandas [2019] – Michael Heydt, Artem Gruzdev
Our Telegram channel: tglink.me/ai_machinelearning_big_data
pandas is a popular Python package for data analysis and processing. It offers efficient, fast, high-performance data structures that substantially simplify the work.
This book introduces the extensive toolset the pandas library offers, starting with an overview of loading data from remote sources, numerical and statistical analysis, indexing, and aggregation, and finishing with data visualization and analysis of financial data.
The second edition adds new appendices on data preprocessing and hyperparameter tuning, and on working with dates, strings, and warnings. Random forests, CatBoost gradient boosting, and logistic regression are covered in detail.
The book is intended for all Python developers interested in data processing.
📝 2019 Изучаем pandas. Майкл Хейдт, Артем Груздев.pdf - 💾21 816 454
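A minimal taste of the workflow the book walks through, on toy data of my own:

```python
# Build a DataFrame, use hierarchical indexing, and aggregate.
import pandas as pd

df = pd.DataFrame({
    "city":  ["Moscow", "Moscow", "Kazan", "Kazan"],
    "month": ["Jan", "Feb", "Jan", "Feb"],
    "sales": [120, 135, 80, 95],
})

print(df.set_index(["city", "month"]).loc["Moscow"])     # indexing
print(df.groupby("city")["sales"].agg(["mean", "sum"]))  # aggregation
```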
Detecting hidden and composite orders in layered models via machine learning
https://arxiv.org/abs/1907.05417
🔗 Detecting hidden and composite orders in layered models via machine learning
We use machine learning to study layered spin models where composite order parameters may emerge as a consequence of the interlayer coupling. We focus on the layered Ising and Ashkin-Teller models, determining their phase diagram via the application of a machine learning algorithm to the Monte Carlo data. Remarkably, our technique is able to correctly characterize all the system phases also in the case of hidden order parameters, i.e. order parameters whose expression in terms of the microscopic configurations would require additional preprocessing of the data fed to the algorithm. Within the approach we introduce, owing to the construction of convolutional neural networks, naturally suitable for layered image-like data with an arbitrary number of layers, no preprocessing of the Monte Carlo data is needed, also with regard to its spatial structure. The physical meaning of our results is discussed and compared with analytical data, where available. Yet, the method can be used without any a…
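A hedged illustration of the setup (my sketch, not the paper's model): a CNN that takes one input channel per layer of the spin system and classifies the phase, with no hand-built order parameter.

```python
# Phase classification of layered spin configurations: one image
# channel per layer, toy data only.
import numpy as np
import tensorflow as tf

L, n_layers, n_phases = 16, 2, 3
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu",
                           input_shape=(L, L, n_layers)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(n_phases, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# toy "Monte Carlo configurations": random ±1 spins, random labels
spins = np.random.choice([-1.0, 1.0], size=(32, L, L, n_layers))
labels = np.random.randint(0, n_phases, size=32)
model.fit(spins, labels, epochs=1, verbose=0)
```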