BoTorch is a library for Bayesian Optimization built on PyTorch.
Facebook open-sources Ax and BoTorch to simplify AI model optimization
github: https://github.com/pytorch/botorch
description: https://techcrunch.com/2019/05/01/facebook-open-sources-ax-and-botorch-to-simplify-ai-model-optimization/
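A minimal sketch of one BoTorch iteration, fitting a GP surrogate and maximizing Expected Improvement; the toy objective, bounds, and optimizer settings below are illustrative assumptions, not from the announcement:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_model
from botorch.acquisition import ExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy maximization problem on the unit square (illustrative only)
train_x = torch.rand(10, 2, dtype=torch.double)
train_y = -((train_x - 0.5) ** 2).sum(dim=-1, keepdim=True)

# Fit a GP surrogate to the observations
gp = SingleTaskGP(train_x, train_y)
mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
fit_gpytorch_model(mll)

# Choose the next point to evaluate by maximizing Expected Improvement
ei = ExpectedImprovement(gp, best_f=train_y.max())
bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
candidate, _ = optimize_acqf(ei, bounds=bounds, q=1,
                             num_restarts=5, raw_samples=32)
```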
Announcing Google-Landmarks-v2: An Improved Dataset for Landmark Recognition & Retrieval
http://ai.googleblog.com/2019/05/announcing-google-landmarks-v2-improved.html
Posted by Bingyi Cao and Tobias Weyand, Software Engineers, Google AI. Last year we released Google-Landmarks, the largest worldwide landmark recognition...
Best Practices for Preparing and Augmenting Image Data for Convolutional Neural Networks
https://machinelearningmastery.com/best-practices-for-preparing-and-augmenting-image-data-for-convolutional-neural-networks/
It is challenging to know how best to prepare image data when training a convolutional neural network. This involves both scaling the pixel values and applying image data augmentation techniques during both the training and evaluation of the model...
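As a rough illustration of that advice in Keras (the library the tutorial targets), pixel scaling can be combined with train-time-only augmentation via ImageDataGenerator; the specific parameter values here are assumptions, not the article's recommendations:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Train-time generator: pixel scaling plus label-preserving augmentation
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,       # normalize pixel values to [0, 1]
    width_shift_range=0.1,   # small random horizontal shifts
    height_shift_range=0.1,  # small random vertical shifts
    horizontal_flip=True,    # only where flipping preserves the label
    zoom_range=0.1,
)
# Evaluation-time generator: scaling only, no augmentation
test_datagen = ImageDataGenerator(rescale=1.0 / 255)
```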
Reinforcement Learning, Fast and Slow
https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(19)30061-0
Billion-scale semi-supervised learning for image classification
Weakly-supervised pre-training + semi-supervised pre-training + distillation + transfer/fine-tuning = 81.2% top-1 accuracy on ImageNet with ResNet-50 and 84.8% with ResNeXt-101-32x16.
article: https://arxiv.org/abs/1905.00546
announcement: https://www.facebook.com/i.zeki.yalniz/posts/10157311492509962
This paper presents a study of semi-supervised learning with large convolutional networks. We propose a pipeline, based on a teacher/student paradigm, that leverages a large collection of...
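A hedged sketch of the pipeline's selection step: the teacher ranks unlabeled images by confidence and the top-K per class become pseudo-labeled pre-training data for the student. Function names and the loader interface are illustrative, not the paper's code:

```python
import torch

def select_pseudo_labeled(teacher, unlabeled_loader, k_per_class, num_classes):
    """Rank unlabeled images by teacher confidence and keep the top-K per
    class (illustrative re-implementation of the selection step)."""
    scored = [[] for _ in range(num_classes)]
    teacher.eval()
    with torch.no_grad():
        for images, indices in unlabeled_loader:  # loader yields (batch, ids)
            probs = teacher(images).softmax(dim=-1)
            conf, cls = probs.max(dim=-1)
            for c, s, i in zip(cls.tolist(), conf.tolist(), indices.tolist()):
                scored[c].append((s, i))
    # Top-K most confident image ids per class, used to pre-train the student
    return {c: [i for _, i in sorted(v, reverse=True)[:k_per_class]]
            for c, v in enumerate(scored)}
```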
How to Visualize Filters and Feature Maps in Convolutional Neural Networks
https://machinelearningmastery.com/how-to-visualize-filters-and-feature-maps-in-convolutional-neural-networks/
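In Keras (which the tutorial uses), both visualizations reduce to reading layer weights and truncating the model at an intermediate layer; a minimal sketch, with the layer index and dummy input as assumptions:

```python
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.models import Model

model = VGG16()

# Filters: read the weights of an early conv layer directly
filters, biases = model.layers[1].get_weights()
print(filters.shape)  # (3, 3, 3, 64) for VGG16's first conv block

# Feature maps: truncate the model at that layer and run an image through it
feature_model = Model(inputs=model.inputs, outputs=model.layers[1].output)
feature_maps = feature_model.predict(np.random.rand(1, 224, 224, 3))
```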
A huge open dataset of Russian speech
https://habr.com/ru/post/450760/
Speech recognition specialists have long lacked a large open corpus of spoken Russian, so only large companies could afford to work on this task, but they did not...
Forwarded from Artificial Intelligence
Google at ICLR 2019
http://ai.googleblog.com/2019/05/google-at-iclr-2019.html
How to Develop a Convolutional Neural Network From Scratch for MNIST Handwritten Digit Classification
https://machinelearningmastery.com/blog/
Artificial intelligence illustrated by a simple game. Part 2
https://habr.com/ru/post/451070/
Teaching a neural network to play Snake and writing a server for competitions
This time the game chosen is Snake. A neural-network library was created in Go. A training principle dependent on the "depth" of memory was found. A server for games was written...
Forwarded from Artificial Intelligence
Announcing Open Images V5 and the ICCV 2019 Open Images Challenge
http://ai.googleblog.com/2019/05/announcing-open-images-v5-and-iccv-2019.html
Posted by Vittorio Ferrari, Research Scientist, Machine Perception. In 2016, we introduced Open Images, a collaborative release of ~9 million images...
Digging Into Self-Supervised Monocular Depth Estimation
Article: https://arxiv.org/abs/1806.01260
GitHub: https://github.com/nianticlabs/monodepth2
Per-pixel ground-truth depth data is challenging to acquire at scale. To overcome this limitation, self-supervised learning has emerged as a promising alternative for training models to perform...
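The training signal in this line of work is a photometric error between the target frame and a source frame warped through the predicted depth and pose. A simplified sketch of the usual SSIM+L1 mix follows; monodepth2 additionally takes a per-pixel minimum over source frames and auto-masks static pixels, and the constants here are commonly used defaults, assumed rather than quoted:

```python
import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """SSIM-based distance over 3x3 windows (simplified)."""
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return ((1 - num / den) / 2).clamp(0, 1)

def photometric_loss(target, reprojected, alpha=0.85):
    """Weighted SSIM + L1 error between the target frame and a source
    frame reprojected via predicted depth and pose (both B,C,H,W)."""
    l1 = (target - reprojected).abs().mean(1, keepdim=True)
    return alpha * ssim(target, reprojected).mean(1, keepdim=True) + (1 - alpha) * l1
```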
How to Develop a Deep Convolutional Neural Network From Scratch for Fashion MNIST Clothing Classification
https://machinelearningmastery.com/how-to-develop-a-cnn-from-scratch-for-fashion-mnist-clothing-classification/
ICLR2019 Best Paper
"The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks"
Jonathan Frankle, Michael Carbin: https://arxiv.org/abs/1803.03635
"The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks"
Jonathan Frankle, Michael Carbin: https://arxiv.org/abs/1803.03635
arXiv.org
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without...
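A minimal sketch of one round of global magnitude pruning, the operation the paper iterates; in the paper the surviving weights are then rewound to their original initialization and retrained, which this toy version omits:

```python
import torch

def global_magnitude_prune(model, fraction):
    """Zero out the smallest-magnitude `fraction` of all weights globally.
    Sketch only: the paper prunes iteratively, rewinds survivors to their
    initial values, and retrains to find the 'winning ticket'."""
    all_weights = torch.cat([p.detach().abs().flatten()
                             for p in model.parameters()])
    k = max(1, int(fraction * all_weights.numel()))
    threshold = all_weights.kthvalue(k).values
    masks = {}
    with torch.no_grad():
        for name, p in model.named_parameters():
            masks[name] = (p.abs() > threshold).float()
            p.mul_(masks[name])
    return masks  # reapply after each optimizer step to keep weights pruned
```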
Introducing TensorFlow Graphics: Computer Graphics Meets Deep Learning
GitHub: https://github.com/tensorflow/graphics
Article: https://medium.com/tensorflow/introducing-tensorflow-graphics-computer-graphics-meets-deep-learning-c8e3877b7668
TensorFlow Graphics: Differentiable Graphics Layers for TensorFlow
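A tiny sketch of the "differentiable layers" idea using one of the library's transformation ops (assuming `pip install tensorflow-graphics`; the values are illustrative):

```python
import tensorflow as tf
from tensorflow_graphics.geometry.transformation import rotation_matrix_3d

# Build a 3D rotation from Euler angles; because the op is differentiable,
# the angles could just as well be trainable variables optimized end-to-end
angles = tf.constant([[0.0, 0.0, 1.57]])  # radians, batch of one
rotation = rotation_matrix_3d.from_euler(angles)
print(rotation.shape)  # (1, 3, 3)
```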
PyTorch implementation of the Leap Meta-Learner
Article: https://arxiv.org/abs/1812.01054
GitHub: https://github.com/amzn/metalearn-leap
Transferring Knowledge across Learning Processes
In complex transfer learning scenarios new tasks might not be tightly linked to previous tasks. Approaches that transfer information contained only in the final parameters of a source model will...
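Roughly, Leap meta-learns an initialization by pulling it along the gradient-descent trajectories of training tasks, so later tasks face shorter optimization paths. The sketch below is a crude stand-in, closer to a Reptile-style update than to the paper's exact pull-forward objective, and none of it reflects the amzn/metalearn-leap API:

```python
import torch

def leap_like_step(init_params, task_losses, inner_steps=5,
                   inner_lr=0.01, outer_lr=0.1):
    """Run SGD on each task from a shared init, then move the init toward
    the trajectory endpoints. Crude stand-in for Leap's objective of
    minimizing accumulated path length across tasks."""
    endpoints = [torch.zeros_like(p) for p in init_params]
    for loss_fn in task_losses:
        params = [p.clone().requires_grad_(True) for p in init_params]
        for _ in range(inner_steps):
            grads = torch.autograd.grad(loss_fn(params), params)
            params = [(p - inner_lr * g).detach().requires_grad_(True)
                      for p, g in zip(params, grads)]
        for acc, p_end in zip(endpoints, params):
            acc += p_end.detach()
    n = len(task_losses)
    with torch.no_grad():
        for p0, acc in zip(init_params, endpoints):
            p0 += outer_lr * (acc / n - p0)  # step toward mean task endpoint
    return init_params
```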
How to Develop a Convolutional Neural Network From Scratch for CIFAR-10 Photo Classification
https://machinelearningmastery.com/how-to-develop-a-cnn-from-scratch-for-cifar-10-photo-classification/
Unified Language Model Pre-training for Natural Language Understanding and Generation
Article: https://arxiv.org/abs/1905.03197v1
This paper presents a new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks. The model is pre-trained using three types...
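The three pre-training objectives share one Transformer and differ only in the self-attention mask. An illustrative construction of those masks, where 1 means "may attend" and the shapes and mode names are assumptions for the sketch:

```python
import torch

def unilm_attention_mask(src_len, tgt_len, mode):
    """Illustrative self-attention masks for UniLM's three LM objectives.
    Rows are queries, columns are keys; 1 = may attend."""
    n = src_len + tgt_len
    if mode == "bidirectional":    # BERT-style: every token sees every token
        return torch.ones(n, n)
    if mode == "unidirectional":   # GPT-style: left-to-right only
        return torch.tril(torch.ones(n, n))
    if mode == "seq2seq":          # source is bidirectional, target is causal
        mask = torch.zeros(n, n)
        mask[:, :src_len] = 1      # all positions may attend to the source
        mask[src_len:, src_len:] = torch.tril(torch.ones(tgt_len, tgt_len))
        return mask
    raise ValueError(mode)
```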