🔗 Pre-trained Word Embeddings or Embedding Layer: A Dilemma
A comparison between pre-trained word embeddings and embedding layers on the performance of semantic NLP tasks
Towards Data Science
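For illustration, a minimal sketch of the two choices the article weighs (my code, not the article's). The array `embedding_matrix` is a hypothetical stand-in for real pre-trained vectors such as GloVe:

```python
# Hedged sketch: the same Keras Embedding layer used two ways --
# initialized from pre-trained vectors vs. trained from scratch.
import numpy as np
from tensorflow.keras.layers import Embedding
from tensorflow.keras.initializers import Constant

vocab_size, dim = 10000, 300
embedding_matrix = np.random.rand(vocab_size, dim)  # stand-in for real GloVe rows

# Option 1: pre-trained vectors, frozen during training
pretrained = Embedding(vocab_size, dim,
                       embeddings_initializer=Constant(embedding_matrix),
                       trainable=False)

# Option 2: an embedding layer learned end-to-end with the task
learned = Embedding(vocab_size, dim)  # randomly initialized, trainable
```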
🎥 Reality Lab Lecture: Andrew Rabinovich
The Reality Lab Lectures - Tuesday, April 23, 2019
TALK TITLE: Multi Task Learning for Computer Vision
SPEAKER: Andrew Rabinovich (Director of Deep Learning, Head of AI / Magic Leap)
TALK ABSTRACT: Deep multitask networks, in which one neural network produces multiple predictive outputs, are more scalable and often better regularized than their single-task counterparts. Such advantages can potentially lead to gains in both speed and performance, but multitask networks are also difficult to train without f...
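As an illustration of the pattern the abstract describes (my sketch, not code from the talk): one shared trunk producing multiple predictive outputs via separate heads.

```python
# Hedged sketch of a deep multitask network: one shared trunk, two task heads.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # shared features
        self.head_depth = nn.Linear(64, 1)    # e.g. a regression task
        self.head_class = nn.Linear(64, 10)   # e.g. a classification task

    def forward(self, x):
        h = self.trunk(x)
        return self.head_depth(h), self.head_class(h)

net = MultiTaskNet()
depth_pred, class_logits = net(torch.randn(4, 128))
# Training typically sums per-task losses; balancing them is the hard part
# (cf. GradNorm, which Rabinovich co-authored).
```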
🎥 Machine Learning Tutorial Chap 5 | Part-3 L2 Regularization in Machine Learning | GreyAtom
Welcome to #DataScienceFridays. Rohit Ghosh, a deep learning scientist and instructor at GreyAtom, takes us through polynomial regression in machine learning in a simple introductory series.
Regularization is a way to avoid overfitting by penalizing high-valued regression coefficients. In simple terms, it reduces parameters and shrinks (simplifies) the model. This more streamlined, more parsimonious model will likely perform better at predictions. Regularization adds penalties to more compl...
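A quick illustration of the L2 idea on polynomial regression (my sketch, not the tutorial's code): ridge regression shrinks the coefficients of an over-flexible model toward zero.

```python
# Hedged sketch of L2 regularization: ridge regression minimizes
# ||y - Xw||^2 + alpha * ||w||^2, so large coefficients are penalized.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(30, 1))
y = np.sin(3 * x).ravel() + 0.1 * rng.standard_normal(30)

X = PolynomialFeatures(degree=12).fit_transform(x)  # deliberately over-flexible
ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)                  # alpha controls the penalty

print(np.abs(ols.coef_).max(), np.abs(ridge.coef_).max())  # ridge coefs shrink
```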
🎥 JOTB19 - Getting started with Deep Reinforcement Learning by Nicolas Kuhaupt
Reinforcement Learning is a hot topic in Artificial Intelligence (AI) at the moment, with the most prominent example being AlphaGo Zero, which shifted the boundaries of what was believed to be possible with AI. In this talk, we will have a look into Reinforcement Learning and its implementation.
Reinforcement Learning is a class of algorithms which trains an agent to act optimally in an environment. The most prominent example is AlphaGo Zero, where the agent is trained to place tokens on the board of Go in order...
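For a feel of "training an agent to act optimally in an environment", a minimal tabular Q-learning sketch (my illustration, not from the talk), on a toy 5-state corridor where moving right at the far end pays reward 1:

```python
# Hedged sketch of tabular Q-learning with the standard Bellman update:
# Q(s,a) <- Q(s,a) + lr * (r + gamma * max_a' Q(s',a') - Q(s,a))
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
lr, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

for episode in range(2000):
    s = 0
    for _ in range(20):
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if (a == 1 and s == n_states - 1) else 0.0
        Q[s, a] += lr * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))  # learned policy: move right in every state
```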
Adaptive Gradient-Based Meta-Learning Methods
https://arxiv.org/abs/1906.02717
🔗 Adaptive Gradient-Based Meta-Learning Methods
We build a theoretical framework for understanding practical meta-learning methods that enables the integration of sophisticated formalizations of task-similarity with the extensive literature on online convex optimization and sequential prediction algorithms. Our approach enables the task-similarity to be learned adaptively, provides sharper transfer-risk bounds in the setting of statistical learning-to-learn, and leads to straightforward derivations of average-case regret bounds for efficient algorithms in settings where the task-environment changes dynamically or the tasks share a certain geometric structure. We use our theory to modify several popular meta-learning algorithms and improve their training and meta-test-time performance on standard problems in few-shot and federated deep learning.
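The paper is theoretical, but the family of methods it analyzes looks like this in practice. A Reptile-style update as one concrete member of that family (my sketch, not the paper's algorithm):

```python
# Hedged sketch of a gradient-based meta-learning step (Reptile-style):
# adapt a copy of the meta-parameters to one task with a few SGD steps,
# then move the meta-parameters toward the adapted ones.
import torch

def inner_sgd(meta_params, task_loss_fn, steps=5, inner_lr=0.01):
    adapted = [p.detach().clone().requires_grad_(True) for p in meta_params]
    for _ in range(steps):
        loss = task_loss_fn(adapted)
        grads = torch.autograd.grad(loss, adapted)
        adapted = [(p - inner_lr * g).detach().requires_grad_(True)
                   for p, g in zip(adapted, grads)]
    return adapted

def reptile_step(meta_params, task_loss_fn, meta_lr=0.1):
    adapted = inner_sgd(meta_params, task_loss_fn)
    with torch.no_grad():
        for p, a in zip(meta_params, adapted):
            p.add_(meta_lr * (a - p))  # interpolate toward the adapted solution

# toy usage: one "task" is regression toward a task-specific target vector
target = torch.tensor([1.0, -2.0, 0.5])
w = [torch.zeros(3)]
for _ in range(50):
    reptile_step(w, lambda ps: ((ps[0] - target) ** 2).sum())
```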
Every service whose users can create their own content (UGC, user-generated content) has to not only solve its business problems but also keep that UGC in order.
Poor or low-quality content moderation can ultimately make a service less attractive to its users, up to the point of shutting it down.
Today we will talk about the synergy between Youla and Odnoklassniki that helps us moderate listings on Youla effectively.
Synergy is a very useful thing in general, and in today's world, where technologies and trends change very quickly, it can become a lifesaver. Why spend scarce resources and time inventing what others have already invented and polished?
That is exactly what we thought when the task of moderating user-generated content (images, text, and links) loomed before us. Our users upload millions of content items to Youla every day, and without automatic processing it is simply unrealistic to moderate all this data by hand.
So we built on a ready-made moderation platform that our colleagues at Odnoklassniki had by then refined to a state of "near perfection".
https://habr.com/ru/company/youla/blog/455128/
🔗 How we moderate listings
Habr
This is not the official GPT2 implementation!
An implementation of training for GPT2 that supports both GPUs and TPUs. The dataset scripts are a bit hack-y and will probably need to be adapted to your needs.
https://github.com/ConnorJL/GPT2
🔗 ConnorJL/GPT2
An implementation of training for GPT2, supports TPUs - ConnorJL/GPT2
GitHub
🎥 Privacy in Machine Learning by Jason Mancuso (5/22/2019)
Jason Mancuso of Dropout Labs talks about the latest developments in machine learning privacy at the Cleveland R User Group. Topics include Differential Privacy, Encrypted Data, Federated Models, the tf-encrypted library, and more.
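Of the topics listed, differential privacy has the simplest self-contained illustration: the Laplace mechanism. A hedged sketch (mine, not from the talk and not tf-encrypted code):

```python
# For a counting query (sensitivity 1), adding Laplace(1/epsilon) noise to the
# true answer makes the released value epsilon-differentially private.
import numpy as np

def private_count(data, predicate, epsilon=0.5, rng=np.random.default_rng()):
    true_count = sum(predicate(x) for x in data)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)  # scale = sensitivity/epsilon
    return true_count + noise

ages = [23, 35, 41, 29, 52, 61, 38]
print(private_count(ages, lambda a: a > 40))  # noisy answer near the true count 3
```

Smaller epsilon means more noise and stronger privacy; that trade-off is the core tension in all the techniques the talk surveys.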
Advanced track 2018
🎥 Advanced track: NLP. Introduction and Word Embeddings
The first session of the "NLP: Understanding and Visualization" project in the advanced track. Taught by Ivan Provilkov (FIVT MIPT). Seminar materials: ...
🎥 Advanced track: Adversarial attacks. Introduction and ConvNets
The first session of the course on adversarial attacks, i.e., finding vulnerabilities in neural networks and defending against them. Discusses the relevance of...
🎥 Advanced track: NLP. RNN, GRU, LSTM. The Visdom framework
The second session of the "NLP: Understanding and Visualization" project. Covers Recurrent Neural Networks, Gated Recurrent...
🎥 Advanced track: Adversarial attacks. CNN architectures
The second session of the adversarial attacks course. A detailed look at the most...
🎥 Advanced track: NLP & Visualization. Final session
The last session of the NLP-and-visualization mini-course. At the start of the session, Ivan Provilkov (FIVT MIPT) reviews presentations by the project participants...
🎥 Advanced track: Adversarial attacks. Black-box attacks
In this session Sergey Chervontsev (FIVT MIPT) explains how to break a neural network when its architecture is unknown, the so-called black-box atta...
🎥 Advanced track: Adversarial Attacks. White-box attacks
The third session of the adversarial attacks course. A detailed look at white-box attacks (a minimal FGSM sketch follows after this list).
🎥 Advanced track: NLP. Text classification with CNNs
The third session of the "NLP: Understanding and Visualization" project. Covers how the task of text classification is solved...
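Tying to the white-box session above: with full access to the model and its gradients, the classic attack is FGSM, which perturbs the input along the sign of the loss gradient. A hedged sketch (my illustration, not the course code):

```python
# FGSM: x_adv = clip(x + eps * sign(dL/dx)), computed with white-box access.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss w.r.t. the true labels
    loss.backward()
    x_adv = x + eps * x.grad.sign()       # one step in the ascent direction
    return x_adv.clamp(0, 1).detach()     # keep pixels in a valid range
```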
Regularization for Neural Networks with Framingham Case Study
🔗 Regularization for Neural Networks with Framingham Case Study
L1, L2, elastic net, and group lasso regularization
Medium
Rachel Lea Ballantyne Draelos · Jun 8 · 14 min read
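Of the penalties the article lists, elastic net is the one that combines two others. A hedged sketch of adding it to a network's loss (my illustration, not the article's code):

```python
# Elastic net mixes L1 and L2 penalties over all weights:
# penalty = l1 * sum|w| + l2 * sum w^2, added to the task loss.
import torch

def elastic_net_penalty(model, l1=1e-5, l2=1e-4):
    p1 = sum(w.abs().sum() for w in model.parameters())
    p2 = sum((w ** 2).sum() for w in model.parameters())
    return l1 * p1 + l2 * p2

# usage: loss = task_loss + elastic_net_penalty(net)
```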
🔗 ReFocus: Making Out-of-Focus Microscopy Images In-Focus Again
Microscopy images are widely used for the diagnosis of various diseases such as infections and cancers. Furthermore, they facilitate basic…
Towards Data Science
#AI #ArtificialIntelligence #DeepLearning #MachineLearning
#CVPR2019 in one link. Enjoy up-to-date research papers and hot...
http://openaccess.thecvf.com/content_CVPR_2019/html/
🔗 Index of /content_CVPR_2019/html
Unsupervised Co-Learning on G-Manifolds Across Irreducible Representations
Authors: Yifeng Fan, Tingran Gao, Zhizhen Zhao
🔗 Unsupervised Co-Learning on $\mathcal{G}$-Manifolds Across Irreducible Representations
Abstract: We introduce a novel co-learning paradigm for manifolds naturally equipped with a group action, motivated by recent developments on learning a manifold from attached fibre bundle structures. We utilize a representation theoretic mechanism that canonically associates multiple independent vector bundles over a common base manifold, which provides multiple views for the geometry of the underlying manifold. The consistency across these fibre bundles provides a common base for performing unsupervised manifold co-learning through the redundancy created artificially across irreducible representations of the transformation group. We demonstrate the efficacy of the proposed algorithmic paradigm through drastically improved robust nearest neighbor search and community detection on rotation-invariant cryo-electron microscopy image analysis.
June Edition: Probability, Statistics & Machine Learning
🔗 June Edition: Probability, Statistics & Machine Learning
Everyone wants to be in the field of Data Science and Analytics as it’s challenging, fascinating as well as rewarding. You have to be…
Towards Data Science
Kernel Secrets in Machine Learning
🔗 Kernel Secrets in Machine Learning
This post is not about deep learning. But it might as well be. This is the power of kernels. They are universally applicable in any…
Towards Data Science
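A quick illustration of what a kernel buys you (my sketch, not the post's code): a kernel evaluates inner products in an implicit feature space, and the RBF kernel is the classic example.

```python
# RBF (Gaussian) kernel: k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).
# Building the Gram matrix is all a kernel method needs from the data.
import numpy as np

def rbf_kernel_matrix(X, sigma=1.0):
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise ||x-y||^2
    return np.exp(-sq_dists / (2 * sigma ** 2))

X = np.random.default_rng(0).normal(size=(5, 3))
K = rbf_kernel_matrix(X)  # 5x5 Gram matrix, symmetric positive semidefinite
```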
An Introduction to Convolutional Neural Networks
🔗 An Introduction to Convolutional Neural Networks
The full course in Russian can be found at this link. The original course in English is available at this link. New lectures are scheduled every 2-3 da...
Habr
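For readers who want the one-line version of what a convolutional layer does, a minimal sketch (mine, not from the article): a bank of small filters slides over the image and produces feature maps.

```python
# A single convolutional layer: 16 filters of size 3x3 over RGB images.
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
images = torch.randn(8, 3, 32, 32)   # batch of 8 RGB 32x32 images
features = conv(images)              # -> (8, 16, 32, 32) feature maps
print(features.shape)
```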
🎥 Tensorflow Math Operations using Constants: Tensorflow Tutorial Series
Welcome to "The AI University".
Subtitles available in: Hindi, English, French
About this video:
This video explains basic mathematical operations, i.e., how to perform mathematical operations in TensorFlow. It also covers Directed Acyclic Graphs, setting up operations in TensorFlow, and how they work.
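A hedged sketch of the topic (my code, not the video's): constants and math ops assembled into a graph. Since the tutorial predates eager-by-default TensorFlow 2, this uses the TF1-style graph/session API via tf.compat.v1:

```python
# Constants become nodes in a directed acyclic graph; nothing is computed
# until the graph is run inside a session.
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

a = tf.constant(3.0)
b = tf.constant(4.0)
c = tf.add(a, b)        # graph node, not yet evaluated
d = tf.multiply(c, b)

with tf.compat.v1.Session() as sess:  # running the session evaluates the DAG
    print(sess.run(d))                # 28.0
```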
One-Shot Learning with Siamese Networks, Contrastive Loss, and Triplet Loss for Face Recognition
https://machinelearningmastery.com/one-shot-learning-with-siamese-networks-contrastive-and-triplet-loss-for-face-recognition/
🔗 One-Shot Learning with Siamese Networks, Contrastive Loss, and Triplet Loss for Face Recognition
One-shot learning is a classification task where one, or a few, examples are used to classify many new examples in the future. This characterizes tasks seen in the field of face recognition, such as face identification and face verification, where people must be classified correctly with different facial expressions, lighting conditions, accessories, and hairstyles given …
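A hedged triplet-loss sketch (mine, not the article's code): pull an anchor embedding toward a positive example of the same identity and push it away from a negative, L = max(0, d(a, p) - d(a, n) + margin).

```python
# Triplet loss over embeddings produced by a shared (Siamese) encoder.
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
triplet = nn.TripletMarginLoss(margin=1.0)

anchor = embed(torch.randn(16, 128))    # stand-ins for face-image features
positive = embed(torch.randn(16, 128))  # same person as the anchor
negative = embed(torch.randn(16, 128))  # a different person

loss = triplet(anchor, positive, negative)
loss.backward()
```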
🎥 ML-7 Reinforcement Learning: from Foundations to State-of-the-Art
Dr. Daniel Urieli
Strategies for Global Optimization
🔗 Strategies for Global Optimization
Local and global optimization is usually known and somewhat ignored once we leave high school calculus. For a quick review, take the cover…
Towards Data Science
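One classic global-optimization strategy, sketched under my own assumptions rather than taken from the article: random restarts of a local optimizer, keeping the best local minimum found across many starting points.

```python
# Random-restart local optimization on the Rastrigin function, a standard
# multimodal test problem with many local minima and a global minimum at 0.
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(0)
best = None
for _ in range(50):
    x0 = rng.uniform(-5.12, 5.12, size=2)       # fresh random starting point
    res = minimize(rastrigin, x0, method="BFGS")  # local optimizer
    if best is None or res.fun < best.fun:
        best = res

print(best.x, best.fun)  # typically near the global minimum at the origin
```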