🔗 Fundamentals of Natural Language Processing for Text
Natural language processing is by now absent only from the most conservative industries. In most technology solutions, recognition and processing of "human" languages has long been...
Habr
🔗 ML.NET 1.0 RC Released: What's New?
ML.NET is a cross-platform, open-source machine learning framework (Windows, Linux, macOS) for .NET developers. With ML.NET, developers can use existing...
Habr
Why you need a bias neuron
🎥 Neural Network, Part 4: The Bias Neuron
👁 31 views ⏳ 345 sec.
I'm sharing my understanding of what a bias neuron is for.
My VK group: https://vk.com/electronics_nn
My Yandex.Money wallet for anyone who wishes...
Vk
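The point the video makes can be shown in a few lines (a minimal sketch; the function names are illustrative, not from the video): without a bias term, a sigmoid neuron's output at zero input is pinned to 0.5 no matter what the weight is, while a bias shifts the activation threshold.

```python
import math

def neuron(x, w, b):
    # A single neuron: weighted input plus bias, passed through a sigmoid.
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Without a bias (b = 0), the pre-activation w*x is 0 at x = 0,
# so the output there is stuck at 0.5 regardless of the weight:
print(neuron(0.0, 5.0, 0.0))   # 0.5
# A bias shifts the decision boundary away from the origin:
print(neuron(0.0, 5.0, -3.0))  # ≈ 0.047
```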
🔗 Mathematical programming — a key habit to build up for advancing in data science
We show a small step towards building the habit of mathematical programming, a key skill in the repertoire of a budding data scientist.
Towards Data Science
🔗 Generating Images with Autoencoders
In the following weeks, I will post a series of tutorials giving comprehensive introductions into unsupervised and self-supervised…
Towards Data Science
🔗 Laurent Picard - Building smarter apps with Machine Learning, from magic to reality
Any sufficiently advanced technology is indistinguishable from magic (Arthur C. Clarke). Well, Machine Learning can look like magic to most of us, but you don...
YouTube
🎥 Applied Machine Learning 2019 - Lecture 20 - Neural Networks
👁 1 view ⏳ 4002 sec.
Introduction to neural networks
Autograd
GPU acceleration
Deep learning frameworks
Vk
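One of the lecture topics, autograd, can be illustrated with a minimal reverse-mode automatic differentiation sketch. This is a toy construction (the class and function names are hypothetical, not any framework's API), but it is the same chain-rule bookkeeping that deep learning frameworks automate:

```python
class Var:
    # A node in the computation graph: value, gradient accumulator,
    # and (parent, local_derivative) pairs for the chain rule.
    def __init__(self, value, parents=()):
        self.value, self.grad, self.parents = value, 0.0, parents

    def backward(self, grad=1.0):
        self.grad += grad
        for parent, local in self.parents:
            parent.backward(grad * local)

def mul(a, b):
    return Var(a.value * b.value, [(a, b.value), (b, a.value)])

def add(a, b):
    return Var(a.value + b.value, [(a, 1.0), (b, 1.0)])

x, y = Var(3.0), Var(4.0)
z = add(mul(x, x), mul(x, y))   # z = x^2 + x*y
z.backward()
print(x.grad)  # dz/dx = 2x + y = 10.0
print(y.grad)  # dz/dy = x = 3.0
```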
🎥 Stanford CS234: Reinforcement Learning | Winter 2019 | Lecture 6 - CNNs and Deep Q Learning
👁 1 view ⏳ 4771 sec.
Professor Emma Brunskill, Stanford University
http://onlinehub.stanford.edu/
Professor Emma Brunskill
Assistant Professor, Computer Science
Stanford AI for Human Impact Lab
Stanford Artificial Intelligence Lab
Statistical Machine Learning Group
To follow along with the course schedule and syllabus, visit: http://web.stanford.edu/class/cs234/index.html
To get the latest news on Stanford's upcoming professional programs in Artificial Intelligence, visit: http://learn.stanford.edu/AI.html
To view a
Vk
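The "Deep Q Learning" half of the lecture builds on the tabular Q-learning update, which fits in a few lines. A minimal sketch of that update rule (illustrative names, not the course's code); a DQN replaces the table with a CNN over raw pixels:

```python
def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    # One Q-learning step: move Q[s][a] toward the bootstrapped target
    # r + gamma * max_a' Q[s_next][a'].
    target = r + gamma * max(Q[s_next])
    Q[s][a] += alpha * (target - Q[s][a])

# Two states, two actions; repeatedly reward action 1 in state 0.
Q = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(50):
    q_update(Q, s=0, a=1, r=1.0, s_next=1)
print(round(Q[0][1], 6))  # 1.0 (the value estimate converges to the reward)
```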
🎥 Machine Learning at Yandex
👁 2 views ⏳ 4885 sec.
Alexander Kraynov, Head of the Machine Intelligence Laboratory at Yandex
Vk
https://arxiv.org/abs/1903.08855
🔗 Linguistic Knowledge and Transferability of Contextual Representations
Contextual word representations derived from large-scale neural language models are successful across a diverse set of NLP tasks, suggesting that they encode useful and transferable features of language. To shed light on the linguistic knowledge they capture, we study the representations produced by several recent pretrained contextualizers (variants of ELMo, the OpenAI transformer language model, and BERT) with a suite of sixteen diverse probing tasks. We find that linear models trained on top of frozen contextual representations are competitive with state-of-the-art task-specific models in many cases, but fail on tasks requiring fine-grained linguistic knowledge (e.g., conjunct identification). To investigate the transferability of contextual word representations, we quantify differences in the transferability of individual layers within contextualizers, especially between recurrent neural networks (RNNs) and transformers. For instance, higher layers of RNNs are more task-specific, while transformer layers do not exhibit the same monotonic trend. In addition, to better understand what makes contextual word representations transferable, we compare language model pretraining with eleven supervised pretraining tasks. For any given task, pretraining on a closely related task yields better performance than language model pretraining (which is better on average) when the pretraining dataset is fixed. However, language model pretraining on more data gives the best results.
arXiv.org
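The probing setup described in the abstract, a linear model trained on top of frozen representations, can be sketched in miniature. This is a toy illustration only: 2-d vectors stand in for real contextual embeddings, the data is fabricated, and all names are hypothetical.

```python
import math

def train_linear_probe(X, y, epochs=500, lr=0.5):
    # Logistic-regression probe over frozen feature vectors X;
    # only the probe's weights are trained, never the featurizer.
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0

# Toy "frozen representations" for two linguistic classes:
X = [[1.0, 0.1], [0.9, 0.0], [0.0, 1.1], [0.1, 0.9]]
y = [1, 1, 0, 0]
w, b = train_linear_probe(X, y)
print([predict(w, b, x) for x in X])  # [1, 1, 0, 0]
```

If the linear probe reaches high accuracy, the information is linearly decodable from the frozen features, which is the paper's operational notion of "captured linguistic knowledge".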
https://arxiv.org/abs/1904.06236
🔗 Multimodal Machine Learning-based Knee Osteoarthritis Progression Prediction from Plain Radiographs and Clinical Data
Knee osteoarthritis (OA) is the most common musculoskeletal disease without a cure, and current treatment options are limited to symptomatic relief. Prediction of OA progression is a very challenging and timely issue, and it could, if resolved, accelerate the disease modifying drug development and ultimately help to prevent millions of total joint replacement surgeries performed annually. Here, we present a multi-modal machine learning-based OA progression prediction model that utilizes raw radiographic data, clinical examination results and previous medical history of the patient. We validated this approach on an independent test set of 3,918 knee images from 2,129 subjects. Our method yielded area under the ROC curve (AUC) of 0.79 (0.78-0.81) and Average Precision (AP) of 0.68 (0.66-0.70). In contrast, a reference approach, based on logistic regression, yielded AUC of 0.75 (0.74-0.77) and AP of 0.62 (0.60-0.64). The proposed method could significantly improve the subject selection process for OA drug-development trials and help the development of personalized therapeutic plans.
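The AUC figures quoted in the abstract have a simple probabilistic reading: the chance that a randomly chosen progressor is scored above a randomly chosen non-progressor. A small illustrative computation using the Mann-Whitney form of ROC AUC (the scores below are made up, not from the paper):

```python
def auc(pos_scores, neg_scores):
    # Fraction of (positive, negative) pairs where the positive case
    # receives the higher score; ties count half.
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical model scores for progressing vs. non-progressing knees:
progressors = [0.9, 0.7, 0.6]
non_progressors = [0.8, 0.3, 0.2]
print(auc(progressors, non_progressors))  # 7/9 ≈ 0.78
```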
🔗 OpenAI GPT-2: An Almost Too Good Text Generator
❤️ Support the show and pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers
📝 The paper "Better Language Models and Their Implications" is available here:
https://openai.com/blog/better-language-models/
GPT-2 Reddit bot:
htt…
YouTube
🎥 Arm A64fx and Post-K: Game-Changing CPU & Supercomputer for HPC
👁 1 view ⏳ 1891 sec.
In this video from the HPC User Forum, Satoshi Matsuoka from RIKEN presents: Arm A64fx and Post-K: Game-Changing CPU & Supercomputer for HPC and its Convergence with Big Data / AI.
"With rapid rise and increase of Big Data and AI as a new breed of high-performance workloads on supercomputers, we need to accommodate them at scale, and thus the need for R&D for HW and SW Infrastructures where traditional simulation-based HPC and BD/AI would converge, in a BYTES-oriented fashion. Post-K is the flagship next
Vk
🎥 Face Recognition with OpenCV and Deep Learning in Python
👁 1 view ⏳ 2446 sec.
#face_recognition #deeplearning #python
In this new session, we are going to learn how to perform face recognition in both images and video streams using OpenCV, Python, and deep learning.
We'll start with a brief discussion of how deep learning-based facial recognition works, including the concept of "deep metric learning". From there, I will help you install the libraries you need to actually perform face recognition. Finally, we'll implement face recognition for both still images and video streams.
To
Vk
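The "deep metric learning" idea mentioned above reduces recognition to comparing embedding vectors. A minimal sketch of that decision rule follows; note the toy 3-d vectors stand in for real high-dimensional face encodings, and the 0.6 tolerance is just an illustrative threshold, not a recommended setting:

```python
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def same_person(enc_a, enc_b, tol=0.6):
    # A metric-learning network maps each face to an embedding; two faces
    # are declared a match when their embeddings are closer than a tolerance.
    return euclidean(enc_a, enc_b) <= tol

known = [0.10, 0.20, 0.30]                      # embedding of an enrolled face
print(same_person(known, [0.10, 0.25, 0.30]))   # True: nearby embedding
print(same_person(known, [0.90, -0.40, 0.80]))  # False: distant embedding
```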
🎥 Introduction to Tensorflow 2.0 | Tensorflow 2.0 Features and Changes | Edureka
👁 1 view ⏳ 831 sec.
***AI and Deep Learning with TensorFlow - https://www.edureka.co/ai-deep-learning-with-tensorflow ***
This video gives you a short, summarized overview of TensorFlow 2.0 alpha: what has changed and how it improves on the previous version.
--------------------------------------------------
About the course:
Edureka's Deep Learning in TensorFlow with Python Certification Training is curated by industry professionals as per industry requirements & demands. You will mas
Vk
🔗 L2 Regularization and Batch Norm
This blog post is about an interesting detail of machine learning that I came across as a researcher at Jane Street - the interaction between L2 re...
Jane Street Blog
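The interaction the post discusses hinges on batch norm's output being invariant to rescaling the preceding weights, so an L2 penalty on those weights cannot change the learned function, only the effective learning rate. A minimal numeric sketch of that invariance (toy data and names, not the post's code):

```python
def batch_norm(z, eps=1e-8):
    # Normalize a batch of pre-activations to zero mean, unit variance.
    m = sum(z) / len(z)
    var = sum((v - m) ** 2 for v in z) / len(z)
    return [(v - m) / (var + eps) ** 0.5 for v in z]

xs = [0.5, -1.2, 2.0, 0.1]                  # one input feature over a batch
small = batch_norm([3.0 * x for x in xs])   # pre-activations with weight w = 3
large = batch_norm([30.0 * x for x in xs])  # same weight scaled up tenfold
# The normalized activations are identical up to floating-point noise,
# so shrinking w via L2 leaves the network's function unchanged:
print(max(abs(a - b) for a, b in zip(small, large)))
```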
Machine Learning for Managers: The Mystery of Sepuling
Working yet again with a company on a machine learning (ML) project, I noticed that the managers use ML terminology without understanding what it means. The words are pronounced grammatically correctly and in the right places in sentences, yet their meaning is no clearer to them than the purpose of sepulkas, which, as everyone knows, are used in sepulkaria for sepuling. Meanwhile, team leads and rank-and-file developers believe they speak the same language as management, which leads to the conflicts that so complicate work on a project. This article is therefore devoted to techniques for facilitating (from the Latin for "simplifying" or "easing") communication between developers and management - that is, how to explain basic ML terms simply and clearly and thereby lead your project to success. If this topic is close to you, welcome under the cut.
https://habr.com/ru/post/447094/
Habr
🔗 Avik-Jain/100-Days-Of-ML-Code
100 Days of ML Coding. Contribute to Avik-Jain/100-Days-Of-ML-Code development by creating an account on GitHub.
GitHub
🎥 [Colloquium] The Mathematics of Big Data: Tensors, Neural Networks, Bayesian Inference - D. P. Vetrov
👁 30715 views ⏳ 4414 sec.