Cutting Edge Deep Learning
πŸ“• Deep learning
πŸ“— Reinforcement learning
πŸ“˜ Machine learning
πŸ“™ Papers - tools - tutorials

πŸ”— Other Social Media Handles:
https://linktr.ee/cedeeplearning
πŸ”ΉUsing LIME to Understand a Machine Learning Model’s #Predictions

Using a local, #Model_Agnostic explainer such as Local Interpretable Model-Agnostic Explanations (LIME) is an important technique for making sense of the predictions of any machine learning model. The technique is powerful and flexible because it focuses on the model's inputs and outputs rather than on the model's internals.
#LIME works by making small tweaks to the input #data and observing the impact on the model's predictions. By #filtering the model's findings into a more digestible explanation, LIME helps humans gauge which predictions to trust and which will be the most valuable for the organization.
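
A minimal sketch of the idea in Python, assuming the open-source lime package and a scikit-learn classifier (both illustrative choices, not taken from the linked post):

# pip install lime scikit-learn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs this single row, queries the model on the perturbations,
# and fits a local linear surrogate whose weights rank the features.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())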
β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”
πŸ“ŒVia: @cedeeplearning
πŸ“ŒOther social media: https://linktr.ee/cedeeplearning

link: https://www.rocketsource.co/blog/machine-learning-models/

#machinelearning
#datascience
#deeplearning
#AI
πŸ”ΉJump-start Training for #Speech_Recognition Models in Different Languages with NVIDIA NeMo

πŸ–ŠBy Oleksii Kuchaiev

Transfer learning is an important machine learning technique that reuses a model's knowledge of one task to make it perform better on another. Fine-tuning is one of the most common ways to perform transfer learning.
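
As a rough illustration, fine-tuning usually means loading pretrained weights, freezing most of them, and training a small task-specific head. A generic PyTorch sketch (not NeMo's own API; the model and sizes are illustrative):

import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)    # knowledge from task A (ImageNet)
for param in model.parameters():
    param.requires_grad = False             # freeze the pretrained weights

model.fc = nn.Linear(model.fc.in_features, 10)  # new head for task B

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)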
β€”β€”β€”β€”β€”β€”β€”
πŸ“ŒVia: @cedeeplearning

https://devblogs.nvidia.com/jump-start-training-for-speech-recognition-models-with-nemo/

#deeplearning #neuralnetworks
#machinelearning #NVIDIA
#AI #datascience #math
#nemo #model #data
πŸ”ΉAnnouncing NVIDIA NeMo: Fast Development of Speech and Language Models

πŸ–ŠBy Raghav Mani

πŸ”»When you reuse models from different sources, their inputs and outputs, coding style, and data processing layers may not be compatible with each other. Worse still, you may be able to wire up these models in your code in such a way that it technically β€œworks” but is in fact semantically wrong. A lot of time, effort, and duplicated code goes into making sure that you are reusing models safely.

πŸ”»The post builds a simple ASR model to show how to use NeMo: how neural types provide semantic safety checks, and how the toolkit scales out to multiple GPUs with minimal effort.
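
To make β€œsemantically wrong wiring” concrete, here is a hypothetical sketch of the neural-type idea (illustrative only, not NeMo's actual API): each module output carries a semantic tag that is checked before modules are connected.

from dataclasses import dataclass

@dataclass(frozen=True)
class NeuralType:
    axes: tuple    # e.g. ("batch", "time")
    element: str   # e.g. "audio_signal", "spectrogram", "logits"

def connect(producer_out: NeuralType, consumer_in: NeuralType) -> None:
    # Tensor shapes alone can match while meanings differ; check semantics too.
    if producer_out != consumer_in:
        raise TypeError(f"incompatible wiring: {producer_out} -> {consumer_in}")

audio = NeuralType(("batch", "time"), "audio_signal")
spec = NeuralType(("batch", "time", "channel"), "spectrogram")

connect(audio, audio)      # OK: semantics agree
try:
    connect(audio, spec)   # runs as plain tensors, but semantically wrong
except TypeError as err:
    print(err)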
β€”β€”β€”β€”β€”β€”β€”
πŸ“ŒVia: @cedeeplearning

https://devblogs.nvidia.com/announcing-nemo-fast-development-of-speech-and-language-models/

#deeplearning #neuralnetworks
#machinelearning #NVIDIA
#AI #datascience #math
#nemo #model #data
⭕️ Top 6 Open Source Pre-trained Models for Text Classification you should use

1. XLNet
2. ERNIE
3. Text-to-Text Transfer Transformer (T5)
4. Binary-Partitioning Transformer (BPT)
5. Neural Attentive Bag-of-Entities Model for Text Classification (NABoE)
6. Rethinking Complex Neural Network Architectures for Document Classification
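
As a minimal usage sketch, a pretrained text classifier can be loaded in a few lines with Hugging Face Transformers (an illustrative choice; the article covers the models above, not this loading code):

# pip install transformers
from transformers import pipeline

# The default checkpoint is a distilled BERT fine-tuned for sentiment;
# a fine-tuned XLNet or T5 checkpoint could be substituted via model=...
classifier = pipeline("sentiment-analysis")
print(classifier("Pretrained models make text classification much easier."))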
β€”β€”β€”β€”β€”β€”β€”β€”
πŸ“ŒVia: @cedeeplearning


https://www.analyticsvidhya.com/blog/2020/03/6-pretrained-models-text-classification/

#classification #machinelearning
#datascience #model #training
#deeplearning #dataset #neuralnetworks
#NLP #math #AI
⭕️ A foolproof way to shrink deep learning models

Researchers unveil a pruning algorithm to make artificial intelligence applications run faster.

πŸ–‹By Kim Martineau

As more artificial intelligence applications move to smartphones, deep learning models are getting smaller to allow apps to run faster and save battery power. Now, MIT researchers have a new and better way to compress models.
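
For context, the simplest way to shrink a model is magnitude pruning: zero out the smallest weights. A generic PyTorch sketch (illustrative only, not the exact procedure from the MIT work):

import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(256, 128)

# Zero out the 50% of weights with the smallest absolute value.
prune.l1_unstructured(layer, name="weight", amount=0.5)
print(float((layer.weight == 0).float().mean()))  # ~0.5 sparsity

# Make the pruning permanent by removing the reparameterization.
prune.remove(layer, "weight")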
β€”β€”β€”β€”β€”β€”β€”β€”
πŸ“ŒVia: @cedeeplearning

http://news.mit.edu/2020/foolproof-way-shrink-deep-learning-models-0430

#deeplearning #AI #model
#MIT #machinelearning
#datascience #neuralnetworks
#algorithm #research