Single Headed Attention RNN: Stop Thinking With Your Head
https://arxiv.org/abs/1911.11423
#ArtificialIntelligence #NeuralComputing #NLP
❇️ @AI_Python_EN
Lit BERT: NLP Transfer Learning In 3 Steps Blog by William Falcon :
https://towardsdatascience.com/lit-bert-nlp-transfer-learning-in-3-steps-272a866570db
#MachineLearning #ArtificialIntelligence #NLP
❇️ @AI_Python_EN
Microsoft: Actor-critic method bests greedy exploration in #reinforcementlearning
http://bit.ly/2sfxt17
#DataScience #MachineLearning #ArtificialIntelligence
❇️ @AI_Python_EN
In #datascience, you must understand context. There have been times at work when looking at the data alone didn't help me solve the problem.
It doesn't matter whether your domain is marketing, healthcare, product, or anything else: you need to understand the context before diving into the data. Without background information about how the data was generated, it becomes very difficult to make accurate assumptions about what your data will show.
Taking the time to understand the context will not only benefit you in your analysis, but you may even help your colleagues tackle the problem better.
When you are informed about the data and problem, you increase your value because now you're in a position to communicate and identify other potential problems.
So do this:
On your next project, take the time to not just do EDA, but also document your understanding of the context behind the data.
This good practice will definitely help you in your career and is a valuable skill you can bring to any team.
Context first, data second.
❇️ @AI_Python_EN
Mohammad Sadegh Rasouli:
Interested in interning at Facebook AI? Our team, LATTE (Language and Translation Technologies), is hiring research interns for summer 2020.
Requirements: PhD student with a strong publication record.
Please send an email to rasooli@facebook.com if interested.
❇️ @AI_Python_EN
Ever wondered how we translate questions and commands into programs a machine can run? Jonathan Berant gives us an overview of (executable) semantic parsing.
#NLP
https://t.co/Mzvks7f9GR
❇️ @AI_Python_EN
Here is a clear explanation of how to combine Transformers and fastai to get great results from your NLP models:
https://towardsdatascience.com/fastai-with-transformers-bert-roberta-xlnet-xlm-distilbert-4f41ee18ecb2
Free 81-page guide on learning #ComputerVision, #DeepLearning, and #OpenCV!
Includes step-by-step instructions on:
- Getting Started
- Face Applications
- Object Detection
- OCR
- Embedded/IoT
- ...and more
https://www.pyimagesearch.com/start-here
It should be really useful: according to this paper (https://arxiv.org/abs/1905.05583), unsupervised fine-tuning, layer-wise learning rates, and one-cycle scheduling are crucial for BERT performance. The authors manage to beat ULMFiT on IMDB with BERT-Base.
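The layer-wise learning-rate idea can be sketched numerically: the top encoder layer gets the full base rate, and each layer below it is scaled down by a decay factor. The values below (base rate, decay factor, 12 layers) are illustrative assumptions, not the paper's actual settings.

```python
# Sketch of layer-wise learning-rate decay for a 12-layer encoder.
# base_lr and decay are hypothetical values for illustration.
base_lr = 2e-5
decay = 0.95
n_layers = 12

# Layer 0 is closest to the input; the top layer keeps the full base LR,
# and every layer beneath it is multiplied by one more factor of `decay`.
layer_lrs = [base_lr * decay ** (n_layers - 1 - i) for i in range(n_layers)]

for i, lr in enumerate(layer_lrs):
    print(f"layer {i:2d}: lr = {lr:.2e}")
```

The intuition is that lower layers encode more general features that should change slowly during fine-tuning, while upper layers adapt faster to the target task.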
A good introduction to #MachineLearning and its 4 approaches:
https://towardsdatascience.com/machine-learning-an-introduction-23b84d51e6d0?gi=10a5fcd4decd
#BigData #DataScience #AI #Algorithms #ReinforcementLearning
❇️ @AI_Python_EN
Want to see how downstream results are affected by LSTM LM training configurations?
Save time/compute: use 125 pretrained LSTM LMs.
https://zenodo.org/record/3556943
❇️ @AI_Python_EN
Yoshua Bengio explains how #DeepLearning has developed in 2019
https://www.youtube.com/watch?v=eKMA1Tscdag
❇️ @AI_Python_EN
DEBATE: Yoshua Bengio vs. Gary Marcus. Pre-readings recommended to the audience before the debate:
This Is The Debate The #AI World Has Been Waiting For
❇️ @AI_Python_EN
Machine Learning Models
☞ https://morioh.com/p/1dc7518426c2
#TensorFlow #machinelearning
❇️ @AI_Python_EN
nbdev: use Jupyter Notebooks for everything
https://www.fast.ai//2019/12/02/nbdev/
https://github.com/fastai/nbdev/
❇️ @AI_Python_EN
Jupyter on Steroids: Create Packages, Tests, and Rich Documents https://t.co/w3K6D0Cgp6
"I really do think [nbdev] is a huge step forward for programming environments": Chris Lattner, inventor of Swift, LLVM, and Swift Playgrounds.
Identifying Hate Speech with BERT and CNN
https://link.medium.com/7FaReCD781
A tool that can help us recognize online abuse and harassment by analyzing text.
💡 What's the difference between bagging and boosting?
Bagging and boosting are both ensemble methods, meaning they combine many weak predictors to create a strong predictor.
One key difference is that bagging builds independent models in parallel and "averages" their results at the end, whereas boosting builds models sequentially, with each step focusing on the observations that previous models missed in order to reduce the remaining error.
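The two combination schemes can be shown with a toy numeric sketch. Here the "weak model" is deliberately trivial (just a sample mean), an assumption made purely for illustration: bagging averages independent models fit on bootstrap resamples, while boosting adds models one at a time, each fit to the residual left by the ensemble so far.

```python
import random

random.seed(0)
y = [1.0, 2.0, 3.0, 4.0, 5.0]  # toy targets; the true mean is 3.0

# Bagging: independent "models" on bootstrap samples, averaged at the end.
bag_models = []
for _ in range(100):
    sample = [random.choice(y) for _ in y]        # bootstrap resample
    bag_models.append(sum(sample) / len(sample))  # weak model = sample mean
bag_pred = sum(bag_models) / len(bag_models)

# Boosting: models added sequentially, each fitting the current residual.
lr = 0.5          # shrinkage / learning rate
boost_pred = 0.0
for _ in range(20):
    residuals = [t - boost_pred for t in y]       # what remains unexplained
    step = sum(residuals) / len(residuals)        # weak model fits residuals
    boost_pred += lr * step

print(bag_pred, boost_pred)  # both approach the target mean 3.0
```

Note the structural difference: the bagging loop iterations are independent of each other (they could run in parallel), while each boosting iteration depends on the prediction accumulated so far.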
❇️ @AI_Python_EN