4 traits and qualities a data scientist must have ...
1) Technical bar: Data science teams work every day in SQL (specifically Postgres) and expect candidates to know Python or have fluency in some statistical language. They also want someone who is really comfortable querying very large datasets.
2) Communication: much of our day-to-day is spent generating insights or building models and communicating the results to stakeholders, whether that's product managers, marketing folks, or finance. It's key that data science candidates have strong communication skills.
3) Grit, tenacity, and willingness to solve hard problems: the problems DS teams tackle are generally hard. My hope is that anyone who joins the data science team is excited about hard problems and about bumping up against hard challenges.
4) Passion for the arts and passion for the mission: this is not the most important trait, but it's great to have.
#datascience
❇️ @AI_Python_EN
The vanishing/exploding gradients problem is a common issue, especially when training large networks, so visualizing gradients is a must when training neural networks. Here is the gradient flow of a small network on the MNIST dataset. A detailed article explaining many aspects of deep learning is on the way.
#machinelearning #deeplearning #artificialintelligence #computervision #neuralnetwork
❇️ @AI_Python_EN
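A quick way to see vanishing gradients for yourself, without any framework: a minimal NumPy sketch (my own toy example, not the MNIST network from the post) that backprops a unit error signal through a stack of sigmoid layers and records the mean absolute gradient reaching each layer.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gradient_flow(depth=20, width=16, seed=0):
    """Return the mean |gradient| arriving at each layer, earliest first."""
    rng = np.random.default_rng(seed)
    Ws = [rng.normal(0, 0.5, (width, width)) for _ in range(depth)]

    # Forward pass, caching each layer's activation.
    a = rng.normal(size=(width,))
    acts = [a]
    for W in Ws:
        a = sigmoid(W @ a)
        acts.append(a)

    # Backward pass: start from a unit error at the output and chain
    # through each layer; sigmoid'(z) = a_out * (1 - a_out).
    grad = np.ones(width)
    norms = []
    for W, a_out in zip(reversed(Ws), reversed(acts[1:])):
        grad = W.T @ (grad * a_out * (1 - a_out))
        norms.append(np.abs(grad).mean())
    return norms[::-1]  # index 0 = earliest layer

norms = gradient_flow()
# The earliest layers receive far smaller gradients than the last ones,
# which is exactly what a gradient-flow plot makes visible.
```

Plotting `norms` per layer (e.g. with matplotlib) reproduces the kind of gradient-flow chart the post refers to.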
FacebookAI: Is the lottery ticket phenomenon a general property of DNNs or merely an artifact of supervised image classification? We show that the lottery ticket phenomenon is a general property, present in both #reinforcementlearning and #NLP.
https://arxiv.org/abs/1906.02768
❇️ @AI_Python_EN
The Mind at Work: Guido van Rossum on how Python makes thinking in code easier
https://blog.dropbox.com/topics/work-culture/-the-mind-at-work--guido-van-rossum-on-how-python-makes-thinking
#Python
❇️ @AI_Python_EN
New state-of-the-art AI optimizer: Rectified Adam (RAdam). Improve your AI accuracy instantly versus Adam, and learn why it works. Blog by Less Wright:
https://medium.com/@lessw/new-state-of-the-art-ai-optimizer-rectified-adam-radam-5d854730807b
#MachineLearning #TensorFlow #Pytorch #DeepLearning
❇️ @AI_Python_EN
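The core trick is small enough to sketch. Below is the variance-rectification term from the RAdam paper (a reimplementation of the published formulas, not code from the blog): while the estimated simple-moving-average length rho_t is at most 4, the adaptive step is considered unreliable and RAdam falls back to an un-adapted momentum step; afterwards the adaptive step is scaled by r_t.

```python
import math

def radam_rectification(t, beta2=0.999):
    """Return RAdam's rectification factor r_t at step t, or None while
    the variance estimate is still too unreliable to use."""
    rho_inf = 2.0 / (1.0 - beta2) - 1.0
    rho_t = rho_inf - 2.0 * t * beta2**t / (1.0 - beta2**t)
    if rho_t <= 4.0:
        return None  # use a plain (SGD-with-momentum style) step instead
    return math.sqrt(
        ((rho_t - 4) * (rho_t - 2) * rho_inf)
        / ((rho_inf - 4) * (rho_inf - 2) * rho_t)
    )
```

Early in training `r_t` is small (heavily damped adaptive steps) and it approaches 1 as `t` grows, which is why RAdam behaves like a built-in warmup.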
#GraphNeuralNetworks for Natural Language Processing
#neuralnetwork #NLP
https://bit.ly/33oprRc
❇️ @AI_Python_EN
As it turns out, Wang Ling was way ahead of the curve re NLP's muppet craze (see slides from LxMLS '16 & Oxford #NLP course '17 below).
https://github.com/oxford-cs-deepnlp-2017/lectures
❇️ @AI_Python_EN
Transformers v2.2 is out, with *4* new models and seq2seq capabilities!
ALBERT is released alongside CamemBERT, implemented by the authors, DistilRoBERTa (twice as fast as RoBERTa-base!) and GPT-2 XL!
Encoder-decoder support with ⭐Model2Model⭐, available at:
https://github.com/huggingface/transformers/releases/tag/v2.2.0
#NLP
❇️ @AI_Python_EN
📢📢📢 Twitter Cortex is creating an NLP Research team. Brand-new #NLP Researcher💫 job posting👇 Please spread the word.
https://careers.twitter.com/en/work-for-twitter/201911/machine-learning-researcher-nlp-cortex-applied-machine-learning.html
❇️ @AI_Python_EN
Single Headed Attention RNN: Stop Thinking With Your Head
https://arxiv.org/abs/1911.11423
#ArtificialIntelligence #NeuralComputing #NLP
❇️ @AI_Python_EN
Lit BERT: NLP Transfer Learning in 3 Steps. Blog by William Falcon:
https://towardsdatascience.com/lit-bert-nlp-transfer-learning-in-3-steps-272a866570db
#MachineLearning #ArtificialIntelligence #NLP
❇️ @AI_Python_EN
Microsoft: Actor critic method bests greedy exploration in #reinforcementlearning
http://bit.ly/2sfxt17
#DataScience #MachineLearning #ArtificialIntelligence
❇️ @AI_Python_EN
In #datascience, you must understand context. There are times at work when looking at the data alone didn't help me solve the problem.
It doesn't matter if your domain is marketing, healthcare, product, etc. You need to understand the context before diving into the data. Without background information about how the data was generated, it becomes really difficult to make accurate assumptions about what your data will show.
Taking the time to understand the context will not only benefit you in your analysis, but you may even help your colleagues tackle the problem better.
When you are informed about the data and problem, you increase your value because now you're in a position to communicate and identify other potential problems.
So do this:
On your next project, take the time to not just do EDA, but also document your understanding of the context behind the data.
This good practice will definitely help you in your career and is a valuable skill you can bring to any team.
Context first, data second.
❇️ @AI_Python_EN
Mohammad Sadegh Rasooli:
Interested in interning at Facebook AI? Our team, LATTE (Language and Translation Technologies), is hiring research interns for summer 2020.
Requirement: PhD student + strong publication record
Please send an email to rasooli@facebook.com if interested.
❇️ @AI_Python_EN
Ever wondered how we translate questions and commands into programs a machine can run? Jonathan Berant gives us an overview of (executable) semantic parsing.
#NLP
https://t.co/Mzvks7f9GR
❇️ @AI_Python_EN
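As a toy illustration of the idea (my own example, not from the lecture): an executable semantic parser maps an utterance to a logical form, i.e. a small program, and a separate executor then runs that program to produce the answer.

```python
# Hypothetical toy grammar: "<number> <operator> <number>" utterances only.
OPS = {"plus": "+", "minus": "-", "times": "*"}
NUMS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}

def parse(utterance):
    """Map an utterance to a logical form (here, a Python expression)."""
    a, op, b = utterance.lower().split()
    return f"({NUMS[a]} {OPS[op]} {NUMS[b]})"

def execute(program):
    """The executor: evaluate the logical form against the 'world'."""
    return eval(program)

# execute(parse("two plus three")) evaluates the program "(2 + 3)".
```

Real semantic parsers learn the utterance-to-program mapping from data and target richer program languages (SQL, lambda calculus, etc.), but the parse-then-execute pipeline is the same.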
Here is a great explanation of how to combine Transformers and fastai to get strong results from your NLP models:
https://towardsdatascience.com/fastai-with-transformers-bert-roberta-xlnet-xlm-distilbert-4f41ee18ecb2
Free 81-page guide on learning #ComputerVision, #DeepLearning, and #OpenCV!
Includes step-by-step instructions on:
- Getting Started
- Face Applications
- Object Detection
- OCR
- Embedded/IoT
- ...and more
https://www.pyimagesearch.com/start-here
It should be really useful: according to this paper https://arxiv.org/abs/1905.05583, unsupervised fine-tuning, layer-wise learning rates, and one-cycle scheduling are crucial for BERT performance. They manage to beat ULMFiT on IMDB with BERT-Base.
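The layer-wise (discriminative) learning-rate idea is simple enough to sketch; the decay factor below is illustrative, not the paper's exact value. Lower layers, which hold more general features, get geometrically smaller learning rates than the top layer.

```python
def layerwise_lrs(base_lr, num_layers, decay=0.95):
    """Discriminative learning rates: layer 0 (lowest/embedding) gets the
    smallest LR, the top layer gets base_lr, decaying geometrically."""
    return [base_lr * decay ** (num_layers - 1 - i) for i in range(num_layers)]

# e.g. 12 transformer layers with a base LR of 2e-5:
lrs = layerwise_lrs(2e-5, 12)
```

In practice each entry would go into its layer's optimizer parameter group, so fine-tuning perturbs the lower, more general layers less than the task-specific top.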