AI, Python, Cognitive Neuroscience
Microsoft has open-sourced scripts and notebooks to pre-train and fine-tune the BERT natural language model on domain-specific texts

Github: https://github.com/microsoft/AzureML-BERT
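
For orientation, here is a minimal sketch of what domain-adaptive masked-LM fine-tuning looks like, using the Hugging Face transformers library rather than the AzureML scripts themselves; the toy domain_texts corpus and all hyperparameters below are placeholder assumptions.

```python
# Minimal masked-LM fine-tuning sketch (Hugging Face transformers, not the
# AzureML scripts). `domain_texts` and all hyperparameters are placeholders.
import torch
from torch.utils.data import DataLoader
from transformers import (BertForMaskedLM, BertTokenizerFast,
                          DataCollatorForLanguageModeling)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Assumed: raw strings from your in-domain corpus.
domain_texts = ["An example domain-specific sentence.",
                "Another in-domain sentence."]
encodings = tokenizer(domain_texts, truncation=True, padding=True,
                      max_length=128, return_tensors="pt")

# The collator randomly masks 15% of tokens and builds the MLM labels.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm_probability=0.15)
loader = DataLoader([{"input_ids": ids} for ids in encodings["input_ids"]],
                    batch_size=8, collate_fn=collator)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for batch in loader:
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```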


#Bert #Microsoft #NLP #dl

✴️ @AI_PYTHON_EN
Self-supervised QA from Facebook AI

Researchers from Facebook AI published a paper exploring unsupervised extractive question answering, followed by training a supervised question answering model on the synthetic data. This approach achieves 56.4 F1 on the SQuAD dataset.

Original paper:
https://research.fb.com/wp-content/uploads/2019/07/Unsupervised-Question-Answering-by-Cloze-Translation.pdf
Code for experiments:
https://github.com/facebookresearch/UnsupervisedQA
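
The core idea can be caricatured in a few lines: blank out a candidate answer span in a passage to obtain a cloze-style "question", and keep the (question, context, answer) triple as synthetic supervision. The helper below is a hypothetical toy, not the released code, and the paper goes further by translating clozes into natural-sounding questions.

```python
# Toy cloze generation (hypothetical helper, not Facebook's implementation).
import re

def make_cloze_examples(passage, answer_candidates):
    """Blank out each candidate answer span to form (cloze, answer) pairs."""
    examples = []
    for sentence in re.split(r"(?<=[.!?])\s+", passage):
        for answer in answer_candidates:
            if answer in sentence:
                cloze = sentence.replace(answer, "[MASK]", 1)
                examples.append({"question": cloze,
                                 "context": passage,
                                 "answer": answer})
    return examples

passage = ("Alan Turing proposed the Turing test in 1950. "
           "He worked at Bletchley Park during the war.")
# Assumed: in practice the answer candidates come from an NER tagger.
print(make_cloze_examples(passage, ["Alan Turing", "1950", "Bletchley Park"]))
```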


#NLP #BERT #FacebookAI #SelfSupervised

❇️ @AI_Python_EN
Simple, Scalable Adaptation for Neural Machine Translation

Fine-tuning pre-trained Neural Machine Translation (NMT) models is the dominant approach for adapting to new languages and domains. However, fine-tuning requires adapting and maintaining a separate model for each target task. Researchers from Google propose a simple yet efficient approach for adaptation in #NMT: injecting tiny task-specific adapter layers into a pre-trained model. These lightweight adapters, just a small fraction of the original model size, adapt the model to multiple individual tasks simultaneously.

Presumably this can be applied not only in #NMT but also in many other #NLP, #NLU and #NLG tasks.

Paper: https://arxiv.org/pdf/1909.08478.pdf
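
For intuition, here is a minimal residual adapter in PyTorch under the usual bottleneck formulation; the placement, dimensions and normalization below are assumptions, so see the paper for the exact recipe.

```python
# Sketch of a bottleneck adapter block (assumed details; see the paper).
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_dim, bottleneck_dim=64):
        super().__init__()
        self.layer_norm = nn.LayerNorm(hidden_dim)
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # project down
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # project back up

    def forward(self, x):
        # Residual form: the frozen pre-trained path is untouched and the
        # adapter only learns a small task-specific correction.
        return x + self.up(torch.relu(self.down(self.layer_norm(x))))

hidden = torch.randn(2, 10, 512)   # (batch, seq_len, hidden_dim)
adapter = Adapter(hidden_dim=512)  # ~0.07M params vs. a full NMT model
print(adapter(hidden).shape)       # torch.Size([2, 10, 512])
```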

#BERT

❇️ @AI_Python_EN
Communication-based Evaluation for Natural Language Generation (#NLG) that dramatically outperforms standard n-gram-based methods.

Have you ever thought that n-gram overlap measures like #BLEU or #ROUGE are not good enough for #NLG evaluation, while human evaluation is too expensive? Researchers from Stanford University think so too. The main shortcoming of #BLEU and #ROUGE is that they fail to take into account the communicative function of language: a speaker's goal is not only to produce well-formed expressions, but also to convey relevant information to a listener.

The researchers propose an approach based on a color reference game. In this game, a speaker and a listener see a set of three colors. The speaker is told that one color is the target and tries to communicate it to the listener using a natural language utterance. A good utterance is more likely to lead the listener to select the target, while a bad one is less likely to do so. In turn, an effective metric should assign high scores to good utterances and low scores to bad ones.

Paper: https://arxiv.org/pdf/1909.07290.pdf
Code: https://github.com/bnewm0609/comm-eval
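
A hypothetical sketch of what such a listener-based score reduces to: rate an utterance by the probability a trained listener model assigns to the true target, rather than by n-gram overlap with a reference. The listener logits below are toy values standing in for a real listener network.

```python
# Toy listener-based utterance score (hypothetical; not the paper's code).
import torch
import torch.nn.functional as F

def listener_score(listener_logits, target_index):
    """Probability mass the listener puts on the target among the 3 colors."""
    probs = F.softmax(listener_logits, dim=-1)
    return probs[target_index].item()

# Logits a listener model might output for the three candidate colors
# after reading the speaker's utterance.
logits = torch.tensor([2.1, 0.3, -1.0])        # listener favors color 0
print(listener_score(logits, target_index=0))  # high score -> good utterance
```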

#NLP #NLU

❇️ @AI_Python_EN
Together with Omid Sarfarzadeh and Maysam Asgari-Chenaghlu, we will have a session on #DeepNLP and its applications to #SearchEngine and #Chatbot at #Google's #DevFest in Istanbul. We will be honored to represent adesso Turkey. Thanks to Tufan K. and the whole adesso Turkey family for providing us with this opportunity. More information below:
#DeepLearning #DeepNLP #NLP #chatbot #SearchEngine #adesso #adessoTurkey

https://devfest.istanbul
https://dfist19.firebaseapp.com/

@AI_Python_EN
Evaluating the Factual Consistency of Abstractive Text Summarization
https://lnkd.in/ewFMX8T

#ArtificialIntelligence #DeepLearning #NLP #NaturalLanguageProcessing

@AI_Python_EN
News classification using classic machine learning tools (TF-IDF) and a modern NLP approach based on transfer learning (ULMFiT), deployed on GCP
Github:
https://github.com/imadelh/NLP-news-classification

Blog:
https://imadelhanafi.com/posts/text_classification_ulmfit/
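
The classic-ML half of the repo boils down to something like the scikit-learn pipeline below; the toy texts and labels are placeholders, and the repo's actual preprocessing and dataset may differ.

```python
# Baseline news classifier sketch: TF-IDF features + linear model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder data; the repo trains on a real news dataset.
texts = ["stocks rally on strong earnings", "team wins the championship",
         "new phone released today", "election results announced"]
labels = ["business", "sports", "tech", "politics"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["quarterly earnings beat expectations"]))
```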

#DeepLearning #MachineLearning #NLP

❇️ @AI_Python_EN
Productionizing #NLP Models

https://bit.ly/2OkdRAD

❇️ @AI_Python_EN
Another nice visual guide by Jay Alammar on how to use BERT for text classification. In particular, he uses DistilBERT to create sentence embeddings, which are then used as input to logistic regression. Code is also provided! Check it out! #deeplearning #machinelearning #NLP
📝 Article:
https://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/
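
Condensed, the recipe looks roughly like this; the sentences and labels are toy placeholders, so treat it as a sketch of the approach rather than the article's exact code.

```python
# DistilBERT [CLS] embeddings as features for a scikit-learn classifier.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import DistilBertModel, DistilBertTokenizerFast

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
model = DistilBertModel.from_pretrained("distilbert-base-uncased")

sentences = ["a gripping, beautiful film", "a dull, lifeless mess"]  # toy data
labels = [1, 0]

with torch.no_grad():
    enc = tokenizer(sentences, padding=True, truncation=True,
                    return_tensors="pt")
    # Hidden state of the first ([CLS]) token as the sentence embedding.
    features = model(**enc).last_hidden_state[:, 0, :].numpy()

clf = LogisticRegression().fit(features, labels)
print(clf.predict(features))
```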

❇️ @AI_Python_EN
FacebookAI: Is the lottery ticket phenomenon a general property of DNNs or merely an artifact of supervised image classification? We show that the lottery ticket phenomenon is a general property, present in both #reinforcementlearning and #NLP.

https://arxiv.org/abs/1906.02768
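
For readers new to the lottery ticket procedure, here is a hypothetical sketch of one train-prune-rewind round (per-tensor magnitude pruning with survivors rewound to their initial values; the keep fraction is illustrative, not the paper's setting).

```python
# One lottery-ticket round (illustrative; not the paper's code): prune the
# smallest-magnitude weights per tensor, rewind survivors to initialization.
import torch
import torch.nn as nn

def prune_and_rewind(trained, init_state, keep_fraction=0.8):
    """Return {name: mask} plus weights rewound to their initial values."""
    masks, rewound = {}, {}
    for name, w in trained.state_dict().items():
        if w.ndim < 2:  # convention: leave biases and norms unpruned
            masks[name] = torch.ones_like(w)
            rewound[name] = init_state[name]
            continue
        keep = int(w.numel() * keep_fraction)
        threshold = w.abs().flatten().kthvalue(w.numel() - keep).values
        masks[name] = (w.abs() > threshold).float()
        rewound[name] = init_state[name] * masks[name]  # the "winning ticket"
    return masks, rewound

net = nn.Linear(10, 4)
init_state = {k: v.clone() for k, v in net.state_dict().items()}
# ... train `net` here, then prune and rewind ...
masks, ticket = prune_and_rewind(net, init_state, keep_fraction=0.5)
print({name: int(mask.sum()) for name, mask in masks.items()})
```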

❇️ @AI_Python_EN
As it turns out, Wang Ling was way ahead of the curve re NLP's muppet craze (see slides from LxMLS '16 & Oxford #NLP course '17 below).


https://github.com/oxford-cs-deepnlp-2017/lectures

❇️ @AI_Python_EN
Transformers v2.2 is out, with *4* new models and seq2seq capabilities!

ALBERT is released alongside CamemBERT (implemented by the authors), DistilRoBERTa (twice as fast as RoBERTa-base!) and GPT-2 XL!

Encoder-decoder support via Model2Model.

Available on:

https://github.com/huggingface/transformers/releases/tag/v2.2.0
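
With a recent version of the library, two of the new checkpoints load through the high-level Auto classes as below (identifiers as published on the model hub; note the v2.2-era API returned plain tuples rather than output objects).

```python
# Loading two of the newly added checkpoints via the Auto* classes.
from transformers import AutoModel, AutoTokenizer

for name in ["albert-base-v1", "distilroberta-base"]:
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name)
    inputs = tokenizer("Transformers v2.2 adds four new models.",
                       return_tensors="pt")
    hidden = model(**inputs).last_hidden_state  # contextual token embeddings
    print(name, tuple(hidden.shape))
```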

#NLP

❇️ @AI_Python_EN
📢📢📢 Twitter Cortex is creating an NLP Research team. Brand new #NLP Researcher💫 job posting👇 Please spread the word.
https://careers.twitter.com/en/work-for-twitter/201911/machine-learning-researcher-nlp-cortex-applied-machine-learning.html

❇️ @AI_Python_EN
Single Headed Attention RNN: Stop Thinking With Your Head

https://arxiv.org/abs/1911.11423

#ArtificialIntelligence #NeuralComputing #NLP


❇️ @AI_Python_EN