AI, Python, Cognitive Neuroscience
Productionizing #NLP Models

https://bit.ly/2OkdRAD

โ‡๏ธ @AI_Python_EN
Another nice visual guide by Jay Alammar about how you can use BERT to do text classification. In particular, he's using DistilBERT to create sentence embeddings, which are then used as the input for a logistic regression. Code is also provided! Check it out! #deeplearning #machinelearning #NLP
📝 Article:
https://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/
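
The pipeline in the article is small enough to sketch. In the sketch below, random vectors stand in for the 768-dimensional DistilBERT [CLS] embeddings (in the real version you would pull them from `DistilBertModel`'s last hidden state); the classifier on top is the same plain scikit-learn logistic regression the article uses. The cluster parameters are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for DistilBERT [CLS] embeddings (768-dim in the article);
# two well-separated clusters play the role of positive/negative sentences.
n, dim = 100, 768
pos = rng.normal(loc=0.5, scale=0.1, size=(n, dim))
neg = rng.normal(loc=-0.5, scale=0.1, size=(n, dim))
X = np.vstack([pos, neg])
y = np.array([1] * n + [0] * n)

# The article trains a plain logistic regression on top of the frozen embeddings.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.score(X, y))  # toy data is separable, so accuracy is 1.0
```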

โ‡๏ธ @AI_Python_EN
FacebookAI: Is the lottery ticket phenomenon a general property of DNNs or merely an artifact of supervised image classification? We show that the lottery ticket phenomenon is a general property, present in both #reinforcementlearning and #NLP.

https://arxiv.org/abs/1906.02768
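
For context, the lottery ticket procedure the paper probes can be sketched in a few lines of numpy: prune the smallest-magnitude weights after training, then rewind the survivors to their initialization. The "training" step below is faked with noise; only the prune-and-rewind mechanics are illustrated.

```python
import numpy as np

rng = np.random.default_rng(42)

# Weights at initialization, and a pretend-trained copy (real training omitted).
w_init = rng.normal(size=(64, 64))
w_trained = w_init + rng.normal(scale=0.1, size=w_init.shape)

# Keep only the largest-magnitude 20% of trained weights.
prune_frac = 0.8
threshold = np.quantile(np.abs(w_trained), prune_frac)
mask = (np.abs(w_trained) >= threshold).astype(w_init.dtype)

# The "winning ticket": the surviving sparse structure, rewound to initialization.
ticket = w_init * mask
print(f"sparsity: {1 - mask.mean():.2f}")
```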

โ‡๏ธ @AI_Python_EN
#GraphNeuralNetworks for Natural Language Processing

#neuralnetwork #NLP

https://bit.ly/33oprRc

โ‡๏ธ @AI_Python_EN
As it turns out, Wang Ling was way ahead of the curve re NLP's muppet craze (see slides from LxMLS '16 & Oxford #NLP course '17 below).


https://github.com/oxford-cs-deepnlp-2017/lectures

โ‡๏ธ @AI_Python_EN
Transformers v2.2 is out, with *4* new models and seq2seq capabilities!

ALBERT is released alongside CamemBERT (implemented by the authors), DistilRoBERTa (twice as fast as RoBERTa-base!), and GPT-2 XL!

Encoder-decoder with
โญModel2Modelโญ

Available at

https://github.com/huggingface/transformers/releases/tag/v2.2.0

#NLP

โ‡๏ธ @AI_Python_EN
📢📢📢 Twitter Cortex is creating an NLP Research team. Brand new #NLP Researcher 💫 job posting 👇 Please spread the word.
https://careers.twitter.com/en/work-for-twitter/201911/machine-learning-researcher-nlp-cortex-applied-machine-learning.html

โ‡๏ธ @AI_Python_EN
Single Headed Attention RNN: Stop Thinking With Your Head

https://arxiv.org/abs/1911.11423

#ArtificialIntelligence #NeuralComputing #NLP


โ‡๏ธ @AI_Python_EN
Ever wondered how we translate questions and commands into programs a machine can run? Jonathan Berant gives us an overview of (executable) semantic parsing.
#NLP

https://t.co/Mzvks7f9GR
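
A toy example of what "executable semantic parsing" means: map an utterance to a program (a logical form), then run it. The grammar and operation table below are invented for illustration; real parsers, as in Berant's overview, learn this mapping from data.

```python
import operator

# Tiny hand-written "grammar": operator words map to executable operations.
OPS = {"plus": operator.add, "minus": operator.sub, "times": operator.mul}

def parse(question: str):
    """Map an utterance to a logical form: 'What is 7 plus 5?' -> ('plus', 7, 5)."""
    tokens = question.lower().rstrip("?").split()
    for i, tok in enumerate(tokens):
        if tok in OPS:
            return (tok, int(tokens[i - 1]), int(tokens[i + 1]))
    raise ValueError("cannot parse: " + question)

def execute(program):
    """Run the logical form against the operation table."""
    op, a, b = program
    return OPS[op](a, b)

print(execute(parse("What is 7 plus 5?")))  # 12
```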

โ‡๏ธ @AI_Python_EN
Very interesting use of #AI to tackle bias in written text by automatically substituting words with more neutral wording. However, one must also consider the challenges and ramifications such technology could have for written language: not only can it accidentally change the meaning of what was written, it can also alter the tone and expression of the author, neutralizing the point of view and removing emotion from the language.
#NLP
https://arxiv.org/pdf/1911.09709.pdf
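
A minimal sketch of the substitution idea, assuming a hand-written subjective-to-neutral lexicon (the paper learns its edits from data; all word pairs here are invented). It also illustrates exactly the risk the post raises: the swap flattens the author's stance along with the bias.

```python
import re

# Hypothetical subjective -> neutral lexicon (the paper learns these edits).
NEUTRAL = {
    "stunning": "notable",
    "disastrous": "poor",
    "heroic": "prominent",
}

def neutralize(text: str) -> str:
    """Replace subjective words with neutral ones, preserving capitalization."""
    def repl(m):
        word = m.group(0)
        sub = NEUTRAL.get(word.lower(), word)
        return sub.capitalize() if word[0].isupper() else sub
    return re.sub(r"[A-Za-z]+", repl, text)

print(neutralize("A stunning victory after a disastrous campaign."))
# -> "A notable victory after a poor campaign."
```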

โ‡๏ธ @AI_Python_EN
🔥 As you know, ML has proven its importance in many fields, like computer vision, NLP, reinforcement learning, adversarial learning, etc. Unfortunately, there is little work making machine learning accessible to Arabic-speaking people. The Arabic language has many complicated features compared to other languages. First, Arabic is written right to left. Second, it contains many letters that most foreigners cannot pronounce, like ض، غ، ح، خ، ظ. Moreover, Arabic contains special characters called diacritics, which help readers pronounce words correctly. For instance, the statement السَّلامُ عَلَيْكُمْ وَرَحْمَةُ اللَّهِ وَبَرَكَاتُهُ contains special characters after most of the letters. The diacritics follow special rules determining which mark is given to a certain character; these rules constitute a complete field called النَّحْوُ الْعَرَبِيُّ (Arabic grammar). Compared to English, the letters of an Arabic word are mostly connected, as in اللغة, because writing them disconnected, ا ل ل غ ة, makes the word difficult to read. ArbML helps fix this by implementing many open-source projects that support Arabic, ML, and NLP.

https://github.com/zaidalyafeai/ARBML

#machinelearning #deeplearning #artificialintelligence #nlp
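
One concrete example of the diacritics issue: Arabic diacritics (tashkeel) are Unicode combining marks, so stripping them for text normalization, a routine preprocessing step in Arabic NLP and the kind of task projects like ArbML cover, takes only a few lines.

```python
import unicodedata

def strip_diacritics(text: str) -> str:
    """Remove Arabic diacritics, which are Unicode combining marks."""
    return "".join(ch for ch in text if not unicodedata.combining(ch))

greeting = "السَّلامُ عَلَيْكُمْ"
print(strip_diacritics(greeting))  # السلام عليكم
```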

โ‡๏ธ @AI_Python_EN
Google Research • Representation Learning for Information Extraction from Templatic Documents, such as receipts, bills, and insurance quotes. We propose a novel approach using representation learning to tackle the problem of extracting structured information from form-like document images.

Blogpost

https://ai.googleblog.com/2020/06/extracting-structured-data-from.html?m=1

Paper

https://research.google/pubs/pub49122/
We propose an extraction system that uses knowledge of the types of the target fields to generate extraction candidates, and a neural network architecture that learns a dense representation of each candidate based on neighboring words in the document. These learned representations are not only useful in solving the extraction task for unseen document templates from two different domains, but are also interpretable, as we show using loss cases. #machinelearning #deeplearning #datascience #dataengineer #nlp
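
A toy version of the two-stage recipe described above, with a hand-written keyword scorer standing in for the learned neural representation: generate candidates from the target field's type (here, dates via regex), then score each candidate by the words around it. All strings, field names, and parameters below are invented for illustration.

```python
import re

TEXT = "Invoice #4521  Issue date: 03/14/2020  Ship via: air  Due: 04/13/2020"
FIELD_KEYWORDS = {"invoice_date": {"issue", "date"}}

def date_candidates(text):
    """Stage 1: the field's type (date) drives candidate generation."""
    return list(re.finditer(r"\d{2}/\d{2}/\d{4}", text))

def score(m, text, keywords, window=20):
    """Stage 2: score a candidate by keyword overlap in its left neighborhood
    (the paper learns a dense neighborhood embedding instead)."""
    left = text[max(0, m.start() - window):m.start()].lower()
    return sum(1 for kw in keywords if kw in left)

cands = date_candidates(TEXT)
best = max(cands, key=lambda m: score(m, TEXT, FIELD_KEYWORDS["invoice_date"]))
print(best.group(0))  # 03/14/2020 (its neighborhood contains "Issue date:")
```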